/*
 * Copyright © 2008,2010 Intel Corporation
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
 * copy of this software and associated documentation files (the "Software"),
 * to deal in the Software without restriction, including without limitation
 * the rights to use, copy, modify, merge, publish, distribute, sublicense,
 * and/or sell copies of the Software, and to permit persons to whom the
 * Software is furnished to do so, subject to the following conditions:
 *
 * The above copyright notice and this permission notice (including the next
 * paragraph) shall be included in all copies or substantial portions of the
 * Software.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
 * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
 * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
 * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
 * IN THE SOFTWARE.
 *
 * Authors:
 *    Eric Anholt <eric@anholt.net>
 *    Chris Wilson <chris@chris-wilson.co.uk>
 *
 */

#include <linux/dma_remapping.h>
#include <linux/reservation.h>
#include <linux/sync_file.h>
#include <linux/uaccess.h>

#include <drm/drmP.h>
#include <drm/i915_drm.h>

#include "i915_drv.h"
#include "i915_gem_clflush.h"
#include "i915_trace.h"
#include "intel_drv.h"
#include "intel_frontbuffer.h"

#define DBG_USE_CPU_RELOC 0 /* -1 force GTT relocs; 1 force CPU relocs */

#define  __EXEC_OBJECT_HAS_PIN		(1<<31)
#define  __EXEC_OBJECT_HAS_FENCE	(1<<30)
#define  __EXEC_OBJECT_NEEDS_MAP	(1<<29)
#define  __EXEC_OBJECT_NEEDS_BIAS	(1<<28)
#define  __EXEC_OBJECT_INTERNAL_FLAGS	(0xf<<28) /* all of the above */

/*
 * Userspace (notably SNA) repacks batch buffers on the fly and may
 * legitimately pass small negative relocation deltas, which wrap to huge
 * offsets if the batch is placed at a low GTT address and then hang the
 * GPU. Binding batches at or above this bias keeps such deltas from
 * wrapping; actual hangs have only been observed on gen7, but the bias is
 * applied everywhere out of paranoia. See eb_get_batch().
 */
#define BATCH_OFFSET_BIAS (256*1024)
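
/* Parameters assembled by the execbuffer2 ioctl for submitting one batch. */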
struct i915_execbuffer_params {
	struct drm_device		*dev;
	struct drm_file			*file;
	struct i915_vma			*batch;
	u32				dispatch_flags;
	u32				args_batch_start_offset;
	struct intel_engine_cs		*engine;
	struct i915_gem_context		*ctx;
	struct drm_i915_gem_request	*request;
};
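
/*
 * Execbuffer bookkeeping. One allocation serves two lookup strategies:
 * eb->and < 0 selects a direct index->vma table (lut[], -and entries),
 * while eb->and >= 0 selects a handle hash table (buckets[], with eb->and
 * as the power-of-two-minus-one mask applied to handles on insertion).
 */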
struct eb_vmas {
	struct drm_i915_private *i915;
	struct list_head vmas;
	int and;
	union {
		struct i915_vma *lut[0];
		struct hlist_head buckets[0];
	};
};
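
/*
 * With I915_EXEC_HANDLE_LUT userspace promises that handles equal the
 * execbuffer array indices, so we first try a flat index->vma table.
 * Otherwise (or if that allocation fails) we fall back to a hash table,
 * starting at half a page of buckets and halving while there are still
 * more than twice as many buckets as buffers.
 */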
static struct eb_vmas *
eb_create(struct drm_i915_private *i915,
	  struct drm_i915_gem_execbuffer2 *args)
{
	struct eb_vmas *eb = NULL;

	if (args->flags & I915_EXEC_HANDLE_LUT) {
		unsigned size = args->buffer_count;
		size *= sizeof(struct i915_vma *);
		size += sizeof(struct eb_vmas);
		eb = kmalloc(size, GFP_TEMPORARY | __GFP_NOWARN | __GFP_NORETRY);
	}

	if (eb == NULL) {
		unsigned size = args->buffer_count;
		unsigned count = PAGE_SIZE / sizeof(struct hlist_head) / 2;
		BUILD_BUG_ON_NOT_POWER_OF_2(PAGE_SIZE / sizeof(struct hlist_head));
		while (count > 2*size)
			count >>= 1;
		eb = kzalloc(count*sizeof(struct hlist_head) +
			     sizeof(struct eb_vmas),
			     GFP_TEMPORARY);
		if (eb == NULL)
			return eb;

		eb->and = count - 1;
	} else
		eb->and = -args->buffer_count;

	eb->i915 = i915;
	INIT_LIST_HEAD(&eb->vmas);
	return eb;
}
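
/*
 * Only the hash-table variant needs clearing between uses; in the flat-LUT
 * case (eb->and < 0) every slot is unconditionally rewritten by
 * eb_lookup_vmas().
 */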
static void
eb_reset(struct eb_vmas *eb)
{
	if (eb->and >= 0)
		memset(eb->buckets, 0, (eb->and+1)*sizeof(struct hlist_head));
}
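
/*
 * The batch is supplied as the last object in the execbuffer array, so its
 * vma sits at the tail of the eb->vmas list.
 */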
static struct i915_vma *
eb_get_batch(struct eb_vmas *eb)
{
	struct i915_vma *vma = list_entry(eb->vmas.prev, typeof(*vma), exec_list);

	/*
	 * SNA is doing fancy tricks with compressing batch buffers, which leads
	 * to negative relocation deltas. Usually that works out ok since the
	 * relocate address is still positive, except when the batch is placed
	 * very low in the GTT. Ensure this doesn't happen.
	 *
	 * Note that actual hangs have only been observed on gen7, but for
	 * paranoia do it everywhere.
	 */
	if ((vma->exec_entry->flags & EXEC_OBJECT_PINNED) == 0)
		vma->exec_entry->flags |= __EXEC_OBJECT_NEEDS_BIAS;

	return vma;
}
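
/*
 * Resolve each execbuffer handle to a vma in two phases: first grab object
 * references under file->table_lock, then drop the lock so the vma
 * lookup/creation does not have to allocate with GFP_ATOMIC.
 */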
static int
eb_lookup_vmas(struct eb_vmas *eb,
	       struct drm_i915_gem_exec_object2 *exec,
	       const struct drm_i915_gem_execbuffer2 *args,
	       struct i915_address_space *vm,
	       struct drm_file *file)
{
	struct drm_i915_gem_object *obj;
	struct list_head objects;
	int i, ret;

	INIT_LIST_HEAD(&objects);
	spin_lock(&file->table_lock);
	/* Grab a reference to the object and release the lock so we can lookup
	 * or create the VMA without using GFP_ATOMIC */
	for (i = 0; i < args->buffer_count; i++) {
		obj = to_intel_bo(idr_find(&file->object_idr, exec[i].handle));
		if (obj == NULL) {
			spin_unlock(&file->table_lock);
			DRM_DEBUG("Invalid object handle %d at index %d\n",
				   exec[i].handle, i);
			ret = -ENOENT;
			goto err;
		}

		if (!list_empty(&obj->obj_exec_link)) {
			spin_unlock(&file->table_lock);
			DRM_DEBUG("Object %p [handle %d, index %d] appears more than once in object list\n",
				   obj, exec[i].handle, i);
			ret = -EINVAL;
			goto err;
		}

		i915_gem_object_get(obj);
		list_add_tail(&obj->obj_exec_link, &objects);
	}
	spin_unlock(&file->table_lock);
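
	/* Second pass: resolve each object to a vma, now outside table_lock. */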
	i = 0;
	while (!list_empty(&objects)) {
		struct i915_vma *vma;

		obj = list_first_entry(&objects,
				       struct drm_i915_gem_object,
				       obj_exec_link);

		/*
		 * NOTE: We can leak any vmas created here when something fails
		 * later on. But that's no issue since vma_unbind can deal with
		 * vmas which are not actually bound. And since only
		 * lookup_or_create exists as an interface to get at the vma
		 * from the (obj, vm) we don't run the risk of creating
		 * duplicated vmas for the same vm.
		 */
		vma = i915_vma_instance(obj, vm, NULL);
		if (unlikely(IS_ERR(vma))) {
			DRM_DEBUG("Failed to lookup VMA\n");
			ret = PTR_ERR(vma);
			goto err;
		}

		/* Transfer ownership from the objects list to the vmas list. */
		list_add_tail(&vma->exec_list, &eb->vmas);
		list_del_init(&obj->obj_exec_link);
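
		/*
		 * Record the matching exec entry and make the vma findable by
		 * handle: the flat table indexes by position, the hash table
		 * chains on the masked handle.
		 */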
		vma->exec_entry = &exec[i];
		if (eb->and < 0) {
			eb->lut[i] = vma;
		} else {
			uint32_t handle = args->flags & I915_EXEC_HANDLE_LUT ? i : exec[i].handle;

			vma->exec_handle = handle;
			hlist_add_head(&vma->exec_node,
				       &eb->buckets[handle & eb->and]);
		}
WARNING: This patch makes an already hotly profiled path slower. The cost is
unavoidable. In reply to this mail, I will attach the extra data.
v2: Release table lock early, and two a 2 phase vma lookup to avoid
having to use a GFP_ATOMIC. (Chris)
v3: s/obj_exec_list/obj_exec_link/
Updates to address
commit 6d2b888569d366beb4be72cacfde41adee2c25e1
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Aug 7 18:30:54 2013 +0100
drm/i915: List objects allocated from stolen memory in debugfs
v4: Use obj = vma->obj for neatness in some places (Chris)
need_reloc_mappable() should return false if ppgtt (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Split out prep patches. Also remove a FIXME comment which is
now taken care of.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2013-08-14 09:38:36 +00:00
|
|
|
++i;
|
2013-01-08 10:53:14 +00:00
|
|
|
}
|
|
|
|
|
2013-12-04 09:52:58 +00:00
|
|
|
return 0;
|
drm/i915: Convert execbuf code to use vmas
In order to transition more of our code over to using a VMA instead of
an <OBJ, VM> pair - we must have the vma accessible at execbuf time. Up
until now, we've only had a VMA when actually binding an object.
The previous patch helped handle the distinction on bound vs. unbound.
This patch will help us catch leaks, and other issues before we actually
shuffle a bunch of stuff around.
This attempts to convert all the execbuf code to speak in vmas. Since
the execbuf code is very self contained it was a nice isolated
conversion.
The meat of the code is about turning eb_objects into eb_vma, and then
wiring up the rest of the code to use vmas instead of obj, vm pairs.
Unfortunately, to do this, we must move the exec_list link from the obj
structure. This list is reused in the eviction code, so we must also
modify the eviction code to make this work.
WARNING: This patch makes an already hotly profiled path slower. The cost is
unavoidable. In reply to this mail, I will attach the extra data.
v2: Release table lock early, and two a 2 phase vma lookup to avoid
having to use a GFP_ATOMIC. (Chris)
v3: s/obj_exec_list/obj_exec_link/
Updates to address
commit 6d2b888569d366beb4be72cacfde41adee2c25e1
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Aug 7 18:30:54 2013 +0100
drm/i915: List objects allocated from stolen memory in debugfs
v4: Use obj = vma->obj for neatness in some places (Chris)
need_reloc_mappable() should return false if ppgtt (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Split out prep patches. Also remove a FIXME comment which is
now taken care of.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2013-08-14 09:38:36 +00:00
|
|
|
|
|
|
|
|
2013-12-04 09:52:58 +00:00
|
|
|
err:
|
drm/i915: Convert execbuf code to use vmas
In order to transition more of our code over to using a VMA instead of
an <OBJ, VM> pair - we must have the vma accessible at execbuf time. Up
until now, we've only had a VMA when actually binding an object.
The previous patch helped handle the distinction on bound vs. unbound.
This patch will help us catch leaks, and other issues before we actually
shuffle a bunch of stuff around.
This attempts to convert all the execbuf code to speak in vmas. Since
the execbuf code is very self contained it was a nice isolated
conversion.
The meat of the code is about turning eb_objects into eb_vma, and then
wiring up the rest of the code to use vmas instead of obj, vm pairs.
Unfortunately, to do this, we must move the exec_list link from the obj
structure. This list is reused in the eviction code, so we must also
modify the eviction code to make this work.
WARNING: This patch makes an already hotly profiled path slower. The cost is
unavoidable. In reply to this mail, I will attach the extra data.
v2: Release table lock early, and two a 2 phase vma lookup to avoid
having to use a GFP_ATOMIC. (Chris)
v3: s/obj_exec_list/obj_exec_link/
Updates to address
commit 6d2b888569d366beb4be72cacfde41adee2c25e1
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Aug 7 18:30:54 2013 +0100
drm/i915: List objects allocated from stolen memory in debugfs
v4: Use obj = vma->obj for neatness in some places (Chris)
need_reloc_mappable() should return false if ppgtt (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Split out prep patches. Also remove a FIXME comment which is
now taken care of.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2013-08-14 09:38:36 +00:00
|
|
|
while (!list_empty(&objects)) {
|
|
|
|
obj = list_first_entry(&objects,
|
|
|
|
struct drm_i915_gem_object,
|
|
|
|
obj_exec_link);
|
|
|
|
list_del_init(&obj->obj_exec_link);
|
2016-07-20 12:31:53 +00:00
|
|
|
i915_gem_object_put(obj);
|
drm/i915: Convert execbuf code to use vmas
In order to transition more of our code over to using a VMA instead of
an <OBJ, VM> pair - we must have the vma accessible at execbuf time. Up
until now, we've only had a VMA when actually binding an object.
The previous patch helped handle the distinction on bound vs. unbound.
This patch will help us catch leaks, and other issues before we actually
shuffle a bunch of stuff around.
This attempts to convert all the execbuf code to speak in vmas. Since
the execbuf code is very self contained it was a nice isolated
conversion.
The meat of the code is about turning eb_objects into eb_vma, and then
wiring up the rest of the code to use vmas instead of obj, vm pairs.
Unfortunately, to do this, we must move the exec_list link from the obj
structure. This list is reused in the eviction code, so we must also
modify the eviction code to make this work.
WARNING: This patch makes an already hotly profiled path slower. The cost is
unavoidable. In reply to this mail, I will attach the extra data.
v2: Release table lock early, and two a 2 phase vma lookup to avoid
having to use a GFP_ATOMIC. (Chris)
v3: s/obj_exec_list/obj_exec_link/
Updates to address
commit 6d2b888569d366beb4be72cacfde41adee2c25e1
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Aug 7 18:30:54 2013 +0100
drm/i915: List objects allocated from stolen memory in debugfs
v4: Use obj = vma->obj for neatness in some places (Chris)
need_reloc_mappable() should return false if ppgtt (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Split out prep patches. Also remove a FIXME comment which is
now taken care of.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2013-08-14 09:38:36 +00:00
|
|
|
}
|
2013-12-04 09:52:58 +00:00
|
|
|
/*
|
|
|
|
* Objects already transfered to the vmas list will be unreferenced by
|
|
|
|
* eb_destroy.
|
|
|
|
*/
|
|
|
|
|
drm/i915: Convert execbuf code to use vmas
In order to transition more of our code over to using a VMA instead of
an <OBJ, VM> pair - we must have the vma accessible at execbuf time. Up
until now, we've only had a VMA when actually binding an object.
The previous patch helped handle the distinction on bound vs. unbound.
This patch will help us catch leaks, and other issues before we actually
shuffle a bunch of stuff around.
This attempts to convert all the execbuf code to speak in vmas. Since
the execbuf code is very self contained it was a nice isolated
conversion.
The meat of the code is about turning eb_objects into eb_vma, and then
wiring up the rest of the code to use vmas instead of obj, vm pairs.
Unfortunately, to do this, we must move the exec_list link from the obj
structure. This list is reused in the eviction code, so we must also
modify the eviction code to make this work.
WARNING: This patch makes an already hotly profiled path slower. The cost is
unavoidable. In reply to this mail, I will attach the extra data.
v2: Release table lock early, and two a 2 phase vma lookup to avoid
having to use a GFP_ATOMIC. (Chris)
v3: s/obj_exec_list/obj_exec_link/
Updates to address
commit 6d2b888569d366beb4be72cacfde41adee2c25e1
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Aug 7 18:30:54 2013 +0100
drm/i915: List objects allocated from stolen memory in debugfs
v4: Use obj = vma->obj for neatness in some places (Chris)
need_reloc_mappable() should return false if ppgtt (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Split out prep patches. Also remove a FIXME comment which is
now taken care of.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2013-08-14 09:38:36 +00:00
|
|
|
return ret;
|
2013-01-08 10:53:14 +00:00
|
|
|
}
|
|
|
|
|
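The two-phase lookup from v2 of the commit message above can be modelled
outside the kernel. Below is a minimal userspace sketch (all names
hypothetical, a pthread mutex standing in for the handle-table spinlock):
phase 1 resolves handles to object pointers under the lock without
allocating; phase 2 allocates with no lock held, so it never needs a
GFP_ATOMIC-style allocation.

/* Illustrative userspace sketch only; not driver code. */
#include <pthread.h>
#include <stdlib.h>

struct object { int handle; };
struct vma { struct object *obj; };
struct entry { struct object *obj; struct vma *vma; };

static pthread_mutex_t table_lock = PTHREAD_MUTEX_INITIALIZER;
static struct object table[4] = { {0}, {1}, {2}, {3} };

static struct object *lookup_handle(int handle)
{
	return (handle >= 0 && handle < 4) ? &table[handle] : NULL;
}

static int lookup_vmas(const int *handles, struct entry *entries, int count)
{
	int i;

	/* Phase 1: pointer lookups only, under the lock. */
	pthread_mutex_lock(&table_lock);
	for (i = 0; i < count; i++) {
		entries[i].obj = lookup_handle(handles[i]);
		if (!entries[i].obj) {
			pthread_mutex_unlock(&table_lock);
			return -1; /* -ENOENT in the kernel */
		}
	}
	pthread_mutex_unlock(&table_lock);

	/* Phase 2: allocations may sleep; the lock is already dropped. */
	for (i = 0; i < count; i++) {
		entries[i].vma = calloc(1, sizeof(*entries[i].vma));
		if (!entries[i].vma)
			return -1; /* -ENOMEM */
		entries[i].vma->obj = entries[i].obj;
	}
	return 0;
}

int main(void)
{
	int handles[2] = { 1, 3 };
	struct entry entries[2];

	return lookup_vmas(handles, entries, 2);
}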

static struct i915_vma *eb_get_vma(struct eb_vmas *eb, unsigned long handle)
{
	if (eb->and < 0) {
		if (handle >= -eb->and)
			return NULL;
		return eb->lut[handle];
	} else {
		struct hlist_head *head;
		struct i915_vma *vma;

		head = &eb->buckets[handle & eb->and];
		hlist_for_each_entry(vma, head, exec_node) {
			if (vma->exec_handle == handle)
				return vma;
		}
		return NULL;
	}
}
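eb_get_vma() above switches on the sign of eb->and: a negative value
means a direct lookup table indexed by handle (the I915_EXEC_HANDLE_LUT
case, where handles are densely numbered), while a non-negative value is
the mask of a power-of-two hash-bucket array. A standalone model of the
same switch, simplified to one entry per bucket:

/* Standalone sketch; names and single-slot buckets are local simplifications. */
#include <stdio.h>

#define NBUCKETS 4 /* must be a power of two */

struct demo_eb {
	int and;                 /* <0: negated lut size; >=0: bucket mask */
	void *lut[8];
	void *buckets[NBUCKETS]; /* the real code chains an hlist here */
};

static void *demo_get(struct demo_eb *eb, unsigned long handle)
{
	if (eb->and < 0) {
		if (handle >= (unsigned long)-eb->and)
			return NULL;
		return eb->lut[handle];
	}
	return eb->buckets[handle & eb->and];
}

int main(void)
{
	struct demo_eb lut_mode = { .and = -8 };
	struct demo_eb hash_mode = { .and = NBUCKETS - 1 };
	int a, b;

	lut_mode.lut[3] = &a;
	hash_mode.buckets[7 & hash_mode.and] = &b;

	printf("lut[3]  -> %p (expect %p)\n", demo_get(&lut_mode, 3), (void *)&a);
	printf("lut[9]  -> %p (expect nil)\n", demo_get(&lut_mode, 9));
	printf("hash[7] -> %p (expect %p)\n", demo_get(&hash_mode, 7), (void *)&b);
	return 0;
}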

static void
i915_gem_execbuffer_unreserve_vma(struct i915_vma *vma)
{
	struct drm_i915_gem_exec_object2 *entry;

	if (!drm_mm_node_allocated(&vma->node))
		return;

	entry = vma->exec_entry;

	if (entry->flags & __EXEC_OBJECT_HAS_FENCE)
		i915_vma_unpin_fence(vma);

	if (entry->flags & __EXEC_OBJECT_HAS_PIN)
		__i915_vma_unpin(vma);

	entry->flags &= ~(__EXEC_OBJECT_HAS_FENCE | __EXEC_OBJECT_HAS_PIN);
}

static void eb_destroy(struct eb_vmas *eb)
{
	while (!list_empty(&eb->vmas)) {
		struct i915_vma *vma;

		vma = list_first_entry(&eb->vmas,
				       struct i915_vma,
				       exec_list);
		list_del_init(&vma->exec_list);
		i915_gem_execbuffer_unreserve_vma(vma);
		vma->exec_entry = NULL;
		i915_vma_put(vma);
	}
	kfree(eb);
}
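i915_gem_execbuffer_unreserve_vma() releases only what reservation
recorded in entry->flags and then clears those bits, so a second call is
a no-op. A minimal standalone sketch of that flag-driven, idempotent
teardown (flag names hypothetical):

/* Standalone sketch; not driver code. */
#include <stdio.h>

#define HAS_FENCE (1u << 0)
#define HAS_PIN   (1u << 1)

struct res { unsigned int flags; };

static void unreserve(struct res *r)
{
	if (r->flags & HAS_FENCE)
		printf("release fence\n");
	if (r->flags & HAS_PIN)
		printf("unpin\n");
	r->flags &= ~(HAS_FENCE | HAS_PIN); /* makes a repeat call harmless */
}

int main(void)
{
	struct res r = { .flags = HAS_FENCE | HAS_PIN };

	unreserve(&r); /* releases both */
	unreserve(&r); /* idempotent: nothing left to do */
	return 0;
}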

static inline int use_cpu_reloc(struct drm_i915_gem_object *obj)
{
	if (!i915_gem_object_has_struct_page(obj))
		return false;

	if (DBG_USE_CPU_RELOC)
		return DBG_USE_CPU_RELOC > 0;

	return (HAS_LLC(to_i915(obj->base.dev)) ||
		obj->base.write_domain == I915_GEM_DOMAIN_CPU ||
		obj->cache_level != I915_CACHE_NONE);
}

/* Used to convert any address to canonical form.
 * Starting from gen8, some commands (e.g. STATE_BASE_ADDRESS,
 * MI_LOAD_REGISTER_MEM and others, see Broadwell PRM Vol2a) require the
 * addresses to be in a canonical form:
 * "GraphicsAddress[63:48] are ignored by the HW and assumed to be in correct
 * canonical form [63:48] == [47]."
 */
#define GEN8_HIGH_ADDRESS_BIT 47
static inline uint64_t gen8_canonical_addr(uint64_t address)
{
	return sign_extend64(address, GEN8_HIGH_ADDRESS_BIT);
}

static inline uint64_t gen8_noncanonical_addr(uint64_t address)
{
	return address & ((1ULL << (GEN8_HIGH_ADDRESS_BIT + 1)) - 1);
}
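Concretely, canonical form means bits [63:48] replicate bit 47. A
standalone check of both helpers (sign_extend64 re-implemented locally to
mirror the kernel helper of the same name):

/* Standalone worked example; compiles and runs outside the kernel. */
#include <stdint.h>
#include <stdio.h>

#define HIGH_BIT 47

static uint64_t sign_extend64(uint64_t value, int index)
{
	int shift = 63 - index;
	return (uint64_t)((int64_t)(value << shift) >> shift);
}

static uint64_t canonical(uint64_t a)    { return sign_extend64(a, HIGH_BIT); }
static uint64_t noncanonical(uint64_t a) { return a & ((1ULL << (HIGH_BIT + 1)) - 1); }

int main(void)
{
	/* bit 47 clear: address is unchanged */
	printf("%016llx\n", (unsigned long long)canonical(0x00007fffdeadbeefULL));
	/* bit 47 set: bits [63:48] become all ones */
	printf("%016llx\n", (unsigned long long)canonical(0x0000800000000000ULL));
	/* round trip back to the 48-bit form */
	printf("%016llx\n", (unsigned long long)noncanonical(0xffff800000000000ULL));
	return 0;
}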
static inline uint64_t
relocation_target(const struct drm_i915_gem_relocation_entry *reloc,
		  uint64_t target_offset)
{
	return gen8_canonical_addr((int)reloc->delta + target_offset);
}
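The (int) cast is load-bearing: reloc->delta is a 32-bit field, and
sign-extending it before the 64-bit add lets userspace express a
negative offset from the target, instead of the unsigned wrap adding
almost 4GiB. A standalone demonstration:

/* Standalone demonstration; not driver code. */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t delta = (uint32_t)-16; /* userspace passed delta = -16 */
	uint64_t target = 0x100000;

	/* unsigned add: wraps upward by nearly 4GiB */
	printf("%#llx\n", (unsigned long long)(delta + target));
	/* sign-extended add, as relocation_target() does */
	printf("%#llx\n", (unsigned long long)((int)delta + target));
	return 0;
}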

struct reloc_cache {
	struct drm_i915_private *i915;
	struct drm_mm_node node;
	unsigned long vaddr;
	unsigned int page;
	bool use_64bit_reloc;
};

static void reloc_cache_init(struct reloc_cache *cache,
			     struct drm_i915_private *i915)
{
	cache->page = -1;
	cache->vaddr = 0;
	cache->i915 = i915;
	/* Must be a variable in the struct to allow GCC to unroll. */
	cache->use_64bit_reloc = HAS_64BIT_RELOC(i915);
	cache->node.allocated = false;
}

static inline void *unmask_page(unsigned long p)
{
	return (void *)(uintptr_t)(p & PAGE_MASK);
}

static inline unsigned int unmask_flags(unsigned long p)
{
	return p & ~PAGE_MASK;
}

#define KMAP 0x4 /* after CLFLUSH_FLAGS */
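Together with the KMAP bit, these helpers show how the cache packs its
bookkeeping into one word: the mapped address is page-aligned, so the
low bits below PAGE_MASK are free to carry the flush flags and the KMAP
marker. A standalone sketch of the same packing (DEMO_* names are local
stand-ins):

/* Standalone sketch; assumes a 4KiB page for illustration. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define DEMO_PAGE_SIZE 4096UL
#define DEMO_PAGE_MASK (~(DEMO_PAGE_SIZE - 1))
#define DEMO_KMAP 0x4

static void *demo_unmask_page(unsigned long p)     { return (void *)(p & DEMO_PAGE_MASK); }
static unsigned demo_unmask_flags(unsigned long p) { return p & ~DEMO_PAGE_MASK; }

int main(void)
{
	void *page = aligned_alloc(DEMO_PAGE_SIZE, DEMO_PAGE_SIZE);
	unsigned long packed = (unsigned long)page | DEMO_KMAP;

	printf("page  %p -> %p\n", page, demo_unmask_page(packed));
	printf("flags %#x\n", demo_unmask_flags(packed));

	free(demo_unmask_page(packed));
	return 0;
}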

static void reloc_cache_fini(struct reloc_cache *cache)
{
	void *vaddr;

	if (!cache->vaddr)
		return;

	vaddr = unmask_page(cache->vaddr);
	if (cache->vaddr & KMAP) {
		if (cache->vaddr & CLFLUSH_AFTER)
			mb();

		kunmap_atomic(vaddr);
		i915_gem_obj_finish_shmem_access((struct drm_i915_gem_object *)cache->node.mm);
	} else {
		wmb();
		io_mapping_unmap_atomic((void __iomem *)vaddr);
		if (cache->node.allocated) {
			struct i915_ggtt *ggtt = &cache->i915->ggtt;

			ggtt->base.clear_range(&ggtt->base,
					       cache->node.start,
					       cache->node.size);
			drm_mm_remove_node(&cache->node);
		} else {
			i915_vma_unpin((struct i915_vma *)cache->node.mm);
		}
	}
}

static void *reloc_kmap(struct drm_i915_gem_object *obj,
			struct reloc_cache *cache,
			int page)
{
	void *vaddr;

	if (cache->vaddr) {
		kunmap_atomic(unmask_page(cache->vaddr));
	} else {
		unsigned int flushes;
		int ret;

		ret = i915_gem_obj_prepare_shmem_write(obj, &flushes);
		if (ret)
			return ERR_PTR(ret);

		BUILD_BUG_ON(KMAP & CLFLUSH_FLAGS);
		BUILD_BUG_ON((KMAP | CLFLUSH_FLAGS) & PAGE_MASK);

		cache->vaddr = flushes | KMAP;
		cache->node.mm = (void *)obj;
		if (flushes)
			mb();
	}

	vaddr = kmap_atomic(i915_gem_object_get_dirty_page(obj, page));
	cache->vaddr = unmask_flags(cache->vaddr) | (unsigned long)vaddr;
	cache->page = page;

	return vaddr;
}

static void *reloc_iomap(struct drm_i915_gem_object *obj,
			 struct reloc_cache *cache,
			 int page)
{
	struct i915_ggtt *ggtt = &cache->i915->ggtt;
	unsigned long offset;
	void *vaddr;

	if (cache->vaddr) {
		io_mapping_unmap_atomic((void __force __iomem *) unmask_page(cache->vaddr));
	} else {
		struct i915_vma *vma;
		int ret;

		if (use_cpu_reloc(obj))
			return NULL;

		ret = i915_gem_object_set_to_gtt_domain(obj, true);
		if (ret)
			return ERR_PTR(ret);

		vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0,
					       PIN_MAPPABLE | PIN_NONBLOCK);
		if (IS_ERR(vma)) {
			memset(&cache->node, 0, sizeof(cache->node));
			ret = drm_mm_insert_node_in_range
				(&ggtt->base.mm, &cache->node,
				 PAGE_SIZE, 0, I915_COLOR_UNEVICTABLE,
				 0, ggtt->mappable_end,
				 DRM_MM_INSERT_LOW);
			if (ret) /* no inactive aperture space, use cpu reloc */
				return NULL;
		} else {
			ret = i915_vma_put_fence(vma);
			if (ret) {
				i915_vma_unpin(vma);
				return ERR_PTR(ret);
			}

			cache->node.start = vma->node.start;
			cache->node.mm = (void *)vma;
		}
	}

	offset = cache->node.start;
	if (cache->node.allocated) {
		wmb();
		ggtt->base.insert_page(&ggtt->base,
				       i915_gem_object_get_dma_address(obj, page),
				       offset, I915_CACHE_NONE, 0);
	} else {
		offset += page << PAGE_SHIFT;
	}

	vaddr = (void __force *) io_mapping_map_atomic_wc(&cache->i915->ggtt.mappable, offset);
	cache->page = page;
	cache->vaddr = (unsigned long)vaddr;

	return vaddr;
}

drm/i915: Fallback to using CPU relocations for large batch buffers

If the batch buffer is too large to fit into the aperture and we need a
GTT mapping for relocations, we currently fail. This only applies to a
subset of machines for a subset of environments, quite undesirable. We
can simply check after failing to insert the batch into the GTT as to
whether we only need a mappable binding for relocation and, if so, we can
revert to using a non-mappable binding and an alternate relocation
method. However, using relocate_entry_cpu() is excruciatingly slow for
large buffers on non-LLC as the entire buffer requires clflushing before
and after the relocation handling. Alternatively, we can implement a
third relocation method that only clflushes around the relocation entry.
This is still slower than updating through the GTT, so we prefer using
the GTT where possible, but it is orders of magnitude faster as we
typically do not have to then clflush the entire buffer.

An alternative idea of using a temporary WC mapping of the backing store
is promising (it should be faster than using the GTT itself), but
requires fairly extensive arch/x86 support - along the lines of
kmap_atomic_prot_pfn() (which is not universally implemented even for
x86).

Testcase: igt/gem_exec_big #pnv,byt
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88392
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Add a WARN_ONCE for the impossible reloc case and explain in
a short comment why we want to avoid ping-pong.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-01-14 11:20:56 +00:00

static void *reloc_vaddr(struct drm_i915_gem_object *obj,
			 struct reloc_cache *cache,
			 int page)
{
	void *vaddr;

	if (cache->page == page) {
		vaddr = unmask_page(cache->vaddr);
	} else {
		vaddr = NULL;
		if ((cache->vaddr & KMAP) == 0)
			vaddr = reloc_iomap(obj, cache, page);
		if (!vaddr)
			vaddr = reloc_kmap(obj, cache, page);
	}

	return vaddr;
}
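reloc_vaddr() is a small cache front-end: reuse the mapping when the
wanted page is already mapped, otherwise prefer the iomap (GTT) path and
fall back to kmap when iomap declines by returning NULL, per the
fallback strategy described in the commit message above. The same shape,
modelled standalone with hypothetical mappers:

/* Standalone model; mappers are stand-ins, not the driver's. */
#include <stdio.h>

struct demo_cache { int page; const char *how; };

static const char *try_fast(int page) { return page < 4 ? "iomap" : NULL; }
static const char *try_slow(int page) { (void)page; return "kmap"; }

static const char *get_mapping(struct demo_cache *c, int page)
{
	if (c->page != page) {
		const char *how = try_fast(page);
		if (!how)
			how = try_slow(page); /* fallback path */
		c->page = page;
		c->how = how;
	}
	return c->how;
}

int main(void)
{
	struct demo_cache c = { .page = -1 };

	printf("page 1 via %s\n", get_mapping(&c, 1)); /* iomap */
	printf("page 1 via %s\n", get_mapping(&c, 1)); /* cached */
	printf("page 9 via %s\n", get_mapping(&c, 9)); /* kmap fallback */
	return 0;
}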

static void clflush_write32(u32 *addr, u32 value, unsigned int flushes)
{
	if (unlikely(flushes & (CLFLUSH_BEFORE | CLFLUSH_AFTER))) {
		if (flushes & CLFLUSH_BEFORE) {
			clflushopt(addr);
			mb();
		}

		*addr = value;

		/* Writes to the same cacheline are serialised by the CPU
		 * (including clflush). On the write path, we only require
		 * that it hits memory in an orderly fashion and place
		 * mb barriers at the start and end of the relocation phase
		 * to ensure ordering of clflush wrt the system.
		 */
		if (flushes & CLFLUSH_AFTER)
			clflushopt(addr);
	} else
		*addr = value;
}

static int
relocate_entry(struct drm_i915_gem_object *obj,
	       const struct drm_i915_gem_relocation_entry *reloc,
	       struct reloc_cache *cache,
	       u64 target_offset)
{
	u64 offset = reloc->offset;
	bool wide = cache->use_64bit_reloc;
	void *vaddr;

	target_offset = relocation_target(reloc, target_offset);
repeat:
	vaddr = reloc_vaddr(obj, cache, offset >> PAGE_SHIFT);
	if (IS_ERR(vaddr))
		return PTR_ERR(vaddr);

	clflush_write32(vaddr + offset_in_page(offset),
			lower_32_bits(target_offset),
			cache->vaddr);

	if (wide) {
		offset += sizeof(u32);
		target_offset >>= 32;
		wide = false;
		goto repeat;
	}

	return 0;
}
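On gen8+ (use_64bit_reloc) the 64-bit value is written as two 32-bit
stores, and the mapping is re-resolved for the upper half, so the two
halves may legally straddle a page boundary. A standalone demonstration
of the split write (assumes little-endian, as on x86):

/* Standalone demonstration; not driver code. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void write32(uint8_t *buf, uint64_t offset, uint32_t value)
{
	memcpy(buf + offset, &value, sizeof(value)); /* one 32-bit store */
}

int main(void)
{
	uint8_t buf[16] = { 0 };
	uint64_t target = 0x1122334455667788ULL;
	uint64_t offset = 4;
	int wide = 1; /* 64-bit relocation, as on gen8+ */

repeat:
	write32(buf, offset, (uint32_t)target); /* lower_32_bits() */
	if (wide) {
		offset += sizeof(uint32_t);
		target >>= 32; /* second pass writes the upper half */
		wide = 0;
		goto repeat;
	}

	memcpy(&target, buf + 4, sizeof(target));
	printf("%#llx\n", (unsigned long long)target); /* 0x1122334455667788 */
	return 0;
}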
2010-11-25 18:00:26 +00:00
|
|
|
static int
|
|
|
|
i915_gem_execbuffer_relocate_entry(struct drm_i915_gem_object *obj,
|
drm/i915: Convert execbuf code to use vmas
In order to transition more of our code over to using a VMA instead of
an <OBJ, VM> pair - we must have the vma accessible at execbuf time. Up
until now, we've only had a VMA when actually binding an object.
The previous patch helped handle the distinction on bound vs. unbound.
This patch will help us catch leaks, and other issues before we actually
shuffle a bunch of stuff around.
This attempts to convert all the execbuf code to speak in vmas. Since
the execbuf code is very self contained it was a nice isolated
conversion.
The meat of the code is about turning eb_objects into eb_vma, and then
wiring up the rest of the code to use vmas instead of obj, vm pairs.
Unfortunately, to do this, we must move the exec_list link from the obj
structure. This list is reused in the eviction code, so we must also
modify the eviction code to make this work.
WARNING: This patch makes an already hotly profiled path slower. The cost is
unavoidable. In reply to this mail, I will attach the extra data.
v2: Release table lock early, and two a 2 phase vma lookup to avoid
having to use a GFP_ATOMIC. (Chris)
v3: s/obj_exec_list/obj_exec_link/
Updates to address
commit 6d2b888569d366beb4be72cacfde41adee2c25e1
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Aug 7 18:30:54 2013 +0100
drm/i915: List objects allocated from stolen memory in debugfs
v4: Use obj = vma->obj for neatness in some places (Chris)
need_reloc_mappable() should return false if ppgtt (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Split out prep patches. Also remove a FIXME comment which is
now taken care of.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2013-08-14 09:38:36 +00:00
|
|
|
struct eb_vmas *eb,
|
2016-08-18 16:16:46 +00:00
|
|
|
struct drm_i915_gem_relocation_entry *reloc,
|
|
|
|
struct reloc_cache *cache)
|
2010-11-25 18:00:26 +00:00
|
|
|
{
|
2016-10-13 10:03:10 +00:00
|
|
|
struct drm_i915_private *dev_priv = to_i915(obj->base.dev);
|
2010-11-25 18:00:26 +00:00
|
|
|
struct drm_gem_object *target_obj;
|
2012-02-15 22:50:23 +00:00
|
|
|
struct drm_i915_gem_object *target_i915_obj;
|
drm/i915: Convert execbuf code to use vmas
In order to transition more of our code over to using a VMA instead of
an <OBJ, VM> pair - we must have the vma accessible at execbuf time. Up
until now, we've only had a VMA when actually binding an object.
The previous patch helped handle the distinction on bound vs. unbound.
This patch will help us catch leaks, and other issues before we actually
shuffle a bunch of stuff around.
This attempts to convert all the execbuf code to speak in vmas. Since
the execbuf code is very self contained it was a nice isolated
conversion.
The meat of the code is about turning eb_objects into eb_vma, and then
wiring up the rest of the code to use vmas instead of obj, vm pairs.
Unfortunately, to do this, we must move the exec_list link from the obj
structure. This list is reused in the eviction code, so we must also
modify the eviction code to make this work.
WARNING: This patch makes an already hotly profiled path slower. The cost is
unavoidable. In reply to this mail, I will attach the extra data.
v2: Release table lock early, and two a 2 phase vma lookup to avoid
having to use a GFP_ATOMIC. (Chris)
v3: s/obj_exec_list/obj_exec_link/
Updates to address
commit 6d2b888569d366beb4be72cacfde41adee2c25e1
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Aug 7 18:30:54 2013 +0100
drm/i915: List objects allocated from stolen memory in debugfs
v4: Use obj = vma->obj for neatness in some places (Chris)
need_reloc_mappable() should return false if ppgtt (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Split out prep patches. Also remove a FIXME comment which is
now taken care of.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2013-08-14 09:38:36 +00:00
|
|
|
struct i915_vma *target_vma;
|
2014-04-29 00:18:28 +00:00
|
|
|
uint64_t target_offset;
|
2013-12-26 21:39:50 +00:00
|
|
|
int ret;
|
2010-11-25 18:00:26 +00:00
|
|
|
|
2010-12-08 10:38:14 +00:00
|
|
|
/* we've already hold a reference to all valid objects */
|
drm/i915: Convert execbuf code to use vmas
In order to transition more of our code over to using a VMA instead of
an <OBJ, VM> pair - we must have the vma accessible at execbuf time. Up
until now, we've only had a VMA when actually binding an object.
The previous patch helped handle the distinction on bound vs. unbound.
This patch will help us catch leaks, and other issues before we actually
shuffle a bunch of stuff around.
This attempts to convert all the execbuf code to speak in vmas. Since
the execbuf code is very self contained it was a nice isolated
conversion.
The meat of the code is about turning eb_objects into eb_vma, and then
wiring up the rest of the code to use vmas instead of obj, vm pairs.
Unfortunately, to do this, we must move the exec_list link from the obj
structure. This list is reused in the eviction code, so we must also
modify the eviction code to make this work.
WARNING: This patch makes an already hotly profiled path slower. The cost is
unavoidable. In reply to this mail, I will attach the extra data.
v2: Release table lock early, and two a 2 phase vma lookup to avoid
having to use a GFP_ATOMIC. (Chris)
v3: s/obj_exec_list/obj_exec_link/
Updates to address
commit 6d2b888569d366beb4be72cacfde41adee2c25e1
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Aug 7 18:30:54 2013 +0100
drm/i915: List objects allocated from stolen memory in debugfs
v4: Use obj = vma->obj for neatness in some places (Chris)
need_reloc_mappable() should return false if ppgtt (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Split out prep patches. Also remove a FIXME comment which is
now taken care of.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2013-08-14 09:38:36 +00:00
|
|
|
target_vma = eb_get_vma(eb, reloc->target_handle);
|
|
|
|
if (unlikely(target_vma == NULL))
|
2010-11-25 18:00:26 +00:00
|
|
|
return -ENOENT;
|
drm/i915: Convert execbuf code to use vmas
In order to transition more of our code over to using a VMA instead of
an <OBJ, VM> pair - we must have the vma accessible at execbuf time. Up
until now, we've only had a VMA when actually binding an object.
The previous patch helped handle the distinction on bound vs. unbound.
This patch will help us catch leaks, and other issues before we actually
shuffle a bunch of stuff around.
This attempts to convert all the execbuf code to speak in vmas. Since
the execbuf code is very self contained it was a nice isolated
conversion.
The meat of the code is about turning eb_objects into eb_vma, and then
wiring up the rest of the code to use vmas instead of obj, vm pairs.
Unfortunately, to do this, we must move the exec_list link from the obj
structure. This list is reused in the eviction code, so we must also
modify the eviction code to make this work.
WARNING: This patch makes an already hotly profiled path slower. The cost is
unavoidable. In reply to this mail, I will attach the extra data.
v2: Release table lock early, and two a 2 phase vma lookup to avoid
having to use a GFP_ATOMIC. (Chris)
v3: s/obj_exec_list/obj_exec_link/
Updates to address
commit 6d2b888569d366beb4be72cacfde41adee2c25e1
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Aug 7 18:30:54 2013 +0100
drm/i915: List objects allocated from stolen memory in debugfs
v4: Use obj = vma->obj for neatness in some places (Chris)
need_reloc_mappable() should return false if ppgtt (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Split out prep patches. Also remove a FIXME comment which is
now taken care of.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2013-08-14 09:38:36 +00:00
|
|
|
target_i915_obj = target_vma->obj;
|
|
|
|
target_obj = &target_vma->obj->base;
|
2010-11-25 18:00:26 +00:00
|
|
|
|
2015-12-29 17:24:52 +00:00
|
|
|
target_offset = gen8_canonical_addr(target_vma->node.start);
|
2010-11-25 18:00:26 +00:00
|
|
|
|
2012-07-31 22:35:01 +00:00
|
|
|
/* Sandybridge PPGTT errata: We need a global gtt mapping for MI and
|
|
|
|
* pipe_control writes because the gpu doesn't properly redirect them
|
|
|
|
* through the ppgtt for non_secure batchbuffers. */
|
2016-10-13 10:03:10 +00:00
|
|
|
if (unlikely(IS_GEN6(dev_priv) &&
|
2015-04-20 16:04:05 +00:00
|
|
|
reloc->write_domain == I915_GEM_DOMAIN_INSTRUCTION)) {
|
2014-12-10 17:27:58 +00:00
|
|
|
ret = i915_vma_bind(target_vma, target_i915_obj->cache_level,
|
2015-04-20 16:04:05 +00:00
|
|
|
PIN_GLOBAL);
|
2014-12-10 17:27:58 +00:00
|
|
|
if (WARN_ONCE(ret, "Unexpected failure to bind target VMA!"))
|
|
|
|
return ret;
|
|
|
|
}
|
2012-07-31 22:35:01 +00:00
|
|
|
|
2010-11-25 18:00:26 +00:00
|
|
|
/* Validate that the target is in a valid r/w GPU domain */
|
2010-12-08 10:43:06 +00:00
|
|
|
if (unlikely(reloc->write_domain & (reloc->write_domain - 1))) {
|
2012-01-31 20:08:14 +00:00
|
|
|
DRM_DEBUG("reloc with multiple write domains: "
|
2010-11-25 18:00:26 +00:00
|
|
|
"obj %p target %d offset %d "
|
|
|
|
"read %08x write %08x",
|
|
|
|
obj, reloc->target_handle,
|
|
|
|
(int) reloc->offset,
|
|
|
|
reloc->read_domains,
|
|
|
|
reloc->write_domain);
|
2013-12-26 21:39:50 +00:00
|
|
|
return -EINVAL;
|
2010-11-25 18:00:26 +00:00
|
|
|
}
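The write_domain & (write_domain - 1) test above is the standard single-bit check: clearing the lowest set bit leaves zero exactly when at most one bit is set. A worked illustration using the real UAPI domain values (the variables are only for the example):

	unsigned int one = I915_GEM_DOMAIN_RENDER;		/* 0x2 */
	unsigned int two = I915_GEM_DOMAIN_RENDER |
			   I915_GEM_DOMAIN_SAMPLER;		/* 0x6 */

	/* one & (one - 1) == 0x2 & 0x1 == 0   -> a single write domain, accepted */
	/* two & (two - 1) == 0x6 & 0x5 == 0x4 -> multiple write domains, -EINVAL */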
|
2011-12-14 12:57:27 +00:00
|
|
|
if (unlikely((reloc->write_domain | reloc->read_domains)
|
|
|
|
& ~I915_GEM_GPU_DOMAINS)) {
|
2012-01-31 20:08:14 +00:00
|
|
|
DRM_DEBUG("reloc with read/write non-GPU domains: "
|
2010-11-25 18:00:26 +00:00
|
|
|
"obj %p target %d offset %d "
|
|
|
|
"read %08x write %08x",
|
|
|
|
obj, reloc->target_handle,
|
|
|
|
(int) reloc->offset,
|
|
|
|
reloc->read_domains,
|
|
|
|
reloc->write_domain);
|
2013-12-26 21:39:50 +00:00
|
|
|
return -EINVAL;
|
2010-11-25 18:00:26 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
target_obj->pending_read_domains |= reloc->read_domains;
|
|
|
|
target_obj->pending_write_domain |= reloc->write_domain;
|
|
|
|
|
|
|
|
/* If the relocation already has the right value in it, no
|
|
|
|
* more work needs to be done.
|
|
|
|
*/
|
|
|
|
if (target_offset == reloc->presumed_offset)
|
2010-12-08 10:38:14 +00:00
|
|
|
return 0;
|
2010-11-25 18:00:26 +00:00
|
|
|
|
|
|
|
/* Check that the relocation address is valid... */
|
2013-11-03 04:07:11 +00:00
|
|
|
if (unlikely(reloc->offset >
|
2016-08-18 16:16:52 +00:00
|
|
|
obj->base.size - (cache->use_64bit_reloc ? 8 : 4))) {
|
2012-01-31 20:08:14 +00:00
|
|
|
DRM_DEBUG("Relocation beyond object bounds: "
|
2010-11-25 18:00:26 +00:00
|
|
|
"obj %p target %d offset %d size %d.\n",
|
|
|
|
obj, reloc->target_handle,
|
|
|
|
(int) reloc->offset,
|
|
|
|
(int) obj->base.size);
|
2013-12-26 21:39:50 +00:00
|
|
|
return -EINVAL;
|
2010-11-25 18:00:26 +00:00
|
|
|
}
|
2010-12-08 10:43:06 +00:00
|
|
|
if (unlikely(reloc->offset & 3)) {
|
2012-01-31 20:08:14 +00:00
|
|
|
DRM_DEBUG("Relocation not 4-byte aligned: "
|
2010-11-25 18:00:26 +00:00
|
|
|
"obj %p target %d offset %d.\n",
|
|
|
|
obj, reloc->target_handle,
|
|
|
|
(int) reloc->offset);
|
2013-12-26 21:39:50 +00:00
|
|
|
return -EINVAL;
|
2010-11-25 18:00:26 +00:00
|
|
|
}
|
|
|
|
|
2016-08-18 16:16:52 +00:00
|
|
|
ret = relocate_entry(obj, reloc, cache, target_offset);
|
2013-09-02 18:56:23 +00:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
2010-11-25 18:00:26 +00:00
|
|
|
/* and update the user's relocation entry */
|
|
|
|
reloc->presumed_offset = target_offset;
|
2010-12-08 10:38:14 +00:00
|
|
|
return 0;
|
2010-11-25 18:00:26 +00:00
|
|
|
}
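Everything this function consumes arrives from userspace as a struct drm_i915_gem_relocation_entry, and the presumed_offset handshake is what enables the early return above. A sketch of the userspace side, assuming a hypothetical bo handle target_bo_handle and a cached_offset remembered from a previous execbuf:

	struct drm_i915_gem_relocation_entry reloc = {
		.target_handle   = target_bo_handle,	/* which object to point at */
		.offset          = 0x40,		/* byte inside this object to patch */
		.delta           = 0,			/* constant added to the target address */
		.presumed_offset = cached_offset,	/* last address the kernel reported;
							 * if it still matches, the kernel
							 * skips the rewrite entirely */
		.read_domains    = I915_GEM_DOMAIN_RENDER,
		.write_domain    = 0,			/* read-only relocation */
	};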
|
|
|
|
|
|
|
|
static int
|
2013-08-14 09:38:36 +00:00
|
|
|
i915_gem_execbuffer_relocate_vma(struct i915_vma *vma,
|
|
|
|
struct eb_vmas *eb)
|
2010-11-25 18:00:26 +00:00
|
|
|
{
|
2012-03-24 20:12:53 +00:00
|
|
|
#define N_RELOC(x) ((x) / sizeof(struct drm_i915_gem_relocation_entry))
|
|
|
|
struct drm_i915_gem_relocation_entry stack_reloc[N_RELOC(512)];
|
2010-11-25 18:00:26 +00:00
|
|
|
struct drm_i915_gem_relocation_entry __user *user_relocs;
|
2013-08-14 09:38:36 +00:00
|
|
|
struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
|
2016-08-18 16:16:46 +00:00
|
|
|
struct reloc_cache cache;
|
|
|
|
int remain, ret = 0;
|
2010-11-25 18:00:26 +00:00
|
|
|
|
2016-04-26 15:32:27 +00:00
|
|
|
user_relocs = u64_to_user_ptr(entry->relocs_ptr);
|
2016-08-18 16:16:52 +00:00
|
|
|
reloc_cache_init(&cache, eb->i915);
|
2010-11-25 18:00:26 +00:00
|
|
|
|
2012-03-24 20:12:53 +00:00
|
|
|
remain = entry->relocation_count;
|
|
|
|
while (remain) {
|
|
|
|
struct drm_i915_gem_relocation_entry *r = stack_reloc;
|
2016-10-18 12:02:51 +00:00
|
|
|
unsigned long unwritten;
|
|
|
|
unsigned int count;
|
|
|
|
|
|
|
|
count = min_t(unsigned int, remain, ARRAY_SIZE(stack_reloc));
|
2012-03-24 20:12:53 +00:00
|
|
|
remain -= count;
|
|
|
|
|
2016-10-18 12:02:51 +00:00
|
|
|
/* This is the fast path and we cannot handle a pagefault
|
|
|
|
* whilst holding the struct mutex lest the user pass in the
|
|
|
|
* relocations contained within a mmaped bo. In such a case,
|
|
|
|
* the page fault handler would call i915_gem_fault() and
|
|
|
|
* we would try to acquire the struct mutex again. Obviously
|
|
|
|
* this is bad and so lockdep complains vehemently.
|
|
|
|
*/
|
|
|
|
pagefault_disable();
|
|
|
|
unwritten = __copy_from_user_inatomic(r, user_relocs, count*sizeof(r[0]));
|
|
|
|
pagefault_enable();
|
|
|
|
if (unlikely(unwritten)) {
|
2016-08-18 16:16:46 +00:00
|
|
|
ret = -EFAULT;
|
|
|
|
goto out;
|
|
|
|
}
|
2010-11-25 18:00:26 +00:00
|
|
|
|
2012-03-24 20:12:53 +00:00
|
|
|
do {
|
|
|
|
u64 offset = r->presumed_offset;
|
2010-11-25 18:00:26 +00:00
|
|
|
|
2016-08-18 16:16:46 +00:00
|
|
|
ret = i915_gem_execbuffer_relocate_entry(vma->obj, eb, r, &cache);
|
2012-03-24 20:12:53 +00:00
|
|
|
if (ret)
|
2016-08-18 16:16:46 +00:00
|
|
|
goto out;
|
2012-03-24 20:12:53 +00:00
|
|
|
|
2016-10-18 12:02:51 +00:00
|
|
|
if (r->presumed_offset != offset) {
|
|
|
|
pagefault_disable();
|
|
|
|
unwritten = __put_user(r->presumed_offset,
|
|
|
|
&user_relocs->presumed_offset);
|
|
|
|
pagefault_enable();
|
|
|
|
if (unlikely(unwritten)) {
|
|
|
|
/* Note that reporting an error now
|
|
|
|
* leaves everything in an inconsistent
|
|
|
|
* state as we have *already* changed
|
|
|
|
* the relocation value inside the
|
|
|
|
* object. As we have not changed the
|
|
|
|
* reloc.presumed_offset, and will not
|
|
|
|
* change the execobject.offset, on the
|
|
|
|
* next call we may not rewrite the value
|
|
|
|
* inside the object, leaving it
|
|
|
|
* dangling and causing a GPU hang.
|
|
|
|
*/
|
|
|
|
ret = -EFAULT;
|
|
|
|
goto out;
|
|
|
|
}
|
2012-03-24 20:12:53 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
user_relocs++;
|
|
|
|
r++;
|
|
|
|
} while (--count);
|
2010-11-25 18:00:26 +00:00
|
|
|
}
|
|
|
|
|
2016-08-18 16:16:46 +00:00
|
|
|
out:
|
|
|
|
reloc_cache_fini(&cache);
|
|
|
|
return ret;
|
2012-03-24 20:12:53 +00:00
|
|
|
#undef N_RELOC
|
2010-11-25 18:00:26 +00:00
|
|
|
}
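The fast path above is an instance of a common kernel pattern: disable page faults, attempt an atomic copy from userspace, and bail out if any bytes were left uncopied so that a sleeping, faulting path can take over. A minimal self-contained sketch of the pattern (the helper name is illustrative):

	static int copy_user_relocs_atomic(struct drm_i915_gem_relocation_entry *dst,
					   const struct drm_i915_gem_relocation_entry __user *src,
					   unsigned int count)
	{
		unsigned long unwritten;

		pagefault_disable();	/* a fault here could recurse into locks we hold */
		unwritten = __copy_from_user_inatomic(dst, src, count * sizeof(*dst));
		pagefault_enable();

		/* Non-zero means the copy hit a page it could not fault in: the
		 * caller must drop its locks and retry with plain copy_from_user(). */
		return unwritten ? -EFAULT : 0;
	}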
|
|
|
|
|
|
|
|
static int
|
2013-08-14 09:38:36 +00:00
|
|
|
i915_gem_execbuffer_relocate_vma_slow(struct i915_vma *vma,
|
|
|
|
struct eb_vmas *eb,
|
|
|
|
struct drm_i915_gem_relocation_entry *relocs)
|
2010-11-25 18:00:26 +00:00
|
|
|
{
|
2013-08-14 09:38:36 +00:00
|
|
|
const struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
|
2016-08-18 16:16:46 +00:00
|
|
|
struct reloc_cache cache;
|
|
|
|
int i, ret = 0;
|
2010-11-25 18:00:26 +00:00
|
|
|
|
2016-08-18 16:16:52 +00:00
|
|
|
reloc_cache_init(&cache, eb->i915);
|
2010-11-25 18:00:26 +00:00
|
|
|
for (i = 0; i < entry->relocation_count; i++) {
|
2016-08-18 16:16:46 +00:00
|
|
|
ret = i915_gem_execbuffer_relocate_entry(vma->obj, eb, &relocs[i], &cache);
|
2010-11-25 18:00:26 +00:00
|
|
|
if (ret)
|
2016-08-18 16:16:46 +00:00
|
|
|
break;
|
2010-11-25 18:00:26 +00:00
|
|
|
}
|
2016-08-18 16:16:46 +00:00
|
|
|
reloc_cache_fini(&cache);
|
2010-11-25 18:00:26 +00:00
|
|
|
|
2016-08-18 16:16:46 +00:00
|
|
|
return ret;
|
2010-11-25 18:00:26 +00:00
|
|
|
}
|
|
|
|
|
|
|
|
static int
|
2013-11-25 17:54:38 +00:00
|
|
|
i915_gem_execbuffer_relocate(struct eb_vmas *eb)
|
2010-11-25 18:00:26 +00:00
|
|
|
{
|
2013-08-14 09:38:36 +00:00
|
|
|
struct i915_vma *vma;
|
2011-03-14 15:11:24 +00:00
|
|
|
int ret = 0;
|
|
|
|
|
2013-08-14 09:38:36 +00:00
|
|
|
list_for_each_entry(vma, &eb->vmas, exec_list) {
|
|
|
|
ret = i915_gem_execbuffer_relocate_vma(vma, eb);
|
2010-11-25 18:00:26 +00:00
|
|
|
if (ret)
|
2011-03-14 15:11:24 +00:00
|
|
|
break;
|
2010-11-25 18:00:26 +00:00
|
|
|
}
|
|
|
|
|
2011-03-14 15:11:24 +00:00
|
|
|
return ret;
|
2010-11-25 18:00:26 +00:00
|
|
|
}
|
|
|
|
|
drm/i915: Fallback to using CPU relocations for large batch buffers
If the batch buffer is too large to fit into the aperture and we need a
GTT mapping for relocations, we currently fail. This only applies to a
subset of machines for a subset of environments, quite undesirable. We
can simply check after failing to insert the batch into the GTT as to
whether we only need a mappable binding for relocation and, if so, we can
revert to using a non-mappable binding and an alternate relocation
method. However, using relocate_entry_cpu() is excruciatingly slow for
large buffers on non-LLC as the entire buffer requires clflushing before
and after the relocation handling. Alternatively, we can implement a
third relocation method that only clflushes around the relocation entry.
This is still slower than updating through the GTT, so we prefer using
the GTT where possible, but it is orders of magnitude faster as we
typically do not have to then clflush the entire buffer.
An alternative idea of using a temporary WC mapping of the backing store
is promising (it should be faster than using the GTT itself), but
requires fairly extensive arch/x86 support - along the lines of
kmap_atomic_prot_pfn() (which is not universally implemented even for
x86).
Testcase: igt/gem_exec_big #pnv,byt
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88392
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Add a WARN_ONCE for the impossible reloc case and explain in
a short comment why we want to avoid ping-pong.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-01-14 11:20:56 +00:00
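Concretely, the "third relocation method" described above brackets only the dwords being patched with cache flushes instead of clflushing the whole object. A sketch of the idea, assuming a kmap'ed page of the target object (names and parameters are illustrative, not the driver's exact helper):

	/* Patch one 32-bit relocation inside a page of the target object,
	 * flushing only the affected cachelines on non-LLC machines. */
	static void reloc_write_with_clflush(struct page *page,
					     unsigned int offset_in_page, u32 value)
	{
		void *vaddr = kmap_atomic(page);

		drm_clflush_virt_range(vaddr + offset_in_page, sizeof(value));
		*(u32 *)(vaddr + offset_in_page) = value;
		drm_clflush_virt_range(vaddr + offset_in_page, sizeof(value));

		kunmap_atomic(vaddr);
	}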
|
|
|
static bool only_mappable_for_reloc(unsigned int flags)
|
|
|
|
{
|
|
|
|
return (flags & (EXEC_OBJECT_NEEDS_FENCE | __EXEC_OBJECT_NEEDS_MAP)) ==
|
|
|
|
__EXEC_OBJECT_NEEDS_MAP;
|
|
|
|
}
|
|
|
|
|
2011-12-14 12:57:08 +00:00
|
|
|
static int
|
2013-08-14 09:38:36 +00:00
|
|
|
i915_gem_execbuffer_reserve_vma(struct i915_vma *vma,
|
2016-03-16 11:00:37 +00:00
|
|
|
struct intel_engine_cs *engine,
|
2013-08-14 09:38:36 +00:00
|
|
|
bool *need_reloc)
|
2011-12-14 12:57:08 +00:00
|
|
|
{
|
drm/i915: Create bind/unbind abstraction for VMAs
To sum up what goes on here, we abstract the vma binding, similarly to
the previous object binding. This helps for distinguishing legacy
binding, versus modern binding. To keep the code churn as minimal as
possible, I am leaving in insert_entries(). It serves as the per
platform pte writing basically. bind_vma and insert_entries do share a
lot of similarities, and I did have designs to combine the two, but as
mentioned already... too much churn in an already massive patchset.
What follows are the 3 commits which existed discretely in the original
submissions. Upon rebasing on Broadwell support, it became clear that
separation was not good, and only made for more error prone code. Below
are the 3 commit messages with all their history.
drm/i915: Add bind/unbind object functions to VMA
drm/i915: Use the new vm [un]bind functions
drm/i915: reduce vm->insert_entries() usage
drm/i915: Add bind/unbind object functions to VMA
As we plumb the code with more VM information, it has become more
obvious that the easiest way to deal with bind and unbind is to simply
put the function pointers in the vm, and let those choose the correct
way to handle the page table updates. This change allows many places in
the code to simply be vm->bind, and not have to worry about
distinguishing PPGTT vs GGTT.
Notice that this patch has no impact on functionality. I've decided to
save the actual change until the next patch because I think it's easier
to review that way. I'm happy to squash the two, or let Daniel do it on
merge.
v2:
Make ggtt handle the quirky aliasing ppgtt
Add flags to bind object to support above
Don't ever call bind/unbind directly for PPGTT until we have real, full
PPGTT (use NULLs to assert this)
Make sure we rebind the ggtt if there already is a ggtt binding. This
happens on set cache levels.
Use VMA for bind/unbind (Daniel, Ben)
v3: Reorganize ggtt_vma_bind to be more concise and easier to read
(Ville). Change logic in unbind to only unbind ggtt when there is a
global mapping, and to remove a redundant check if the aliasing ppgtt
exists.
v4: Make the bind function a bit smarter about the cache levels to avoid
unnecessary multiple remaps. "I accept it is a wart, I think unifying
the pin_vma / bind_vma could be unified later" (Chris)
Removed the git notes, and put version info here. (Daniel)
v5: Update the comment to not suck (Chris)
v6:
Move bind/unbind to the VMA. It makes more sense in the VMA structure
(always has, but I was previously lazy). With this change, it will allow
us to keep a distinct insert_entries.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
drm/i915: Use the new vm [un]bind functions
Building on the last patch which created the new function pointers in
the VM for bind/unbind, here we actually put those new function pointers
to use.
Split out as a separate patch to aid in review. I'm fine with squashing
into the previous patch if people request it.
v2: Updated to address the smart ggtt which can do aliasing as needed
Make sure we bind to global gtt when mappable and fenceable. I thought
we could get away without this initially, but we cannot.
v3: Make the global GTT binding explicitly use the ggtt VM for
bind_vma(). While at it, use the new ggtt_vma helper (Chris)
At this point the original mailing list thread diverges. ie.
v4^:
use target_obj instead of obj for gen6 relocate_entry
vma->bind_vma() can be called safely during pin. So simply do that
instead of the complicated conditionals.
Don't restore PPGTT bound objects on resume path
Bug fix in resume path for globally bound Bos
Properly handle secure dispatch
Rebased on vma bind/unbind conversion
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
drm/i915: reduce vm->insert_entries() usage
FKA: drm/i915: eliminate vm->insert_entries()
With bind/unbind function pointers in place, we no longer need
insert_entries. We could, and want, to remove clear_range, however it's
not totally easy at this point. Since it's used in a couple of places
still that don't only deal in objects: setup, ppgtt init, and restore
gtt mappings.
v2: Don't actually remove insert_entries, just limit its usage. It will
be useful when we introduce gen8. It will always be called from the vma
bind/unbind.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2013-12-06 22:10:56 +00:00
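The shape of the abstraction: bind and unbind become function pointers carried by the VMA, so GGTT and PPGTT instances install different callbacks and callers never branch on the address-space type themselves. A structural sketch (the struct name is hypothetical and the real layout varied across kernel versions):

	struct vma_binding_sketch {
		struct drm_mm_node node;		/* placement within the address space */
		struct drm_i915_gem_object *obj;
		struct i915_address_space *vm;

		/* Per-VMA page-table hooks; GGTT and PPGTT supply different ones. */
		void (*bind_vma)(struct vma_binding_sketch *vma,
				 enum i915_cache_level cache_level, u32 flags);
		void (*unbind_vma)(struct vma_binding_sketch *vma);
	};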
|
|
|
struct drm_i915_gem_object *obj = vma->obj;
|
2013-08-14 09:38:36 +00:00
|
|
|
struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
|
2014-05-23 06:48:08 +00:00
|
|
|
uint64_t flags;
|
2011-12-14 12:57:08 +00:00
|
|
|
int ret;
|
|
|
|
|
2015-04-20 16:04:05 +00:00
|
|
|
flags = PIN_USER;
|
2015-04-14 17:01:54 +00:00
|
|
|
if (entry->flags & EXEC_OBJECT_NEEDS_GTT)
|
|
|
|
flags |= PIN_GLOBAL;
|
|
|
|
|
2015-01-14 11:20:56 +00:00
|
|
|
if (!drm_mm_node_allocated(&vma->node)) {
|
2015-10-01 12:33:57 +00:00
|
|
|
/* Wa32bitGeneralStateOffset & Wa32bitInstructionBaseOffset,
|
|
|
|
* limit address to the first 4GBs for unflagged objects.
|
|
|
|
*/
|
|
|
|
if ((entry->flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS) == 0)
|
|
|
|
flags |= PIN_ZONE_4G;
|
2015-01-14 11:20:56 +00:00
|
|
|
if (entry->flags & __EXEC_OBJECT_NEEDS_MAP)
|
|
|
|
flags |= PIN_GLOBAL | PIN_MAPPABLE;
|
|
|
|
if (entry->flags & __EXEC_OBJECT_NEEDS_BIAS)
|
|
|
|
flags |= BATCH_OFFSET_BIAS | PIN_OFFSET_BIAS;
|
2015-12-08 11:55:07 +00:00
|
|
|
if (entry->flags & EXEC_OBJECT_PINNED)
|
|
|
|
flags |= entry->offset | PIN_OFFSET_FIXED;
|
2015-10-01 12:33:57 +00:00
|
|
|
if ((flags & PIN_MAPPABLE) == 0)
|
|
|
|
flags |= PIN_HIGH;
|
2015-01-14 11:20:56 +00:00
|
|
|
}
|
2014-02-14 13:01:11 +00:00
|
|
|
|
2016-08-04 15:32:31 +00:00
|
|
|
ret = i915_vma_pin(vma,
|
|
|
|
entry->pad_to_size,
|
|
|
|
entry->alignment,
|
|
|
|
flags);
|
|
|
|
if ((ret == -ENOSPC || ret == -E2BIG) &&
|
2015-01-14 11:20:56 +00:00
|
|
|
only_mappable_for_reloc(entry->flags))
|
2016-08-04 15:32:31 +00:00
|
|
|
ret = i915_vma_pin(vma,
|
|
|
|
entry->pad_to_size,
|
|
|
|
entry->alignment,
|
|
|
|
flags & ~PIN_MAPPABLE);
|
2011-12-14 12:57:08 +00:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
|
|
|
|
2012-08-24 18:18:18 +00:00
|
|
|
entry->flags |= __EXEC_OBJECT_HAS_PIN;
|
|
|
|
|
2014-08-09 16:37:24 +00:00
|
|
|
if (entry->flags & EXEC_OBJECT_NEEDS_FENCE) {
|
2016-08-18 16:17:00 +00:00
|
|
|
ret = i915_vma_get_fence(vma);
|
2014-08-09 16:37:24 +00:00
|
|
|
if (ret)
|
|
|
|
return ret;
|
2012-03-22 15:10:00 +00:00
|
|
|
|
2016-08-18 16:17:00 +00:00
|
|
|
if (i915_vma_pin_fence(vma))
|
2014-08-09 16:37:24 +00:00
|
|
|
entry->flags |= __EXEC_OBJECT_HAS_FENCE;
|
2011-12-14 12:57:08 +00:00
|
|
|
}
|
|
|
|
|
2013-08-14 09:38:36 +00:00
|
|
|
if (entry->offset != vma->node.start) {
|
|
|
|
entry->offset = vma->node.start;
|
2013-01-17 21:23:36 +00:00
|
|
|
*need_reloc = true;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (entry->flags & EXEC_OBJECT_WRITE) {
|
|
|
|
obj->base.pending_read_domains = I915_GEM_DOMAIN_RENDER;
|
|
|
|
obj->base.pending_write_domain = I915_GEM_DOMAIN_RENDER;
|
|
|
|
}
|
|
|
|
|
2011-12-14 12:57:08 +00:00
|
|
|
return 0;
|
2012-08-24 18:18:18 +00:00
|
|
|
}
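Note the two-attempt structure above: pin with the strict placement first, and only if that fails with -ENOSPC or -E2BIG, and the mappable requirement came solely from relocation handling (only_mappable_for_reloc()), retry without PIN_MAPPABLE so the CPU-relocation fallback can do the patching. Condensed into one helper, as a sketch of the control flow rather than driver API:

	static int pin_with_reloc_fallback(struct i915_vma *vma,
					   const struct drm_i915_gem_exec_object2 *entry,
					   u64 flags)
	{
		int ret;

		ret = i915_vma_pin(vma, entry->pad_to_size, entry->alignment, flags);
		if ((ret == -ENOSPC || ret == -E2BIG) &&
		    only_mappable_for_reloc(entry->flags))
			/* Mappable was wanted only for GTT relocs; CPU relocs work
			 * from any placement, so relax the restriction and retry. */
			ret = i915_vma_pin(vma, entry->pad_to_size,
					   entry->alignment, flags & ~PIN_MAPPABLE);
		return ret;
	}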
|
2011-12-14 12:57:08 +00:00
|
|
|
|
2014-05-23 06:48:08 +00:00
|
|
|
static bool
|
2014-08-11 10:00:12 +00:00
|
|
|
need_reloc_mappable(struct i915_vma *vma)
|
2014-05-23 06:48:08 +00:00
|
|
|
{
|
|
|
|
struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
|
|
|
|
|
2014-08-11 10:00:12 +00:00
|
|
|
if (entry->relocation_count == 0)
|
|
|
|
return false;
|
|
|
|
|
2016-08-04 15:32:32 +00:00
|
|
|
if (!i915_vma_is_ggtt(vma))
|
2014-08-11 10:00:12 +00:00
|
|
|
return false;
|
|
|
|
|
|
|
|
/* See also use_cpu_reloc() */
|
2016-11-04 14:42:44 +00:00
|
|
|
if (HAS_LLC(to_i915(vma->obj->base.dev)))
|
2014-08-11 10:00:12 +00:00
|
|
|
return false;
|
|
|
|
|
|
|
|
if (vma->obj->base.write_domain == I915_GEM_DOMAIN_CPU)
|
|
|
|
return false;
|
|
|
|
|
|
|
|
return true;
|
|
|
|
}
|
|
|
|
|
|
|
|
static bool
|
|
|
|
eb_vma_misplaced(struct i915_vma *vma)
|
|
|
|
{
|
|
|
|
struct drm_i915_gem_exec_object2 *entry = vma->exec_entry;
|
2014-05-23 06:48:08 +00:00
|
|
|
|
2016-08-04 15:32:32 +00:00
|
|
|
WARN_ON(entry->flags & __EXEC_OBJECT_NEEDS_MAP &&
|
|
|
|
!i915_vma_is_ggtt(vma));
|
2014-05-23 06:48:08 +00:00
|
|
|
|
2017-01-10 14:47:34 +00:00
|
|
|
if (entry->alignment && !IS_ALIGNED(vma->node.start, entry->alignment))
|
2014-05-23 06:48:08 +00:00
|
|
|
return true;
|
|
|
|
|
2016-08-04 15:32:23 +00:00
|
|
|
if (vma->node.size < entry->pad_to_size)
|
|
|
|
return true;
|
|
|
|
|
2015-12-08 11:55:07 +00:00
|
|
|
if (entry->flags & EXEC_OBJECT_PINNED &&
|
|
|
|
vma->node.start != entry->offset)
|
|
|
|
return true;
|
|
|
|
|
drm/i915: Prevent negative relocation deltas from wrapping
This is pure evil. Userspace, I'm looking at you SNA, repacks batch
buffers on the fly after generation as they are being passed to the
kernel for execution. These batches also contain self-referenced
relocations as a single buffer encompasses the state commands, kernels,
vertices and sampler. During generation the buffers are placed at known
offsets within the full batch, and then the relocation deltas (as passed
to the kernel) are tweaked as the batch is repacked into a smaller buffer.
This means that userspace is passing negative relocations deltas, which
subsequently wrap to large values if the batch is at a low address. The
GPU hangs when it then tries to use the large value as a base for its
address offsets, rather than wrapping back to the real value (as one
would hope). As the GPU uses positive offsets from the base, we can
treat the relocation address as the minimum address read by the GPU.
For the upper bound, we trust that userspace will not read beyond the
end of the buffer.
So, how do we fix negative relocations from wrapping? We can either
check that every relocation looks valid when we write it, and then
position each object such that we prevent the offset wraparound, or we
just special-case the self-referential behaviour of SNA and force all
batches to be above 256k. Daniel prefers the latter approach.
This fixes a GPU hang when it tries to use an address (relocation +
offset) greater than the GTT size. The issue would occur quite easily
with full-ppgtt as each fd gets its own VM space, so low offsets would
often be handed out. However, with the rearrangement of the low GTT due
to capturing the BIOS framebuffer, it is already affecting kernels 3.15
onwards. I think only IVB+ is susceptible to this bug, but the workaround
should only kick in rarely, so it seems sensible to always apply it.
v3: Use a bias for batch buffers to prevent small negative delta relocations
from wrapping.
v4 from Daniel:
- s/BIAS/BATCH_OFFSET_BIAS/
- Extract eb_vma_misplaced/i915_vma_misplaced since the conditions
were growing rather cumbersome.
- Add a comment to eb_get_batch explaining why we do this.
- Apply the batch offset bias everywhere but mention that we've only
observed it on gen7 gpus.
- Drop PIN_OFFSET_FIX for now, that slipped in from a feature patch.
v5: Add static to eb_get_batch, spotted by 0-day tester.
Testcase: igt/gem_bad_reloc
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=78533
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> (v3)
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2014-05-23 06:48:08 +00:00
|
|
|
if (entry->flags & __EXEC_OBJECT_NEEDS_BIAS &&
|
|
|
|
vma->node.start < BATCH_OFFSET_BIAS)
|
|
|
|
return true;
|
|
|
|
|
drm/i915: Fallback to using CPU relocations for large batch buffers
If the batch buffer is too large to fit into the aperture and we need a
GTT mapping for relocations, we currently fail. This only applies to a
subset of machines for a subset of environments, which is quite undesirable. We
can simply check after failing to insert the batch into the GTT as to
whether we only need a mappable binding for relocation and, if so, we can
revert to using a non-mappable binding and an alternate relocation
method. However, using relocate_entry_cpu() is excruciatingly slow for
large buffers on non-LLC as the entire buffer requires clflushing before
and after the relocation handling. Alternatively, we can implement a
third relocation method that only clflushes around the relocation entry.
This is still slower than updating through the GTT, so we prefer using
the GTT where possible, but is orders of magnitude faster as we
typically do not have to then clflush the entire buffer.
An alternative idea of using a temporary WC mapping of the backing store
is promising (it should be faster than using the GTT itself), but
requires fairly extensive arch/x86 support - along the lines of
kmap_atomic_prot_pfn() (which is not universally implemented even for
x86).
Testcase: igt/gem_exec_big #pnv,byt
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88392
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Add a WARN_ONCE for the impossible reloc case and explain in
a short comment why we want to avoid ping-pong.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2015-01-14 11:20:56 +00:00
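/*
 * Illustrative sketch only (not driver code): the "third relocation
 * method" described above flushes just the cachelines covering one
 * 8-byte relocation entry instead of clflushing the whole buffer
 * before and after the write. flush_line() is a hypothetical stand-in
 * for a per-cacheline flush primitive; standalone userspace C.
 */
#include <stdint.h>

#define CACHELINE 64

static void flush_line(uintptr_t line)
{
	(void)line; /* stand-in: a real build would issue clflush here */
}

static void flush_reloc_entry(uintptr_t addr)
{
	uintptr_t first = addr & ~(uintptr_t)(CACHELINE - 1);
	uintptr_t last = (addr + sizeof(uint64_t) - 1) &
			 ~(uintptr_t)(CACHELINE - 1);
	uintptr_t line;

	/* One or two lines, versus thousands for a whole-buffer flush. */
	for (line = first; line <= last; line += CACHELINE)
		flush_line(line);
}

int main(void)
{
	flush_reloc_entry(0x1000);      /* entry within one cacheline */
	flush_reloc_entry(0x1000 + 60); /* entry straddling a boundary */
	return 0;
}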
	/* avoid costly ping-pong once a batch bo ended up non-mappable */
	if (entry->flags & __EXEC_OBJECT_NEEDS_MAP &&
	    !i915_vma_is_map_and_fenceable(vma))
		return !only_mappable_for_reloc(entry->flags);

	if ((entry->flags & EXEC_OBJECT_SUPPORTS_48B_ADDRESS) == 0 &&
	    (vma->node.start + vma->node.size - 1) >> 32)
		return true;

	return false;
}

drm/i915: Convert execbuf code to use vmas
In order to transition more of our code over to using a VMA instead of
an <OBJ, VM> pair - we must have the vma accessible at execbuf time. Up
until now, we've only had a VMA when actually binding an object.
The previous patch helped handle the distinction on bound vs. unbound.
This patch will help us catch leaks and other issues before we actually
shuffle a bunch of stuff around.
This attempts to convert all the execbuf code to speak in vmas. Since
the execbuf code is very self-contained it was a nice isolated
conversion.
The meat of the code is about turning eb_objects into eb_vma, and then
wiring up the rest of the code to use vmas instead of obj, vm pairs.
Unfortunately, to do this, we must move the exec_list link from the obj
structure. This list is reused in the eviction code, so we must also
modify the eviction code to make this work.
WARNING: This patch makes an already hotly profiled path slower. The cost is
unavoidable. In reply to this mail, I will attach the extra data.
v2: Release table lock early, and do a 2-phase vma lookup to avoid
having to use a GFP_ATOMIC. (Chris)
v3: s/obj_exec_list/obj_exec_link/
Updates to address
commit 6d2b888569d366beb4be72cacfde41adee2c25e1
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Aug 7 18:30:54 2013 +0100
drm/i915: List objects allocated from stolen memory in debugfs
v4: Use obj = vma->obj for neatness in some places (Chris)
need_reloc_mappable() should return false if ppgtt (Chris)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Split out prep patches. Also remove a FIXME comment which is
now taken care of.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2013-08-14 09:38:36 +00:00

static int
i915_gem_execbuffer_reserve(struct intel_engine_cs *engine,
			    struct list_head *vmas,
			    struct i915_gem_context *ctx,
			    bool *need_relocs)
{
	struct drm_i915_gem_object *obj;
	struct i915_vma *vma;
	struct i915_address_space *vm;
	struct list_head ordered_vmas;
	struct list_head pinned_vmas;
	bool has_fenced_gpu_access = INTEL_GEN(engine->i915) < 4;
	bool needs_unfenced_map = INTEL_INFO(engine->i915)->unfenced_needs_alignment;
	int retry;

	vm = list_first_entry(vmas, struct i915_vma, exec_list)->vm;

	INIT_LIST_HEAD(&ordered_vmas);
	INIT_LIST_HEAD(&pinned_vmas);
	while (!list_empty(vmas)) {
		struct drm_i915_gem_exec_object2 *entry;
		bool need_fence, need_mappable;

		vma = list_first_entry(vmas, struct i915_vma, exec_list);
		obj = vma->obj;
		entry = vma->exec_entry;

		if (ctx->flags & CONTEXT_NO_ZEROMAP)
			entry->flags |= __EXEC_OBJECT_NEEDS_BIAS;

		if (!has_fenced_gpu_access)
			entry->flags &= ~EXEC_OBJECT_NEEDS_FENCE;
		need_fence =
			(entry->flags & EXEC_OBJECT_NEEDS_FENCE ||
			 needs_unfenced_map) &&
			i915_gem_object_is_tiled(obj);
		need_mappable = need_fence || need_reloc_mappable(vma);

		if (entry->flags & EXEC_OBJECT_PINNED)
			list_move_tail(&vma->exec_list, &pinned_vmas);
		else if (need_mappable) {
			entry->flags |= __EXEC_OBJECT_NEEDS_MAP;
			list_move(&vma->exec_list, &ordered_vmas);
		} else
			list_move_tail(&vma->exec_list, &ordered_vmas);

		obj->base.pending_read_domains = I915_GEM_GPU_DOMAINS & ~I915_GEM_DOMAIN_COMMAND;
		obj->base.pending_write_domain = 0;
	}
	list_splice(&ordered_vmas, vmas);
	list_splice(&pinned_vmas, vmas);

	/* Attempt to pin all of the buffers into the GTT.
	 * This is done in 3 phases:
	 *
	 * 1a. Unbind all objects that do not match the GTT constraints for
	 *     the execbuffer (fenceable, mappable, alignment etc).
	 * 1b. Increment pin count for already bound objects.
	 * 2.  Bind new objects.
	 * 3.  Decrement pin count.
	 *
	 * This avoids unnecessary unbinding of later objects in order to make
	 * room for the earlier objects *unless* we need to defragment.
	 */
	retry = 0;
	do {
		int ret = 0;

		/* Unbind any ill-fitting objects or pin. */
		list_for_each_entry(vma, vmas, exec_list) {
			if (!drm_mm_node_allocated(&vma->node))
				continue;

			if (eb_vma_misplaced(vma))
				ret = i915_vma_unbind(vma);
			else
				ret = i915_gem_execbuffer_reserve_vma(vma,
								      engine,
								      need_relocs);
			if (ret)
				goto err;
		}

		/* Bind fresh objects */
		list_for_each_entry(vma, vmas, exec_list) {
			if (drm_mm_node_allocated(&vma->node))
				continue;

			ret = i915_gem_execbuffer_reserve_vma(vma, engine,
							      need_relocs);
			if (ret)
				goto err;
		}

drm/i915: Track unbound pages
When dealing with a working set larger than the GATT, or even the
mappable aperture when touching through the GTT, we end up with evicting
objects only to rebind them at a new offset again later. Moving an
object into and out of the GTT requires clflushing the pages, thus
causing a double-clflush penalty for rebinding.
To avoid having to clflush on rebinding, we can track the pages as they
are evicted from the GTT and only relinquish those pages on memory
pressure.
As usual, if it were not for the handling of out-of-memory condition and
having to manually shrink our own bo caches, it would be a net reduction
of code. Alas.
Note: The patch also contains a few changes to the last-hope
evict_everything logic in i915_gem_execbuffer.c - we no longer try to
only evict the purgeable stuff in a first try (since that's superfluous
and only helps in OOM corner-cases, not fragmented-gtt thrashing
situations).
Also, the extraction of the get_pages retry loop from bind_to_gtt (and
other callsites) to get_pages should imo have been a separate patch.
v2: Ditch the newly added put_pages (for unbound objects only) in
i915_gem_reset. A quick irc discussion hasn't revealed any important
reason for this, so if we need this, I'd like to have a git blame'able
explanation for it.
v3: Undo the s/drm_malloc_ab/kmalloc/ in get_pages that Chris noticed.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Split out code movements and rant a bit in the commit message
with a few Notes. Done v2]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
2012-08-20 09:40:46 +00:00

err:
		if (ret != -ENOSPC || retry++)
			return ret;

		/* Decrement pin count for bound objects */
		list_for_each_entry(vma, vmas, exec_list)
			i915_gem_execbuffer_unreserve_vma(vma);

		ret = i915_gem_evict_vm(vm, true);
		if (ret)
			return ret;
	} while (1);
}

static int
i915_gem_execbuffer_relocate_slow(struct drm_device *dev,
				  struct drm_i915_gem_execbuffer2 *args,
				  struct drm_file *file,
				  struct intel_engine_cs *engine,
				  struct eb_vmas *eb,
				  struct drm_i915_gem_exec_object2 *exec,
				  struct i915_gem_context *ctx)
{
	struct drm_i915_gem_relocation_entry *reloc;
	struct i915_address_space *vm;
	struct i915_vma *vma;
	bool need_relocs;
	int *reloc_offset;
	int i, total, ret;
	unsigned count = args->buffer_count;

	vm = list_first_entry(&eb->vmas, struct i915_vma, exec_list)->vm;

	/* We may process another execbuffer during the unlock... */
	while (!list_empty(&eb->vmas)) {
		vma = list_first_entry(&eb->vmas, struct i915_vma, exec_list);
		list_del_init(&vma->exec_list);
		i915_gem_execbuffer_unreserve_vma(vma);
		i915_vma_put(vma);
	}

	mutex_unlock(&dev->struct_mutex);

	total = 0;
	for (i = 0; i < count; i++)
		total += exec[i].relocation_count;

	reloc_offset = drm_malloc_ab(count, sizeof(*reloc_offset));
	reloc = drm_malloc_ab(total, sizeof(*reloc));
	if (reloc == NULL || reloc_offset == NULL) {
		drm_free_large(reloc);
		drm_free_large(reloc_offset);
		mutex_lock(&dev->struct_mutex);
		return -ENOMEM;
	}

	total = 0;
	for (i = 0; i < count; i++) {
		struct drm_i915_gem_relocation_entry __user *user_relocs;
		u64 invalid_offset = (u64)-1;
		int j;

		user_relocs = u64_to_user_ptr(exec[i].relocs_ptr);

		if (copy_from_user(reloc+total, user_relocs,
				   exec[i].relocation_count * sizeof(*reloc))) {
			ret = -EFAULT;
			mutex_lock(&dev->struct_mutex);
			goto err;
		}

		/* As we do not update the known relocation offsets after
		 * relocating (due to the complexities in lock handling),
		 * we need to mark them as invalid now so that we force the
		 * relocation processing next time. Just in case the target
		 * object is evicted and then rebound into its old
		 * presumed_offset before the next execbuffer - if that
		 * happened we would make the mistake of assuming that the
		 * relocations were valid.
		 */
		for (j = 0; j < exec[i].relocation_count; j++) {
			if (__copy_to_user(&user_relocs[j].presumed_offset,
					   &invalid_offset,
					   sizeof(invalid_offset))) {
				ret = -EFAULT;
				mutex_lock(&dev->struct_mutex);
				goto err;
			}
		}

		reloc_offset[i] = total;
		total += exec[i].relocation_count;
	}

	ret = i915_mutex_lock_interruptible(dev);
	if (ret) {
		mutex_lock(&dev->struct_mutex);
		goto err;
	}

	/* reacquire the objects */
	eb_reset(eb);
	ret = eb_lookup_vmas(eb, exec, args, vm, file);
	if (ret)
		goto err;

	need_relocs = (args->flags & I915_EXEC_NO_RELOC) == 0;
	ret = i915_gem_execbuffer_reserve(engine, &eb->vmas, ctx,
					  &need_relocs);
	if (ret)
		goto err;

	list_for_each_entry(vma, &eb->vmas, exec_list) {
		int offset = vma->exec_entry - exec;
		ret = i915_gem_execbuffer_relocate_vma_slow(vma, eb,
							    reloc + reloc_offset[offset]);
		if (ret)
			goto err;
	}

	/* Leave the user relocations as are, this is the painfully slow path,
	 * and we want to avoid the complication of dropping the lock whilst
	 * having buffers reserved in the aperture and so causing spurious
	 * ENOSPC for random operations.
	 */

err:
	drm_free_large(reloc);
	drm_free_large(reloc_offset);
	return ret;
}

static int
i915_gem_execbuffer_move_to_gpu(struct drm_i915_gem_request *req,
				struct list_head *vmas)
{
	struct i915_vma *vma;
	int ret;

	list_for_each_entry(vma, vmas, exec_list) {
		struct drm_i915_gem_object *obj = vma->obj;

		if (vma->exec_entry->flags & EXEC_OBJECT_CAPTURE) {
			struct i915_gem_capture_list *capture;

			capture = kmalloc(sizeof(*capture), GFP_KERNEL);
			if (unlikely(!capture))
				return -ENOMEM;

			capture->next = req->capture_list;
			capture->vma = vma;
			req->capture_list = capture;
		}

		if (vma->exec_entry->flags & EXEC_OBJECT_ASYNC)
			continue;

		if (obj->base.write_domain & I915_GEM_DOMAIN_CPU) {
			i915_gem_clflush_object(obj, 0);
			obj->base.write_domain = 0;
		}

		ret = i915_gem_request_await_object
			(req, obj, obj->base.pending_write_domain);
		if (ret)
			return ret;
	}

	/* Unconditionally flush any chipset caches (for streaming writes). */
	i915_gem_chipset_flush(req->engine->i915);

	/* Unconditionally invalidate GPU caches and TLBs. */
	return req->engine->emit_flush(req, EMIT_INVALIDATE);
}
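
/*
 * Editorial sketch (not part of the original file): EXEC_OBJECT_CAPTURE
 * builds a singly-linked list by pushing each marked vma onto the head of
 * req->capture_list; on a GPU hang the error-capture code walks that list
 * to snapshot the buffers. A hypothetical walker:
 */
#if 0
static void example_walk_capture_list(struct drm_i915_gem_request *req)
{
	struct i915_gem_capture_list *c;

	/* most recently pushed vma is visited first (LIFO order) */
	for (c = req->capture_list; c; c = c->next)
		snapshot_vma(c->vma);	/* hypothetical helper */
}
#endif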

static bool
i915_gem_check_execbuffer(struct drm_i915_gem_execbuffer2 *exec)
{
	if (exec->flags & __I915_EXEC_UNKNOWN_FLAGS)
		return false;

	/* Kernel clipping was a DRI1 misfeature */
	if (exec->num_cliprects || exec->cliprects_ptr)
		return false;

	if (exec->DR4 == 0xffffffff) {
		DRM_DEBUG("UXA submitting garbage DR4, fixing up\n");
		exec->DR4 = 0;
	}
	if (exec->DR1 || exec->DR4)
		return false;

	if ((exec->batch_start_offset | exec->batch_len) & 0x7)
		return false;

	return true;
}

static int
validate_exec_list(struct drm_device *dev,
		   struct drm_i915_gem_exec_object2 *exec,
		   int count)
{
	unsigned relocs_total = 0;
	unsigned relocs_max = UINT_MAX / sizeof(struct drm_i915_gem_relocation_entry);
	unsigned invalid_flags;
	int i;

	/* INTERNAL flags must not overlap with external ones */
	BUILD_BUG_ON(__EXEC_OBJECT_INTERNAL_FLAGS & ~__EXEC_OBJECT_UNKNOWN_FLAGS);

	invalid_flags = __EXEC_OBJECT_UNKNOWN_FLAGS;
	if (USES_FULL_PPGTT(dev))
		invalid_flags |= EXEC_OBJECT_NEEDS_GTT;

	for (i = 0; i < count; i++) {
		char __user *ptr = u64_to_user_ptr(exec[i].relocs_ptr);
		int length; /* limited by fault_in_pages_readable() */

		if (exec[i].flags & invalid_flags)
			return -EINVAL;

		/* Offset can be used as input (EXEC_OBJECT_PINNED), reject
		 * any non-page-aligned or non-canonical addresses.
		 */
		if (exec[i].flags & EXEC_OBJECT_PINNED) {
			if (exec[i].offset !=
			    gen8_canonical_addr(exec[i].offset & PAGE_MASK))
				return -EINVAL;
		}

		/* From drm_mm perspective address space is continuous,
		 * so from this point we're always using non-canonical
		 * form internally.
		 */
		exec[i].offset = gen8_noncanonical_addr(exec[i].offset);

		if (exec[i].alignment && !is_power_of_2(exec[i].alignment))
			return -EINVAL;

		/* pad_to_size was once a reserved field, so sanitize it */
		if (exec[i].flags & EXEC_OBJECT_PAD_TO_SIZE) {
			if (offset_in_page(exec[i].pad_to_size))
				return -EINVAL;
		} else {
			exec[i].pad_to_size = 0;
		}

		/* First check for malicious input causing overflow in
		 * the worst case where we need to allocate the entire
		 * relocation tree as a single array.
		 */
		if (exec[i].relocation_count > relocs_max - relocs_total)
			return -EINVAL;
		relocs_total += exec[i].relocation_count;

		length = exec[i].relocation_count *
			sizeof(struct drm_i915_gem_relocation_entry);
		/*
		 * We must check that the entire relocation array is safe
		 * to read, but since we may need to update the presumed
		 * offsets during execution, check for full write access.
		 */
		if (!access_ok(VERIFY_WRITE, ptr, length))
			return -EFAULT;

		if (likely(!i915.prefault_disable)) {
			if (fault_in_pages_readable(ptr, length))
				return -EFAULT;
		}
	}

	return 0;
}
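
/*
 * Editorial sketch (not part of the original file): two of the checks above
 * deserve a worked illustration. First, the relocation bound is written as
 * "count > max - total" rather than "total + count > max" so the comparison
 * itself can never overflow (total <= max is an invariant, so max - total
 * never wraps). Second, gen8+ "canonical" addresses replicate bit 47 into
 * bits 48..63; the conversion is assumed to be a plain sign extension.
 */
#if 0
static bool example_relocs_would_overflow(unsigned int total,
					  unsigned int count,
					  unsigned int max)
{
	return count > max - total;	/* overflow-proof form of total + count > max */
}

static u64 example_canonical_addr(u64 address)
{
	return sign_extend64(address, 47);	/* replicate bit 47 upwards */
}
#endif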

static struct i915_gem_context *
i915_gem_validate_context(struct drm_device *dev, struct drm_file *file,
			  struct intel_engine_cs *engine, const u32 ctx_id)
{
	struct i915_gem_context *ctx;

	ctx = i915_gem_context_lookup(file->driver_priv, ctx_id);
	if (IS_ERR(ctx))
		return ctx;

	if (i915_gem_context_is_banned(ctx)) {
		DRM_DEBUG("Context %u tried to submit while banned\n", ctx_id);
		return ERR_PTR(-EIO);
	}

	return ctx;
}

static bool gpu_write_needs_clflush(struct drm_i915_gem_object *obj)
{
	return !(obj->cache_level == I915_CACHE_NONE ||
		 obj->cache_level == I915_CACHE_WT);
}

void i915_vma_move_to_active(struct i915_vma *vma,
			     struct drm_i915_gem_request *req,
			     unsigned int flags)
{
	struct drm_i915_gem_object *obj = vma->obj;
	const unsigned int idx = req->engine->id;

	lockdep_assert_held(&req->i915->drm.struct_mutex);
	GEM_BUG_ON(!drm_mm_node_allocated(&vma->node));

	/* Add a reference if we're newly entering the active list.
	 * The order in which we add operations to the retirement queue is
	 * vital here: mark_active adds to the start of the callback list,
	 * such that subsequent callbacks are called first. Therefore we
	 * add the active reference first and queue for it to be dropped
	 * *last*.
	 */
	if (!i915_vma_is_active(vma))
		obj->active_count++;
	i915_vma_set_active(vma, idx);
	i915_gem_active_set(&vma->last_read[idx], req);
	list_move_tail(&vma->vm_link, &vma->vm->active_list);

	if (flags & EXEC_OBJECT_WRITE) {
		if (intel_fb_obj_invalidate(obj, ORIGIN_CS))
			i915_gem_active_set(&obj->frontbuffer_write, req);

		/* update for the implicit flush after a batch */
		obj->base.write_domain &= ~I915_GEM_GPU_DOMAINS;
		if (!obj->cache_dirty && gpu_write_needs_clflush(obj))
			obj->cache_dirty = true;
	}

	if (flags & EXEC_OBJECT_NEEDS_FENCE)
		i915_gem_active_set(&vma->last_fence, req);
}
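
/*
 * Editorial sketch (not part of the original file): vma activity is tracked
 * as a per-engine bitmask, so the object-level active reference above is
 * taken only on the 0 -> nonzero transition. A toy model of the bookkeeping:
 */
#if 0
struct toy_vma {
	unsigned long active;			/* one bit per engine */
};

static bool toy_vma_is_active(const struct toy_vma *vma)
{
	return vma->active != 0;
}

static void toy_vma_set_active(struct toy_vma *vma, unsigned int engine_id)
{
	vma->active |= BIT(engine_id);
}
#endif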

static void eb_export_fence(struct drm_i915_gem_object *obj,
			    struct drm_i915_gem_request *req,
			    unsigned int flags)
{
	struct reservation_object *resv = obj->resv;

	/* Ignore errors from failing to allocate the new fence, we can't
	 * handle an error right now. Worst case should be missed
	 * synchronisation leading to rendering corruption.
	 */
	reservation_object_lock(resv, NULL);
	if (flags & EXEC_OBJECT_WRITE)
		reservation_object_add_excl_fence(resv, &req->fence);
	else if (reservation_object_reserve_shared(resv) == 0)
		reservation_object_add_shared_fence(resv, &req->fence);
	reservation_object_unlock(resv);
}
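
/*
 * Editorial sketch (not part of the original file): under reservation-object
 * semantics a write installs the exclusive fence (all later users wait on it)
 * while reads add shared fences (readers may overlap each other). Assuming
 * the reservation API of this era, a reader-side idle check would be:
 */
#if 0
static bool example_resv_is_idle(struct reservation_object *resv)
{
	/* true only once the exclusive and all shared fences have signaled */
	return reservation_object_test_signaled_rcu(resv, true);
}
#endif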

static void
i915_gem_execbuffer_move_to_active(struct list_head *vmas,
				   struct drm_i915_gem_request *req)
{
	struct i915_vma *vma;

	list_for_each_entry(vma, vmas, exec_list) {
		struct drm_i915_gem_object *obj = vma->obj;

		obj->base.write_domain = obj->base.pending_write_domain;
		if (obj->base.write_domain)
			vma->exec_entry->flags |= EXEC_OBJECT_WRITE;
		else
			obj->base.pending_read_domains |= obj->base.read_domains;
		obj->base.read_domains = obj->base.pending_read_domains;

		i915_vma_move_to_active(vma, req, vma->exec_entry->flags);
		eb_export_fence(obj, req, vma->exec_entry->flags);
	}
}

static int
i915_reset_gen7_sol_offsets(struct drm_i915_gem_request *req)
{
	u32 *cs;
	int i;

	if (!IS_GEN7(req->i915) || req->engine->id != RCS) {
		DRM_DEBUG("sol reset is gen7/rcs only\n");
		return -EINVAL;
	}

	cs = intel_ring_begin(req, 4 * 3);
	if (IS_ERR(cs))
		return PTR_ERR(cs);

	for (i = 0; i < 4; i++) {
		*cs++ = MI_LOAD_REGISTER_IMM(1);
		*cs++ = i915_mmio_reg_offset(GEN7_SO_WRITE_OFFSET(i));
		*cs++ = 0;
	}

	intel_ring_advance(req, cs);

	return 0;
}
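
/*
 * Editorial note (not part of the original file): each register write above
 * costs three dwords (LRI header, register offset, value) and four
 * GEN7_SO_WRITE_OFFSET registers are cleared, hence intel_ring_begin(req, 4 * 3).
 * For i == 0 the emitted dwords are:
 *
 *	MI_LOAD_REGISTER_IMM(1)
 *	i915_mmio_reg_offset(GEN7_SO_WRITE_OFFSET(0))
 *	0x00000000
 */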

static struct i915_vma *
i915_gem_execbuffer_parse(struct intel_engine_cs *engine,
			  struct drm_i915_gem_exec_object2 *shadow_exec_entry,
			  struct drm_i915_gem_object *batch_obj,
			  struct eb_vmas *eb,
			  u32 batch_start_offset,
			  u32 batch_len,
			  bool is_master)
{
	struct drm_i915_gem_object *shadow_batch_obj;
	struct i915_vma *vma;
	int ret;

	shadow_batch_obj = i915_gem_batch_pool_get(&engine->batch_pool,
						   PAGE_ALIGN(batch_len));
	if (IS_ERR(shadow_batch_obj))
		return ERR_CAST(shadow_batch_obj);

	ret = intel_engine_cmd_parser(engine,
				      batch_obj,
				      shadow_batch_obj,
				      batch_start_offset,
				      batch_len,
				      is_master);
	if (ret) {
		if (ret == -EACCES) /* unhandled chained batch */
			vma = NULL;
		else
			vma = ERR_PTR(ret);
		goto out;
	}

	vma = i915_gem_object_ggtt_pin(shadow_batch_obj, NULL, 0, 0, 0);
	if (IS_ERR(vma))
		goto out;

	memset(shadow_exec_entry, 0, sizeof(*shadow_exec_entry));

	vma->exec_entry = shadow_exec_entry;
	vma->exec_entry->flags = __EXEC_OBJECT_HAS_PIN;
	i915_gem_object_get(shadow_batch_obj);
	list_add_tail(&vma->exec_list, &eb->vmas);

out:
	i915_gem_object_unpin_pages(shadow_batch_obj);
	return vma;
}
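
/*
 * Editorial sketch (not part of the original file): the caller treats a NULL
 * return as "command parser declined" (the -EACCES unhandled-chained-batch
 * case) and falls back to executing the unparsed user batch, while ERR_PTR
 * values propagate as errors. A hypothetical caller shape:
 */
#if 0
	vma = i915_gem_execbuffer_parse(engine, &shadow_exec_entry,
					batch_obj, eb,
					args->batch_start_offset,
					args->batch_len,
					drm_is_current_master(file));
	if (IS_ERR(vma))
		return PTR_ERR(vma);
	if (vma)	/* execute the validated shadow copy instead */
		params->batch = vma;
#endif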

static void
add_to_client(struct drm_i915_gem_request *req,
	      struct drm_file *file)
{
	req->file_priv = file->driver_priv;
	list_add_tail(&req->client_link, &req->file_priv->mm.request_list);
}

static int
execbuf_submit(struct i915_execbuffer_params *params,
	       struct drm_i915_gem_execbuffer2 *args,
	       struct list_head *vmas)
{
	u64 exec_start, exec_len;
	int ret;

	ret = i915_gem_execbuffer_move_to_gpu(params->request, vmas);
	if (ret)
		return ret;

	ret = i915_switch_context(params->request);
	if (ret)
		return ret;

	if (args->flags & I915_EXEC_CONSTANTS_MASK) {
		DRM_DEBUG("I915_EXEC_CONSTANTS_* unsupported\n");
		return -EINVAL;
	}

	if (args->flags & I915_EXEC_GEN7_SOL_RESET) {
		ret = i915_reset_gen7_sol_offsets(params->request);
		if (ret)
			return ret;
	}

	exec_len   = args->batch_len;
	exec_start = params->batch->node.start +
		     params->args_batch_start_offset;

	if (exec_len == 0)
		exec_len = params->batch->size - params->args_batch_start_offset;

	ret = params->engine->emit_bb_start(params->request,
					    exec_start, exec_len,
					    params->dispatch_flags);
	if (ret)
		return ret;

	i915_gem_execbuffer_move_to_active(vmas, params->request);

	return 0;
}
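
/*
 * Editorial note (not part of the original file): args->batch_len == 0 means
 * "execute to the end of the batch", so exec_len defaults to whatever remains
 * past the start offset; e.g. a 16 KiB batch with a 4 KiB start offset
 * dispatches exec_len = 16384 - 4096 = 12288 bytes.
 */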

/**
 * Find one BSD ring to dispatch the corresponding BSD command.
 * The engine index is returned.
 */
static unsigned int
gen8_dispatch_bsd_engine(struct drm_i915_private *dev_priv,
			 struct drm_file *file)
{
	struct drm_i915_file_private *file_priv = file->driver_priv;

	/* Check whether the file_priv has already selected one ring. */
	if ((int)file_priv->bsd_engine < 0)
		file_priv->bsd_engine = atomic_fetch_xor(1,
			 &dev_priv->mm.bsd_engine_dispatch_index);

	return file_priv->bsd_engine;
}
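
/*
 * Editorial sketch (not part of the original file): atomic_fetch_xor(1, ...)
 * implements a two-way round-robin: each new client flips the shared index
 * and keeps the pre-flip value, so successive clients alternate between the
 * two BSD engines.
 */
#if 0
static unsigned int toy_round_robin(atomic_t *index)
{
	/* returns 0, 1, 0, 1, ... across successive callers */
	return atomic_fetch_xor(1, index) & 1;
}
#endif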

#define I915_USER_RINGS (4)

static const enum intel_engine_id user_ring_map[I915_USER_RINGS + 1] = {
	[I915_EXEC_DEFAULT]	= RCS,
	[I915_EXEC_RENDER]	= RCS,
	[I915_EXEC_BLT]		= BCS,
	[I915_EXEC_BSD]		= VCS,
	[I915_EXEC_VEBOX]	= VECS
};

static struct intel_engine_cs *
eb_select_engine(struct drm_i915_private *dev_priv,
		 struct drm_file *file,
		 struct drm_i915_gem_execbuffer2 *args)
{
	unsigned int user_ring_id = args->flags & I915_EXEC_RING_MASK;
	struct intel_engine_cs *engine;

	if (user_ring_id > I915_USER_RINGS) {
		DRM_DEBUG("execbuf with unknown ring: %u\n", user_ring_id);
		return NULL;
	}

	if ((user_ring_id != I915_EXEC_BSD) &&
	    ((args->flags & I915_EXEC_BSD_MASK) != 0)) {
		DRM_DEBUG("execbuf with non bsd ring but with invalid "
			  "bsd dispatch flags: %d\n", (int)(args->flags));
		return NULL;
	}

	if (user_ring_id == I915_EXEC_BSD && HAS_BSD2(dev_priv)) {
		unsigned int bsd_idx = args->flags & I915_EXEC_BSD_MASK;

		if (bsd_idx == I915_EXEC_BSD_DEFAULT) {
			bsd_idx = gen8_dispatch_bsd_engine(dev_priv, file);
		} else if (bsd_idx >= I915_EXEC_BSD_RING1 &&
			   bsd_idx <= I915_EXEC_BSD_RING2) {
			bsd_idx >>= I915_EXEC_BSD_SHIFT;
			bsd_idx--;
		} else {
			DRM_DEBUG("execbuf with unknown bsd ring: %u\n",
				  bsd_idx);
			return NULL;
		}

		engine = dev_priv->engine[_VCS(bsd_idx)];
	} else {
		engine = dev_priv->engine[user_ring_map[user_ring_id]];
	}

	if (!engine) {
		DRM_DEBUG("execbuf with invalid ring: %u\n", user_ring_id);
		return NULL;
	}

	return engine;
}
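
/*
 * Editorial note (not part of the original file): the explicit BSD selectors
 * occupy I915_EXEC_BSD_MASK (bits 13..14, I915_EXEC_BSD_SHIFT == 13) of
 * args->flags, so the shift-then-decrement above decodes:
 *
 *	I915_EXEC_BSD_RING1 (1 << 13): shift -> 1, decrement -> _VCS(0)
 *	I915_EXEC_BSD_RING2 (2 << 13): shift -> 2, decrement -> _VCS(1)
 */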
|
|
|
|
|

static int
i915_gem_do_execbuffer(struct drm_device *dev, void *data,
		       struct drm_file *file,
		       struct drm_i915_gem_execbuffer2 *args,
		       struct drm_i915_gem_exec_object2 *exec)
{
	struct drm_i915_private *dev_priv = to_i915(dev);
	struct i915_ggtt *ggtt = &dev_priv->ggtt;
	struct eb_vmas *eb;
	struct drm_i915_gem_exec_object2 shadow_exec_entry;
	struct intel_engine_cs *engine;
	struct i915_gem_context *ctx;
	struct i915_address_space *vm;
	struct i915_execbuffer_params params_master; /* XXX: will be removed later */
	struct i915_execbuffer_params *params = &params_master;
	const u32 ctx_id = i915_execbuffer2_get_context_id(*args);
	u32 dispatch_flags;
	struct dma_fence *in_fence = NULL;
	struct sync_file *out_fence = NULL;
	int out_fence_fd = -1;
	int ret;
	bool need_relocs;

	if (!i915_gem_check_execbuffer(args))
		return -EINVAL;

	ret = validate_exec_list(dev, exec, args->buffer_count);
	if (ret)
		return ret;

	dispatch_flags = 0;
	if (args->flags & I915_EXEC_SECURE) {
		if (!drm_is_current_master(file) || !capable(CAP_SYS_ADMIN))
			return -EPERM;

		dispatch_flags |= I915_DISPATCH_SECURE;
	}
	if (args->flags & I915_EXEC_IS_PINNED)
		dispatch_flags |= I915_DISPATCH_PINNED;

	engine = eb_select_engine(dev_priv, file, args);
	if (!engine)
		return -EINVAL;

	if (args->buffer_count < 1) {
		DRM_DEBUG("execbuf with %d buffers\n", args->buffer_count);
		return -EINVAL;
	}

	if (args->flags & I915_EXEC_RESOURCE_STREAMER) {
		if (!HAS_RESOURCE_STREAMER(dev_priv)) {
			DRM_DEBUG("RS is only allowed for Haswell, Gen8 and above\n");
			return -EINVAL;
		}
		if (engine->id != RCS) {
			DRM_DEBUG("RS is not available on %s\n",
				  engine->name);
			return -EINVAL;
		}

		dispatch_flags |= I915_DISPATCH_RS;
	}

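	/*
	 * Explicit fencing packs both fence fds into args->rsvd2: the
	 * lower 32 bits carry an optional in-fence to wait on before
	 * execution, and on success the upper 32 bits are filled in
	 * with the out-fence fd installed further below.
	 */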
	if (args->flags & I915_EXEC_FENCE_IN) {
		in_fence = sync_file_get_fence(lower_32_bits(args->rsvd2));
		if (!in_fence)
			return -EINVAL;
	}

	if (args->flags & I915_EXEC_FENCE_OUT) {
		out_fence_fd = get_unused_fd_flags(O_CLOEXEC);
		if (out_fence_fd < 0) {
			ret = out_fence_fd;
			goto err_in_fence;
		}
	}

	/* Take a local wakeref for preparing to dispatch the execbuf as
	 * we expect to access the hardware fairly frequently in the
	 * process. Upon first dispatch, we acquire another prolonged
	 * wakeref that we hold until the GPU has been idle for at least
	 * 100ms.
	 */
	intel_runtime_pm_get(dev_priv);

	ret = i915_mutex_lock_interruptible(dev);
	if (ret)
		goto pre_mutex_err;

	ctx = i915_gem_validate_context(dev, file, engine, ctx_id);
	if (IS_ERR(ctx)) {
		mutex_unlock(&dev->struct_mutex);
		ret = PTR_ERR(ctx);
		goto pre_mutex_err;
	}

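	/*
	 * The context was looked up before reserving the objects because
	 * the address space is tied to the context and is needed for the
	 * reserve step. Take a reference now: the slow relocation path
	 * drops struct_mutex, so a concurrent context-close could
	 * otherwise free the context from under us.
	 */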
	i915_gem_context_get(ctx);

	if (ctx->ppgtt)
		vm = &ctx->ppgtt->base;
	else
		vm = &ggtt->base;

	memset(&params_master, 0x00, sizeof(params_master));

	eb = eb_create(dev_priv, args);
	if (eb == NULL) {
		i915_gem_context_put(ctx);
		mutex_unlock(&dev->struct_mutex);
		ret = -ENOMEM;
		goto pre_mutex_err;
	}

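	/*
	 * The execbuf path speaks in VMAs (an object's binding in a given
	 * address space) rather than <object, vm> pairs; the lookup below
	 * resolves each user handle to its vma in 'vm'.
	 */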
	/* Look up object handles */
	ret = eb_lookup_vmas(eb, exec, args, vm, file);
	if (ret)
		goto err;

	/* take note of the batch buffer before we might reorder the lists */
	params->batch = eb_get_batch(eb);

	/* Move the objects en-masse into the GTT, evicting if necessary. */
	need_relocs = (args->flags & I915_EXEC_NO_RELOC) == 0;
	ret = i915_gem_execbuffer_reserve(engine, &eb->vmas, ctx,
					  &need_relocs);
	if (ret)
		goto err;

	/* The objects are in their final locations, apply the relocations. */
	if (need_relocs)
		ret = i915_gem_execbuffer_relocate(eb);
	if (ret) {
		if (ret == -EFAULT) {
			ret = i915_gem_execbuffer_relocate_slow(dev, args, file,
								engine,
								eb, exec, ctx);
			BUG_ON(!mutex_is_locked(&dev->struct_mutex));
		}
		if (ret)
			goto err;
	}

	/* Set the pending read domains for the batch buffer to COMMAND */
	if (params->batch->obj->base.pending_write_domain) {
		DRM_DEBUG("Attempting to use self-modifying batch buffer\n");
		ret = -EINVAL;
		goto err;
	}
	if (args->batch_start_offset > params->batch->size ||
	    args->batch_len > params->batch->size - args->batch_start_offset) {
		DRM_DEBUG("Attempting to use out-of-bounds batch\n");
		ret = -EINVAL;
		goto err;
	}

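	/*
	 * Engines that require command parsing may have the batch scanned
	 * and copied into a trusted shadow buffer; in that case the parser
	 * returns the shadow vma to dispatch in place of the user batch.
	 */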
	params->args_batch_start_offset = args->batch_start_offset;
	if (engine->needs_cmd_parser && args->batch_len) {
		struct i915_vma *vma;

		vma = i915_gem_execbuffer_parse(engine, &shadow_exec_entry,
						params->batch->obj,
						eb,
						args->batch_start_offset,
						args->batch_len,
						drm_is_current_master(file));
		if (IS_ERR(vma)) {
			ret = PTR_ERR(vma);
			goto err;
		}

		if (vma) {
			/*
			 * Batch parsed and accepted:
			 *
			 * Set the DISPATCH_SECURE bit to remove the NON_SECURE
			 * bit from MI_BATCH_BUFFER_START commands issued in
			 * the dispatch_execbuffer implementations. We
			 * specifically don't want that set on batches the
			 * command parser has accepted.
			 */
			dispatch_flags |= I915_DISPATCH_SECURE;
			params->args_batch_start_offset = 0;
			params->batch = vma;
		}
	}

	params->batch->obj->base.pending_read_domains |= I915_GEM_DOMAIN_COMMAND;

	/* snb/ivb/vlv conflate the "batch in ppgtt" bit with the "non-secure
	 * batch" bit. Hence we need to pin secure batches into the global gtt.
	 * hsw should have this fixed, but bdw mucks it up again. */
	if (dispatch_flags & I915_DISPATCH_SECURE) {
		struct drm_i915_gem_object *obj = params->batch->obj;
		struct i915_vma *vma;

		/*
		 * So on first glance it looks freaky that we pin the batch here
		 * outside of the reservation loop. But:
		 * - The batch is already pinned into the relevant ppgtt, so we
		 *   already have the backing storage fully allocated.
		 * - No other BO uses the global gtt (well contexts, but meh),
		 *   so we don't really have issues with multiple objects not
		 *   fitting due to fragmentation.
		 * So this is actually safe.
		 */
		vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
		if (IS_ERR(vma)) {
			ret = PTR_ERR(vma);
			goto err;
		}

		params->batch = vma;
	}

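	/*
	 * i915_gem_request_alloc() returns the new request or an ERR_PTR;
	 * there is no out-parameter. (Driver-internal callers may pass a
	 * NULL context as shorthand for the engine's default context.)
	 */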
	/* Allocate a request for this batch buffer nice and early. */
	params->request = i915_gem_request_alloc(engine, ctx);
	if (IS_ERR(params->request)) {
		ret = PTR_ERR(params->request);
		goto err_batch_unpin;
	}

	if (in_fence) {
		ret = i915_gem_request_await_dma_fence(params->request,
						       in_fence);
		if (ret < 0)
			goto err_request;
	}

	if (out_fence_fd != -1) {
		out_fence = sync_file_create(&params->request->fence);
		if (!out_fence) {
			ret = -ENOMEM;
			goto err_request;
		}
	}

	/* Whilst this request exists, batch_obj will be on the
	 * active_list, and so will hold the active reference. Only when this
	 * request is retired will the batch_obj be moved onto the
	 * inactive_list and lose its active reference. Hence we do not need
	 * to explicitly hold another reference here.
	 */
	params->request->batch = params->batch;

	/*
	 * Save assorted stuff away to pass through to *_submission().
	 * NB: This data should be 'persistent' and not local as it will
	 * be kept around beyond the duration of the IOCTL once the GPU
	 * scheduler arrives.
	 */
	params->dev = dev;
	params->file = file;
	params->engine = engine;
	params->dispatch_flags = dispatch_flags;
	params->ctx = ctx;

	trace_i915_gem_request_queue(params->request, dispatch_flags);

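	/*
	 * Note that from here the request is always committed via
	 * __i915_add_request(), even if submission failed: external state
	 * (e.g. active-object tracking) may already reference the request,
	 * so cancelling it would leave dangling pointers behind.
	 */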
	ret = execbuf_submit(params, args, &eb->vmas);
err_request:
	__i915_add_request(params->request, ret == 0);
	add_to_client(params->request, file);

	if (out_fence) {
		if (ret == 0) {
			fd_install(out_fence_fd, out_fence->file);
			args->rsvd2 &= GENMASK_ULL(31, 0); /* keep in-fence */
			args->rsvd2 |= (u64)out_fence_fd << 32;
			out_fence_fd = -1;
		} else {
			fput(out_fence->file);
		}
	}

err_batch_unpin:
	/*
	 * FIXME: We crucially rely upon the active tracking for the (ppgtt)
	 * batch vma for correctness. To be less ugly and less fragile, this
	 * needs to be adjusted to also track the ggtt batch vma properly as
	 * active.
	 */
	if (dispatch_flags & I915_DISPATCH_SECURE)
		i915_vma_unpin(params->batch);
err:
	/* the request owns the ref now */
	i915_gem_context_put(ctx);
	eb_destroy(eb);

	mutex_unlock(&dev->struct_mutex);

pre_mutex_err:
	/* intel_gpu_busy should also get a ref, so it will free when the device
	 * is really idle. */
	intel_runtime_pm_put(dev_priv);
	if (out_fence_fd != -1)
		put_unused_fd(out_fence_fd);
err_in_fence:
	dma_fence_put(in_fence);
	return ret;
}
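
/*
 * Example (a userspace sketch, not part of the driver): submitting a
 * batch with an explicit in-fence and requesting an out-fence via the
 * execbuf2 ioctl. Handles, fds and sizes here are illustrative only,
 * and receiving the out-fence assumes the _WR ioctl variant so that
 * rsvd2 can be copied back to userspace.
 *
 *	struct drm_i915_gem_exec_object2 objs[2] = {
 *		{ .handle = target_handle },
 *		{ .handle = batch_handle },	(the batch is the last entry)
 *	};
 *	struct drm_i915_gem_execbuffer2 eb = {
 *		.buffers_ptr = (uintptr_t)objs,
 *		.buffer_count = 2,
 *		.batch_len = batch_bytes,
 *		.flags = I915_EXEC_RENDER |
 *			 I915_EXEC_FENCE_IN | I915_EXEC_FENCE_OUT,
 *		.rsvd2 = (__u64)in_fence_fd,	(lower 32 bits: in-fence)
 *	};
 *
 *	if (drmIoctl(fd, DRM_IOCTL_I915_GEM_EXECBUFFER2_WR, &eb) == 0)
 *		out_fence_fd = eb.rsvd2 >> 32;	(upper 32 bits: out-fence)
 */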

/*
 * Legacy execbuffer just creates an exec2 list from the original exec object
 * list array and passes it to the real function.
 */
int
i915_gem_execbuffer(struct drm_device *dev, void *data,
		    struct drm_file *file)
{
	struct drm_i915_gem_execbuffer *args = data;
	struct drm_i915_gem_execbuffer2 exec2;
	struct drm_i915_gem_exec_object *exec_list = NULL;
	struct drm_i915_gem_exec_object2 *exec2_list = NULL;
	int ret, i;

	if (args->buffer_count < 1) {
		DRM_DEBUG("execbuf with %d buffers\n", args->buffer_count);
		return -EINVAL;
	}

	/* Copy in the exec list from userland */
	exec_list = drm_malloc_ab(sizeof(*exec_list), args->buffer_count);
	exec2_list = drm_malloc_ab(sizeof(*exec2_list), args->buffer_count);
	if (exec_list == NULL || exec2_list == NULL) {
		DRM_DEBUG("Failed to allocate exec list for %d buffers\n",
			  args->buffer_count);
		drm_free_large(exec_list);
		drm_free_large(exec2_list);
		return -ENOMEM;
	}
	ret = copy_from_user(exec_list,
			     u64_to_user_ptr(args->buffers_ptr),
			     sizeof(*exec_list) * args->buffer_count);
	if (ret != 0) {
		DRM_DEBUG("copy %d exec entries failed %d\n",
			  args->buffer_count, ret);
		drm_free_large(exec_list);
		drm_free_large(exec2_list);
		return -EFAULT;
	}

	for (i = 0; i < args->buffer_count; i++) {
		exec2_list[i].handle = exec_list[i].handle;
		exec2_list[i].relocation_count = exec_list[i].relocation_count;
		exec2_list[i].relocs_ptr = exec_list[i].relocs_ptr;
		exec2_list[i].alignment = exec_list[i].alignment;
		exec2_list[i].offset = exec_list[i].offset;
		if (INTEL_GEN(to_i915(dev)) < 4)
			exec2_list[i].flags = EXEC_OBJECT_NEEDS_FENCE;
		else
			exec2_list[i].flags = 0;
	}

	exec2.buffers_ptr = args->buffers_ptr;
	exec2.buffer_count = args->buffer_count;
	exec2.batch_start_offset = args->batch_start_offset;
	exec2.batch_len = args->batch_len;
	exec2.DR1 = args->DR1;
	exec2.DR4 = args->DR4;
	exec2.num_cliprects = args->num_cliprects;
	exec2.cliprects_ptr = args->cliprects_ptr;
	exec2.flags = I915_EXEC_RENDER;
	i915_execbuffer2_set_context_id(exec2, 0);

	ret = i915_gem_do_execbuffer(dev, data, file, &exec2, exec2_list);
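	/*
	 * gen8_canonical_addr() below sign-extends bit 47 of each offset
	 * so the value reported back to userspace is in the canonical
	 * address form expected on gen8+.
	 */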
	if (!ret) {
		struct drm_i915_gem_exec_object __user *user_exec_list =
			u64_to_user_ptr(args->buffers_ptr);

		/* Copy the new buffer offsets back to the user's exec list. */
		for (i = 0; i < args->buffer_count; i++) {
			exec2_list[i].offset =
				gen8_canonical_addr(exec2_list[i].offset);
			ret = __copy_to_user(&user_exec_list[i].offset,
					     &exec2_list[i].offset,
					     sizeof(user_exec_list[i].offset));
			if (ret) {
				ret = -EFAULT;
				DRM_DEBUG("failed to copy %d exec entries "
					  "back to user (%d)\n",
					  args->buffer_count, ret);
				break;
			}
		}
	}

	drm_free_large(exec_list);
	drm_free_large(exec2_list);
	return ret;
}

int
i915_gem_execbuffer2(struct drm_device *dev, void *data,
		     struct drm_file *file)
{
	struct drm_i915_gem_execbuffer2 *args = data;
	struct drm_i915_gem_exec_object2 *exec2_list = NULL;
	int ret;

	if (args->buffer_count < 1 ||
	    args->buffer_count > UINT_MAX / sizeof(*exec2_list)) {
		DRM_DEBUG("execbuf2 with %d buffers\n", args->buffer_count);
		return -EINVAL;
	}

	exec2_list = drm_malloc_gfp(args->buffer_count,
				    sizeof(*exec2_list),
				    GFP_TEMPORARY);
	if (exec2_list == NULL) {
		DRM_DEBUG("Failed to allocate exec list for %d buffers\n",
			  args->buffer_count);
		return -ENOMEM;
	}
	ret = copy_from_user(exec2_list,
			     u64_to_user_ptr(args->buffers_ptr),
			     sizeof(*exec2_list) * args->buffer_count);
	if (ret != 0) {
		DRM_DEBUG("copy %d exec entries failed %d\n",
			  args->buffer_count, ret);
		drm_free_large(exec2_list);
		return -EFAULT;
	}

	ret = i915_gem_do_execbuffer(dev, data, file, args, exec2_list);
	if (!ret) {
		/* Copy the new buffer offsets back to the user's exec list. */
		struct drm_i915_gem_exec_object2 __user *user_exec_list =
			u64_to_user_ptr(args->buffers_ptr);
		int i;

		for (i = 0; i < args->buffer_count; i++) {
			exec2_list[i].offset =
				gen8_canonical_addr(exec2_list[i].offset);
			ret = __copy_to_user(&user_exec_list[i].offset,
					     &exec2_list[i].offset,
					     sizeof(user_exec_list[i].offset));
			if (ret) {
				ret = -EFAULT;
				DRM_DEBUG("failed to copy %d exec entries "
					  "back to user\n",
					  args->buffer_count);
				break;
			}
		}
	}

	drm_free_large(exec2_list);
	return ret;
}