The "err" label is not really "err", but rather "out" since the return
path is shared between error condition and normal path. This broke when
commit 03cea61076 ("drm/i915/dsb: fix extra warning on error path
handling") added a "dsb->cmd_buf = NULL;" there, making DSB to stop
working since now all writes would pass-through via mmio.
Remove the set to NULL since it's actually not needed: we only set it if
all steps are successful. While at it, rename the label so this confusion
doesn't happen again.
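To illustrate, a minimal userspace sketch with made-up names (not the
real i915 code): the shared label is reached on success as well as on
failure, so clearing cmd_buf there undoes the assignment made just
above it and silently forces every write through the mmio fallback.

  #include <stdlib.h>

  struct dsb {
          int refcount;
          void *cmd_buf;  /* NULL => writes fall back to mmio */
  };

  struct dsb *dsb_get(struct dsb *dsb)
  {
          void *buf;

          if (dsb->refcount++ != 0)
                  return dsb;

          buf = malloc(4096);
          if (!buf)
                  goto out;       /* failure: cmd_buf must stay NULL */

          dsb->cmd_buf = buf;     /* set only once every step succeeded */
  out:                            /* shared exit: do NOT clear cmd_buf here */
          return dsb;
  }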
Fixes: 03cea61076 ("drm/i915/dsb: fix extra warning on error path handling")
Resolves: https://gitlab.freedesktop.org/drm/intel/issues/8
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Animesh Manna <animesh.manna@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191127221119.384754-1-lucas.demarchi@intel.com
When we call intel_dsb_get(), the dsb initialization may fail for
various reasons. We already log an error message in that path, making
it unnecessary to also trigger a warning that refcount == 0 when
calling intel_dsb_put().
So simplify the logic and do a lazy shutdown: leave the extra refcount
alive so that when we call intel_dsb_put() we end up calling
i915_vma_unpin_and_release().
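A hedged sketch of the matching put path, reusing the made-up struct
dsb from the sketch above (free() stands in for
i915_vma_unpin_and_release()): the reference left over from a failed
init is dropped like any other, so the final put still releases the
buffer.

  void dsb_put(struct dsb *dsb)
  {
          if (--dsb->refcount == 0) {
                  free(dsb->cmd_buf);     /* release stand-in */
                  dsb->cmd_buf = NULL;
          }
  }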
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191111205024.22853-3-lucas.demarchi@intel.com
The current dsb API is not really prepared to handle multithreaded access.
I was debugging an issue that ended up fixed by commit a096883dda
("drm/i915/dsb: Remove PIN_MAPPABLE from the DSB object VMA") and was
puzzled how these atomic operations were guaranteeing atomicity.
  if (atomic_add_return(1, &dsb->refcount) != 1)
          return dsb;
Thread A could still be initializing the dsb struct (and could even
fail in the middle) while thread B takes a reference and uses it (even
dereferencing a NULL cmd_buf).
I don't think the atomic operations here will help much if this were
to support a multithreaded scenario in the future, so just remove them
to avoid confusion.
v2: Use refcount++ != 0 instead of ++refcount != 1 (from Ville)
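Illustratively, with a plain int refcount the v2 form reads:

  if (dsb->refcount++ != 0)
          return dsb;

The post-increment tests the old value, so only the very first caller
falls through to do the initialization.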
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191111205024.22853-2-lucas.demarchi@intel.com
Link: https://patchwork.freedesktop.org/patch/msgid/20191116011539.18230-1-lucas.demarchi@intel.com
Replace the struct_mutex requirement for pinning the i915_vma with the
local vm->mutex instead. Note that the vm->mutex is tainted by the
shrinker (we require unbinding from inside fs-reclaim) and so we cannot
allocate while holding that mutex. Instead we have to preallocate
workers to allocate and apply the PTE updates after we have reserved
their slot in the drm_mm (using fences to order the PTE writes
with the GPU work and with later unbind).
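A userspace model of that rule (all names made up): allocate everything
first, then take the mutex only to reserve the slot and apply the
pre-built updates.

  #include <pthread.h>
  #include <stdlib.h>

  static pthread_mutex_t vm_mutex = PTHREAD_MUTEX_INITIALIZER;

  struct pte_work {
          unsigned long *ptes;    /* updates prepared outside the lock */
          size_t count;
  };

  int bind_vma(size_t npte)
  {
          struct pte_work *w = calloc(1, sizeof(*w));     /* preallocate... */

          if (!w)
                  return -1;
          w->ptes = calloc(npte, sizeof(*w->ptes));
          if (!w->ptes) {
                  free(w);
                  return -1;
          }
          w->count = npte;

          pthread_mutex_lock(&vm_mutex);  /* ...so nothing allocates here */
          /* reserve the drm_mm slot and write the PTEs */
          pthread_mutex_unlock(&vm_mutex);

          free(w->ptes);
          free(w);
          return 0;
  }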
In adding the asynchronous vma binding, one subtle requirement is to
avoid coupling the binding fence into the backing object->resv. That is,
the asynchronous binding only applies to the vma timeline itself and not
to the pages as that is a more global timeline (the binding of one vma
does not need to be ordered with another vma, nor does the implicit GEM
fencing depend on a vma, only on writes to the backing store). Keeping
the vma binding distinct from the backing store timelines is verified by
a number of async gem_exec_fence and gem_exec_schedule tests. The way we
do this is quite simple, we keep the fence for the vma binding separate
and only wait on it as required, and never add it to the obj->resv
itself.
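Structurally, the separation looks roughly like this (the sketch_*
names are made up; dma_fence is the real kernel primitive): the binding
fence lives on the vma and is waited on explicitly only where the
binding must be complete, and is never attached to obj->resv.

  #include <linux/dma-fence.h>

  struct sketch_vma {
          struct dma_fence *bind_fence;   /* vma-local timeline */
          struct sketch_obj *obj;         /* obj->resv never sees bind_fence */
  };

  static void sketch_wait_for_bind(struct sketch_vma *vma)
  {
          if (vma->bind_fence)
                  dma_fence_wait(vma->bind_fence, false); /* only as required */
  }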
Another consequence of reducing the locking around the vma is that its
destruction is no longer globally serialised by struct_mutex.
A natural solution would be to add a kref to i915_vma, but that requires
decoupling the reference cycles, possibly by introducing a new
i915_mm_pages object that is owned by both obj->mm and vma->pages.
However, we have not taken that route due to the overshadowing lmem/ttm
discussions, and instead play a series of complicated games with
trylocks to (hopefully) ensure that only one destruction path is called!
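A userspace model of one such trylock game (illustrative): whichever
racing path wins the trylock performs the teardown, exactly once; the
loser backs off.

  #include <pthread.h>
  #include <stdbool.h>

  static pthread_mutex_t destroy_lock = PTHREAD_MUTEX_INITIALIZER;
  static bool destroyed;

  bool try_destroy_vma(void)
  {
          if (pthread_mutex_trylock(&destroy_lock) != 0)
                  return false;   /* another path is already tearing down */
          if (!destroyed) {
                  /* release the PTEs, drop the pages, free the vma */
                  destroyed = true;
          }
          pthread_mutex_unlock(&destroy_lock);
          return true;
  }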
v2: Add some commentary, and some helpers to reduce patch churn.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20191004134015.13204-4-chris@chris-wilson.co.uk
This patch adds a function which internally gets the GEM buffer for
the DSB engine. The GEM buffer is allocated from the global GTT and
mapped into the CPU domain; it contains the data + opcodes to be fed
to the DSB engine.
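A condensed sketch of that allocation path (the helper name is
illustrative and error unwinding is trimmed; the GEM calls are the
actual i915 helpers used):

  static u32 *dsb_get_cmd_buf(struct drm_i915_private *i915,
                              struct i915_vma **out_vma)
  {
          struct drm_i915_gem_object *obj;
          struct i915_vma *vma;
          u32 *buf;

          obj = i915_gem_object_create_internal(i915, DSB_BUF_SIZE);
          if (IS_ERR(obj))
                  return NULL;

          vma = i915_gem_object_ggtt_pin(obj, NULL, 0, 0, 0);
          if (IS_ERR(vma)) {
                  i915_gem_object_put(obj);
                  return NULL;
          }

          buf = i915_gem_object_pin_map(obj, I915_MAP_WC);
          if (IS_ERR(buf)) {
                  i915_vma_unpin_and_release(&vma, 0);    /* see v3 below */
                  return NULL;
          }

          *out_vma = vma;         /* stays pinned while the DSB is in use */
          return buf;
  }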
v1: Initial version.
v2:
- removed some unwanted code. (Chris)
- Used i915_gem_object_create_internal instead of _shmem. (Chris)
- cmd_buf_tail removed and can be derived through vma object. (Chris)
v3: vma released if i915_gem_object_pin_map() failed. (Shashank)
v4: for simplification, and based on current usage, added a single dsb
object in intel_crtc. (Shashank)
v5: setting cmd_buf to NULL moved outside of the mutex in
intel_dsb_put(). (Shashank)
v6:
- refcount mechanism added.
- Used atomic_add_return and atomic_dec_and_test instead of
atomic_inc and atomic_dec. (Jani)
Cc: Imre Deak <imre.deak@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Shashank Sharma <shashank.sharma@intel.com>
Signed-off-by: Animesh Manna <animesh.manna@intel.com>
[Jani: added #include <linux/types.h> while pushing]
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190920115930.27829-3-animesh.manna@intel.com