drm/i915: Prevent bonded requests from overtaking each other on preemption

Force bonded requests to run on distinct engines so that they cannot be
shuffled onto the same engine where timeslicing will reverse the order.
A bonded request will often wait on a semaphore signaled by its master,
creating an implicit dependency -- if we ignore that implicit dependency
and allow the bonded request to run on the same engine as, and ahead of,
its master, we will cause a GPU hang. [Whether it will actually hang the
GPU is debatable; we should keep on timeslicing, and each timeslice should
be "accidentally" counted as forward progress, in which case it should
run, but at one-half to one-third speed.]
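
To make the failure mode concrete, here is a toy, single-"engine"
round-robin analogy (plain userspace C with made-up names, not the i915
code): the bond is scheduled ahead of the master, so every slice it gets
is spent polling a semaphore the master has not yet signalled.

	#include <stdbool.h>
	#include <stdio.h>

	static bool semaphore;		/* written by the master, polled by the bond */
	static int master_steps = 3;	/* work remaining in the master request */

	static void run_bond_slice(void)
	{
		if (!semaphore)
			printf("bond: slice spent polling the semaphore\n");
		else
			printf("bond: semaphore signalled, doing real work\n");
	}

	static void run_master_slice(void)
	{
		if (master_steps == 0)
			return;		/* master already finished */
		if (--master_steps == 0) {
			semaphore = true;
			printf("master: done, semaphore signalled\n");
		} else {
			printf("master: %d steps of work left\n", master_steps);
		}
	}

	int main(void)
	{
		/* bond first, master second: the inverted order */
		for (int slice = 0; slice < 6; slice++) {
			run_bond_slice();
			run_master_slice();
		}
		return 0;
	}

With timeslicing, the master still completes, but only at the reduced
rate described above; if the inverted order were never preempted, the
bond would spin forever and the GPU would hang.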

We can prevent this inversion by restricting which engines we allow
ourselves to jump to upon preemption, i.e. baking in the arrangement
established at first execution. (We should also consider capturing the
implicit dependency using i915_sched_add_dependency(), but first we need
to think about the constraints that requires on the execution/retirement
ordering.)
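
The lockless narrowing of the execution mask in the patch below is the
usual compare-and-exchange retry loop. As a minimal, self-contained
sketch of the same pattern, using C11 atomics in place of the kernel's
try_cmpxchg() (restrict_execution_mask and engine_mask_t are made-up
names, not i915 API):

	#include <stdatomic.h>
	#include <stdint.h>

	typedef uint32_t engine_mask_t;

	/*
	 * Clear bits from the mask without taking a lock: retry the
	 * compare-and-exchange until no one else has modified the mask
	 * underneath us.  On failure, 'exec' is reloaded with the
	 * current value, so each retry applies 'allowed' to fresh data.
	 */
	static void restrict_execution_mask(_Atomic engine_mask_t *mask,
					    engine_mask_t allowed)
	{
		engine_mask_t exec = atomic_load(mask);

		while (!atomic_compare_exchange_weak(mask, &exec,
						     exec & allowed))
			;
	}

The hunk below does the same with READ_ONCE() and try_cmpxchg() on
rq->execution_mask, and additionally strips the bonded engines from the
master's execution_mask so the pair can never collapse onto one engine.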

Fixes: 8ee36e048c ("drm/i915/execlists: Minimalistic timeslicing")
References: ee1136908e ("drm/i915/execlists: Virtual engine bonding")
Testcase: igt/gem_exec_balancer/bonded-slice
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190923152844.8914-3-chris@chris-wilson.co.uk
(cherry picked from commit e2144503bf)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>

@@ -3630,18 +3630,22 @@ static void
 virtual_bond_execute(struct i915_request *rq, struct dma_fence *signal)
 {
 	struct virtual_engine *ve = to_virtual_engine(rq->engine);
+	intel_engine_mask_t allowed, exec;
 	struct ve_bond *bond;
 
+	allowed = ~to_request(signal)->engine->mask;
+
 	bond = virtual_find_bond(ve, to_request(signal)->engine);
-	if (bond) {
-		intel_engine_mask_t old, new, cmp;
-
-		cmp = READ_ONCE(rq->execution_mask);
-		do {
-			old = cmp;
-			new = cmp & bond->sibling_mask;
-		} while ((cmp = cmpxchg(&rq->execution_mask, old, new)) != old);
-	}
+	if (bond)
+		allowed &= bond->sibling_mask;
+
+	/* Restrict the bonded request to run on only the available engines */
+	exec = READ_ONCE(rq->execution_mask);
+	while (!try_cmpxchg(&rq->execution_mask, &exec, exec & allowed))
+		;
+
+	/* Prevent the master from being re-run on the bonded engines */
+	to_request(signal)->execution_mask &= ~allowed;
 }
 
 struct intel_context *