drm/i915: Check for awaits on still currently executing requests

With the advent of preempt-to-busy, a request may still be on the GPU as
we unwind. And in the case of an unpreemptible [due to HW] request,
that request will remain indefinitely on the GPU even though we have
returned it to our submission queue and cleared the active bit.

We only run the execution callbacks on transferring the request from our
submission queue to the execution queue, but if this is a bonded request
that the HW is waiting for, we will not submit it (as we wait for a
fresh execution) even though it is still being executed.

As we know that there are always preemption points between requests, we
know that only the currently executing request may still be active even
though we have cleared the flag. However, we do not precisely know which
request is in ELSP[0] due to a delay in processing events, and
furthermore we only store the last request of a context in our state
tracker.
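
As an illustration of the approach taken below, here is a minimal
standalone sketch (userspace C, hypothetical names throughout, not i915
code) of why the cleared active flag alone cannot be trusted and why
scanning the requests the HW is still processing, combined with a seqno
comparison, gives the right answer:

/* Standalone model, not i915 code; all names here are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

struct fake_rq {
	int context;		/* owning context id */
	unsigned int seqno;	/* breadcrumb of this request */
	bool active;		/* cleared when unwound (preempt-to-busy) */
};

/* Has seqno a advanced at least as far as b? (wrap handling elided) */
static bool seqno_passed(unsigned int a, unsigned int b)
{
	return (int)(a - b) >= 0;
}

/*
 * Walk the NULL-terminated list of requests the HW is still chewing on
 * (a stand-in for the ELSP ports). If a request from the signaler's
 * context is found at or beyond the signaler's seqno, the signaler is
 * still executing, no matter what its active flag says.
 */
static bool request_in_flight(struct fake_rq *const *ports,
			      const struct fake_rq *signal)
{
	for (; *ports; ports++) {
		if ((*ports)->context == signal->context)
			return seqno_passed((*ports)->seqno, signal->seqno);
	}

	return false;
}

int main(void)
{
	/* Unwound (active cleared), yet still occupying a port. */
	struct fake_rq signal = { .context = 1, .seqno = 8, .active = false };
	struct fake_rq *ports[] = { &signal, NULL };

	printf("active=%d, in flight=%d\n",
	       signal.active, request_in_flight(ports, &signal));
	return 0;
}

The real __request_in_flight() added below also bails out early when the
signaler is not ready or its context is not marked as inflight, and it
walks the engine's execlists ports under rcu_read_lock().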

Fixes: 22b7a426bb ("drm/i915/execlists: Preempt-to-busy")
Testcase: igt/gem_exec_balancer/bonded-dual
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20200529143926.3245-1-chris@chris-wilson.co.uk
(cherry picked from commit b55230e5e8)
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>

@@ -357,6 +357,53 @@ void i915_request_retire_upto(struct i915_request *rq)
 	} while (i915_request_retire(tmp) && tmp != rq);
 }
 
+static struct i915_request * const *
+__engine_active(struct intel_engine_cs *engine)
+{
+	return READ_ONCE(engine->execlists.active);
+}
+
+static bool __request_in_flight(const struct i915_request *signal)
+{
+	struct i915_request * const *port, *rq;
+	bool inflight = false;
+
+	if (!i915_request_is_ready(signal))
+		return false;
+
+	/*
+	 * Even if we have unwound the request, it may still be on
+	 * the GPU (preempt-to-busy). If that request is inside an
+	 * unpreemptible critical section, it will not be removed. Some
+	 * GPU functions may even be stuck waiting for the paired request
+	 * (__await_execution) to be submitted and cannot be preempted
+	 * until the bond is executing.
+	 *
+	 * As we know that there are always preemption points between
+	 * requests, we know that only the currently executing request
+	 * may still be active even though we have cleared the flag.
+	 * However, we can't rely on our tracking of ELSP[0] to know
+	 * which request is currently active and so may be stuck, as
+	 * the tracking may be an event behind. Instead assume that
+	 * if the context is still inflight, then it is still active
+	 * even if the active flag has been cleared.
+	 */
+	if (!intel_context_inflight(signal->context))
+		return false;
+
+	rcu_read_lock();
+	for (port = __engine_active(signal->engine); (rq = *port); port++) {
+		if (rq->context == signal->context) {
+			inflight = i915_seqno_passed(rq->fence.seqno,
+						     signal->fence.seqno);
+			break;
+		}
+	}
+	rcu_read_unlock();
+
+	return inflight;
+}
+
 static int
 __await_execution(struct i915_request *rq,
 		  struct i915_request *signal,
@@ -387,7 +434,7 @@ __await_execution(struct i915_request *rq,
 	}
 
 	spin_lock_irq(&signal->lock);
-	if (i915_request_is_active(signal)) {
+	if (i915_request_is_active(signal) || __request_in_flight(signal)) {
 		if (hook) {
 			hook(rq, &signal->fence);
 			i915_request_put(signal);