drm/i915: Remove the global cache shrink & rcu barrier on allocation failure
Earlier, we reasoned that, having idled the GPU under mempressure, it would be a good time to trim our request slabs before performing the next request allocation. We have since stopped performing the global operation on the device (no idling) and wish to make the allocation-failure handling more local, so out goes the global barrier that may take a long time.

Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181005080300.9908-2-chris@chris-wilson.co.uk
commit 33373258cf
parent 88a83f3c2d
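For context, the pattern being removed is the heavyweight last-resort recovery for a SLAB_TYPESAFE_BY_RCU slab: on allocation failure, compact the whole cache and wait for an RCU grace period so that pages held back by the typesafe-by-RCU semantics can actually be returned, then retry once. A minimal sketch of that pattern, using a hypothetical request_cache/my_request rather than the real i915 structures, might look like this:

	#include <linux/errno.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	/* Hypothetical request type and cache, standing in for i915->requests. */
	struct my_request {
		int dummy;
	};

	static struct kmem_cache *request_cache;

	static int request_cache_init(void)
	{
		/* TYPESAFE_BY_RCU: objects may be reused immediately, but the
		 * backing pages are only freed after an RCU grace period. */
		request_cache = KMEM_CACHE(my_request, SLAB_TYPESAFE_BY_RCU);
		return request_cache ? 0 : -ENOMEM;
	}

	static struct my_request *request_alloc(void)
	{
		struct my_request *rq;

		rq = kmem_cache_alloc(request_cache,
				      GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN);
		if (likely(rq))
			return rq;

		/*
		 * The heavyweight recovery this commit drops: compact the slab
		 * and wait out an RCU grace period so pages pinned by
		 * SLAB_TYPESAFE_BY_RCU can be reclaimed, then retry once.
		 */
		kmem_cache_shrink(request_cache);
		rcu_barrier();

		return kmem_cache_alloc(request_cache, GFP_KERNEL);
	}

The cost of that fallback is global: kmem_cache_shrink() walks the cache's partial slabs and rcu_barrier() waits for all outstanding RCU callbacks system-wide, which is exactly the latency the commit wants out of the allocation path.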
@@ -655,17 +655,6 @@ i915_request_alloc(struct intel_engine_cs *engine, struct i915_gem_context *ctx)
 		if (rq)
 			cond_synchronize_rcu(rq->rcustate);
 
-		/*
-		 * We've forced the client to stall and catch up with whatever
-		 * backlog there might have been. As we are assuming that we
-		 * caused the mempressure, now is an opportune time to
-		 * recover as much memory from the request pool as is possible.
-		 * Having already penalized the client to stall, we spend
-		 * a little extra time to re-optimise page allocation.
-		 */
-		kmem_cache_shrink(i915->requests);
-		rcu_barrier(); /* Recover the TYPESAFE_BY_RCU pages */
-
 		rq = kmem_cache_alloc(i915->requests, GFP_KERNEL);
 		if (!rq) {
 			ret = -ENOMEM;
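What remains in the failure path is the more local throttle retained above: cond_synchronize_rcu(rq->rcustate), which only waits if the RCU grace period recorded when that request was created has not yet elapsed. A rough sketch of the underlying get_state_synchronize_rcu()/cond_synchronize_rcu() pairing, again with hypothetical names rather than the i915 code, is:

	#include <linux/rcupdate.h>

	struct my_request {
		/* Snapshot of the RCU grace-period state at creation time. */
		unsigned long rcustate;
	};

	static void request_init(struct my_request *rq)
	{
		/* Cheap: just records the current grace-period cookie. */
		rq->rcustate = get_state_synchronize_rcu();
	}

	static void throttle_on(struct my_request *rq)
	{
		/*
		 * Block only if the grace period captured at creation has not
		 * yet completed; otherwise this returns immediately.
		 */
		cond_synchronize_rcu(rq->rcustate);
	}

If the grace period has already completed, cond_synchronize_rcu() is effectively free, so a well-behaved client pays nothing; only a client that is hammering the request cache ends up stalled, without any device-wide shrink or barrier.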