Only has one user and is nothing more than a shim on top of
i915_vma_unbind, so let's just get rid of it.
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461842691-27575-1-git-send-email-matthew.auld@intel.com
Right now MST audio is causing too many kernel panics to really keep
around in the kernel. On top of that, even after fixing said panics it's
still basically non-functional (at least on all the setups I've tested
it on). Revert until we have a proper solution for this.
This reverts commit 3d52ccf52f.
Signed-off-by: Lyude <cpaul@redhat.com>
Fixes: 3d52ccf52f ("drm/i915: start adding dp mst audio")
Cc: stable@vger.kernel.org
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1462287692-28570-1-git-send-email-cpaul@redhat.com
While browsing BSpec I bumped into a note saying we need to tune these
values based on actual measurements done after initial enabling. I've
checked that it indeed improves things on BXT. I haven't checked this on
CHV, but here it is if someone wants to give it a go.
v2:
- Add note about the discrepancy wrt. the spec in the formula
calculating the credit encodings. (Mika, Ville)
- Move the WA comment to the new function. (Ville)
v3:
- Keep the comment about the SQC WA in the caller. (Ville)
CC: Ville Syrjälä <ville.syrjala@linux.intel.com>
CC: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1462280061-1457-4-git-send-email-imre.deak@intel.com
No need for hard-coding the register value, the corresponding fields are
defined properly in BSpec.
No functional change.
v2:
- Rebased on BXT L3 SQC tuning patch merged meanwhile.
CC: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> (v1)
Link: http://patchwork.freedesktop.org/patch/msgid/1462280061-1457-3-git-send-email-imre.deak@intel.com
i915_gem_shrink() will scan the bound list only if the device is not
suspended, but in an OOM failure scenario it becomes absolutely necessary
to release as much memory as possible. Likewise, on an allocation failure
from the vmap address space, it is incumbent on the driver to reap all of
its vmaps. So add rpm get/put in i915_gem_shrinker_oom() and
i915_gem_shrinker_vmap() to ensure bound objects are shrunk as well.
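A minimal sketch of the idea, assuming the existing oom notifier layout
(the exact handler bodies in the patch may differ):

    static int
    i915_gem_shrinker_oom(struct notifier_block *nb, unsigned long event, void *ptr)
    {
        struct drm_i915_private *dev_priv =
            container_of(nb, struct drm_i915_private, mm.oom_notifier);
        unsigned long freed;

        /* Wake the device so bound objects can be released as well */
        intel_runtime_pm_get(dev_priv);
        freed = i915_gem_shrink_all(dev_priv);
        intel_runtime_pm_put(dev_priv);

        /* ... report the freed pages back via *(unsigned long *)ptr ... */
        return NOTIFY_DONE;
    }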
Signed-off-by: Praveen Paneri <praveen.paneri@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1462178429-13449-2-git-send-email-praveen.paneri@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
When the system is running low on memory, the gem shrinker is invoked.
In this process objects will be unbound from the GTT, and the unbinding
process requires access to the GTT (GTTADR) and potentially also to the
fence registers. That in turn requires resuming the gfx device, if it is
suspended, in the shrinker path.
To avoid the power cost of such an intermediate resume, perform the
unbinding operation only if the device is already runtime active.
v2: Use newly implemented intel_runtime_pm_get_if_in_use (Chris)
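A simplified sketch of the check in the shrink path, using the runtime pm
helper named in v2 (the real change only skips the bound-list scan rather
than bailing out entirely):

    /* Only touch bound objects if the device is already awake;
     * don't wake it up just to shrink. */
    if (!intel_runtime_pm_get_if_in_use(dev_priv))
        return 0;

    /* ... unbind and shrink objects bound into the GTT ... */

    intel_runtime_pm_put(dev_priv);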
Signed-off-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Praveen Paneri <praveen.paneri@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1462178429-13449-1-git-send-email-praveen.paneri@intel.com
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
The LVDS border enable is independent from the panel fitter. Move the
readout of the "border bits" from i9xx_get_pfit_config() to
intel_lvds_get_config(), where it will be read if LVDS is enabled even
if the panel fitter is not.
This fixes the state checker warning:
[drm:intel_pipe_config_compare [i915]] *ERROR* mismatch in
gmch_pfit.lvds_border_bits (expected 0x00008000, found 0x00000000)
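A minimal sketch of the moved readout (register and bit names as used
elsewhere in intel_lvds.c; the exact hunk may differ):

    /* In intel_lvds_get_config(): read the border bits whenever LVDS is
     * enabled, independently of the panel fitter state. */
    pipe_config->gmch_pfit.lvds_border_bits =
        I915_READ(lvds_encoder->reg) & LVDS_BORDER_ENABLE;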
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: drm-intel-fixes@lists.freedesktop.org
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=87632
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Tested-by: Sitsofe Wheeler <sitsofe@yahoo.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461933243-2140-1-git-send-email-jani.nikula@intel.com
When the engine idles waiting upon a semaphore, it loses its
pagetables and we must reload them before executing the batch.
v2: Restrict w/a to non-RCS rings (RCS works correctly apparently).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461932305-14637-5-git-send-email-chris@chris-wilson.co.uk
In order for the MI_SEMAPHORE_SIGNAL command to wait until after the
pipecontrol writing the signal value is complete, we have to pause the
CS inside the PIPE_CONTROL with the CS_STALL bit.
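Roughly, the signalling sequence gains the stall bit in the PIPE_CONTROL
flags (flag names as in i915_reg.h; the full emission also includes the
GTT address and seqno to write):

    intel_ring_emit(signaller, GFX_OP_PIPE_CONTROL(6));
    intel_ring_emit(signaller, PIPE_CONTROL_GLOBAL_GTT_IVB |
                               PIPE_CONTROL_QW_WRITE |
                               PIPE_CONTROL_CS_STALL);
    /* ... lower/upper address dwords and the signal value follow ... */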
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461932305-14637-4-git-send-email-chris@chris-wilson.co.uk
With the introduction of a distinct engine->id vs the hardware id, we need
to fix up the value we use for selecting the target engine when signaling
a semaphore. Note that these values can be merged with engine->guc_id.
Fixes: de1add3605
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461932305-14637-3-git-send-email-chris@chris-wilson.co.uk
For legacy ringbuffer mode, we need the new ordered breadcrumb emission
tried and tested on execlists in order to avoid the dreaded "missed
interrupt" syndrome. A secondary advantage of the execlists method is
that it writes to an arbitrary address, useful if one wants to write a
breadcrumb elsewhere.
This fix is taken from commit 7c17d37737 (drm/i915: Use ordered seqno
write interrupt generation on gen8+ execlists).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461932305-14637-1-git-send-email-chris@chris-wilson.co.uk
At the start of request emission, we flush some space for the request,
estimating the typical size for the request body. The common tail is now
much larger than the typical body, so we can shrink the flush
substantially.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461917226-9132-3-git-send-email-chris@chris-wilson.co.uk
At the start of request emission, we flush some space for the request,
estimating the typical size for the request body. The tail is now much
larger than the typical body, so we can shrink the flush slightly.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461917226-9132-2-git-send-email-chris@chris-wilson.co.uk
With 5 rings and a flush, we need 192 bytes of space to emit the
breadcrumb and semaphores. However, we also need some spare room the size
of the single largest packet (36 dwords, 144 bytes) to accommodate
wraparound, giving a grand total of 336 bytes.
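As a sketch, assuming the reservation is expressed through the existing
MIN_SPACE_FOR_ADD_REQUEST define in intel_ringbuffer.h:

    /* breadcrumb + semaphores (192 bytes) plus the largest single
     * packet (144 bytes) to cover wraparound */
    #define MIN_SPACE_FOR_ADD_REQUEST 336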
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461917226-9132-1-git-send-email-chris@chris-wilson.co.uk
The code used by the DP and HDMI paths was very similar, so make them
share it. Note that this removes the write to the signal level registers
from the HDMI pre pll enable path, but that's OK since those are set
in the vlv_hdmi_pre_enable() function.
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Jim Bride <jim.bride@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461761065-21195-9-git-send-email-ander.conselvan.de.oliveira@intel.com
The logic for setting signal levels is used for both HDMI and DP with
small variations. But it is similar enough to put behind a function
called from the encoders.
v2: Remove unrelated MST changes due to rebase fumble. (Jim Bride)
Fix typo in the commit message. (Jim Bride)
v3: Really fix the typo. (Jim)
Cc: Jim Bride <jim.bride@linux.intel.com>
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Jim Bride <jim.bride@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461761065-21195-8-git-send-email-ander.conselvan.de.oliveira@intel.com
The exact same code was used by HDMI and DP encoders, so move it to
intel_dpio_phy.c.
v2: Fix typo in the commit message. (Jim Bride)
v3: Call the new function chv_phy_post_pll_disable() instead of
chv_phy_post_disable(), as it should be called after the pll
is disabled. (Ville)
Cc: Jim Bride <jim.bride@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Jim Bride <jim.bride@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461761065-21195-7-git-send-email-ander.conselvan.de.oliveira@intel.com
The only difference between the DP and HDMI versions was the lane count.
Since lane_count is now set appropriately for HDMI too, get rid of the
duplication and move this to intel_dpio_phy.c.
v2: Don't move comments about 2nd common lane staying alive. (Ville)
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Jim Bride <jim.bride@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461761065-21195-6-git-send-email-ander.conselvan.de.oliveira@intel.com
Set the lane count for HDMI to 4. This will make it easier to
unduplicate CHV phy code.
This also fixes the soft reset programming for HDMI with CHV. After
commit a8f327fb84 ("drm/i915: Clean up CHV lane soft reset
programming"), it wouldn't set the right bits for PCS23 since it relied
on a lane count that was never set.
v2: Set lane_count in *_get_config() to please state checker. (0day)
v3: Set lane_count for DDI in DVI mode too. (CI)
v4: Add note about CHV soft lane reset. (Ander)
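A minimal sketch of the compute_config side (field name as used for DP;
the DDI/DVI case from v3 is analogous):

    /* HDMI always drives all four lanes of the CHV PHY */
    pipe_config->lane_count = 4;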
Fixes: a8f327fb84 ("drm/i915: Clean up CHV lane soft reset programming")
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Jim Bride <jim.bride@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461761065-21195-2-git-send-email-ander.conselvan.de.oliveira@intel.com
The comment about GMBUSFREQ is confused. The spec actually explains
the 4MHz thing perfectly by noting that the 4MHz divider value is
actually just bits [9:2], not [9:0], hence the divide by 1000 is correct.
Replace the confused note with a quote from the spec, and eliminate
the duplicated comment that snuck in.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461689194-6079-4-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
Update CDCLK_FREQ on BDW after changing the cdclk frequency. Not sure
if this is a late addition to the spec, or if I simply overlooked this
step when writing the original code.
This is what Bspec has to say about CDCLK_FREQ:
"Program this field to the CD clock frequency minus one. This is used to
generate a divided down clock for miscellaneous timers in display."
And the "Broadwell Sequences for Changing CD Clock Frequency" section
clarifies this further:
"For CD clock 337.5 MHz, program 337 decimal.
For CD clock 450 MHz, program 449 decimal.
For CD clock 540 MHz, program 539 decimal.
For CD clock 675 MHz, program 674 decimal."
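A minimal sketch of the added step, assuming cdclk is tracked in kHz as
elsewhere in the driver:

    /* e.g. 450000 kHz -> program 449 decimal */
    I915_WRITE(CDCLK_FREQ, DIV_ROUND_CLOSEST(cdclk, 1000) - 1);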
Cc: stable@vger.kernel.org
Cc: Mika Kahola <mika.kahola@intel.com>
Fixes: b432e5cfd5 ("drm/i915: BDW clock change support")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461689194-6079-2-git-send-email-ville.syrjala@linux.intel.com
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
In BXT DSI a few of the horizontal timings are programmed not in pixels
but in txbyteclkhs, so the retrieval process adds some round-up errors
in the pixels <=> txbyteclkhs conversion.
Here, for the given adjusted_mode, we calculate the value programmed to
the port and then convert it back to the horizontal timing parameter in
pixels. This is the expected value at the end of get_config, including
the round-up errors. If that matches the value retrieved from the port,
then the retrieved (HW state) adjusted_mode's horizontal timings are
corrected to match the SW state, nullifying the errors.
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461053894-5058-2-git-send-email-ramalingam.c@intel.com
Both execlists and legacy need to reset the context (and mode) of the
GPU before we lose control of the system. By resetting the GPU, we
revert back to default settings. This simplifies the life of any
subsequent driver (in particular for virtualized setups) as it does not
then have to try and recover from an unknown condition. As both paths
need to reset for the same reason, move the reset to a common point.
This unifies the resets added in a647828afc (drm/i915: Also perform gpu
reset under execlist mode) and 8e96d9c4d9 (drm/i915: reset the GPU on
context fini).
v2: Restrict the reset to "modern" gen (where we enable HW contexts) to
try and avoid leaving the machine in an unusable state with a risky
reset on older GPUs. This should keep the status quo as to who performs
resets (i.e. currently only GPUs with HW contexts perform a reset on
shutdown).
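A minimal sketch of the common reset point (the gen gating from v2 is an
assumption of where the check lands):

    /* Reset the GPU on unload so whatever takes over next starts from a
     * known state, but only on gens where we use HW contexts. */
    if (HAS_HW_CONTEXTS(dev))
        intel_gpu_reset(dev, ALL_ENGINES);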
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
CC: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: "Niu, Bing" <bing.niu@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-25-git-send-email-chris@chris-wilson.co.uk
With the previous patch having extended the pinned lifetime of
contexts by referencing the previous context from the current
request until the latter is retired (completed by the GPU),
we can now remove usage of the execlist retired queue entirely.
This is because the above now guarantees that all execlist
object access requirements are satisfied by this new tracking,
and we can stop taking additional references and stop keeping
requests on the execlists retired queue.
The latter was a source of significant scalability issues in
the driver, causing performance hits on some tests. The most
dramatic of these was igt/gem_close_race, whose run time of
tens of minutes is now reduced to tens of seconds.
Signed-off-by: Tvrtko Ursulin <tvrtko@ursulin.net>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-24-git-send-email-chris@chris-wilson.co.uk
As the contexts are accessed by the hardware until the switch is completed
to a new context, the hardware may still be writing to the context object
after the breadcrumb is visible. We must not unpin/unbind/prune that
object whilst it is still active, and so we keep the previous context pinned until
the following request. We can generalise the tracking we already do via
the engine->last_context and move it to the request so that it works
equally for execlists and GuC.
v2: Drop the execlists double pin as that exposes a race inside the lrc
irq handler as it tries to access the context after it may be retired.
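A minimal sketch of the generalised tracking (the previous_context field
and the unpin call are assumptions based on the description above):

    /* When advancing to a new context, the request keeps the old one alive */
    request->previous_context = engine->last_context;

    /* Only when the request is retired is it safe to drop that pin */
    if (request->previous_context)
        intel_lr_context_unpin(request->previous_context, engine);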
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-22-git-send-email-chris@chris-wilson.co.uk
If we move the release of the GEM request (i.e. decoupling it from the
various lists used for client and context tracking) after it is complete
(either by the GPU retiring the request, or by the caller cancelling the
request), we can remove the requirement that the final unreference of
the GEM request needs to be under the struct_mutex.
The careful reader may notice that one or two impossible NULL pointer
tests are dropped for readability. These pointers cannot be NULL since
they are assigned during request construction and never unset.
v2,v3: Rebalance execlists by moving the context unpinning.
v4: Rebase onto -nightly
v5: Avoid trying to rebalance execlist/GuC context pinning, leave that
to the next step
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-21-git-send-email-chris@chris-wilson.co.uk
Refactor pinning and unpinning of contexts, such that the default
context for an engine is pinned during initialisation and unpinned
during teardown (pinning of the context handles the reference counting).
Thus we can eliminate the special case handling of the default context
that was required to mask that it was not being pinned normally.
v2: Rebalance context_queue after rebasing.
v3: Rebase to -nightly (not 40 patches in)
v4: Rebase onto request_alloc unwinding
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-19-git-send-email-chris@chris-wilson.co.uk
The hardware tracks contexts and expects all live contexts (those active
on the hardware) to have a unique identifier. This is used by the
hardware to assign pagefaults and the like to a particular context.
v2: Reorder to make sure ctx->link is not left dangling if the
assignment of a hw_id fails (Mika).
v3: We have 21 bits of context space, not 20.
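A minimal sketch of the assignment (ida_simple_get() is the standard
kernel id allocator; the limit define is an assumption based on the
21-bit note in v3):

    #define MAX_CONTEXT_HW_ID (1 << 21) /* 21 bits of hw context id space */

    ret = ida_simple_get(&dev_priv->context_hw_ida,
                         0, MAX_CONTEXT_HW_ID, GFP_KERNEL);
    if (ret < 0)
        return ret;
    ctx->hw_id = ret;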
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-17-git-send-email-chris@chris-wilson.co.uk
Rather than being interrupted when we run out of space halfway through
the request, and having to restart from the beginning (and returning to
userspace), flush a little more free space when we prepare the request.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-15-git-send-email-chris@chris-wilson.co.uk
In the next patches, we want to move the work out of freeing the request
and into its retirement (so that we can free the request without
requiring the struct_mutex). This means that we cannot rely on
unreferencing the request to completely teardown the request any more
and so we need to manually unwind the failed allocation. In doing so, we
reorder the allocation in order to make the unwind simple (and ensure
that we don't try to unwind a partial request that may have modified
global state) and so we end up pushing the initial preallocation down
into the engine request initialisation functions where we have the
requisite control over the state of the request.
Moving the initial preallocation into the engine is less than ideal: it
moves logic to handle a specific problem with request handling out of
the common code. On the other hand, it does allow those backends
significantly more flexibility in performing their allocations.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-14-git-send-email-chris@chris-wilson.co.uk
Now that we share intel_ring_begin(), reserving space for the tail of
the request is identical between legacy/execlists and so the tautology
can be removed. In the process, we move the reserved space tracking
from the ringbuffer on to the request. This is to enable us to reorder
the reserved space allocation in the next patch.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-13-git-send-email-chris@chris-wilson.co.uk
Combine the near identical implementations of intel_logical_ring_begin()
and intel_ring_begin() - the only difference is that the logical wait
has to check for a matching ring (which is assumed by legacy).
In the process some debug messages are culled, as they were following a
WARN if we hit an actual error.
v2: Updated commentary
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-12-git-send-email-chris@chris-wilson.co.uk
The code to switch_mm() is already handled by i915_switch_context(); the
only difference required to set up the aliasing ppgtt is that we need to
emit the switch_mm() on the first context, i.e. when transitioning from
engine->last_context == NULL. This allows us to defer the
initialisation of the GPU from early device initialisation to first use,
which should marginally speed up both. The caveat is that we then defer
the context initialisation until first use - i.e. we cannot assume that
the GPU engines are initialised. For example, this means that power
contexts for rc6 (Ironlake) need to be explicitly loaded, as they are.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-11-git-send-email-chris@chris-wilson.co.uk
Since we do the l3-remap on context switch, we can remove the redundant
early call to set the mapping prior to performing the first context
switch.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1461833819-3991-10-git-send-email-chris@chris-wilson.co.uk