Replace the global device seqno with one for each engine, and account
for the in-flight seqnos on each engine separately. This is consistent
with dma-fence, as each timeline has a separate fence-context per engine
and a seqno is only ordered within its fence-context (i.e. seqnos do not
need to be ordered with respect to other engines, just within a single
engine). This is required to enable request rewinding for preemption on
individual engines (we have to rewind the global seqno to avoid overflow,
and we do not have to rewind all engines just to preempt one).
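As a minimal sketch of the idea (not the actual patch; struct and field
names here are illustrative), each engine gets its own dma-fence context
and seqno counters, allocated via the standard dma_fence_context_alloc():

  #include <linux/dma-fence.h>

  /* hypothetical per-engine timeline state */
  struct engine_timeline {
          u64 fence_context;   /* this engine's dma-fence context */
          u32 seqno;           /* last seqno emitted on this engine */
          u32 inflight_seqnos; /* seqnos emitted but not yet retired */
  };

  static void engine_timeline_init(struct engine_timeline *tl)
  {
          tl->fence_context = dma_fence_context_alloc(1);
          tl->seqno = 0;
          tl->inflight_seqnos = 0;
  }

Since seqnos are only compared within a single fence-context, wrapping or
rewinding one engine's counter never perturbs the ordering seen on any
other engine.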
v2: Rename active_seqno to inflight_seqnos to more clearly indicate that
it is a counter and not equivalent to the existing seqno. Update
functions that operated on active_seqno similarly.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-3-chris@chris-wilson.co.uk
When dma_fence_signal() is called, it sets a flag to indicate that the
fence is complete. The fence is only signaled once the seqno check has
passed. During an unlocked check (such as inside a waiter), it is
possible for the fence to have been signaled even though the seqno has
since been reset (by engine wraparound). In this case the waiter will be
kicked, but for an extra layer of protection we can check the persistent
signaled bit on the fence.
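As a hedged sketch of that extra layer (the seqno comparison and the
request plumbing are simplified; dma_fence_is_signaled() is the standard
dma-fence helper that reports the persistent signaled bit):

  #include <linux/dma-fence.h>

  /* hypothetical unlocked completion check used by a waiter */
  static bool request_completed(struct dma_fence *fence, u32 hw_seqno, u32 seqno)
  {
          /* normal path: the engine's breadcrumb has advanced past our seqno */
          if ((s32)(hw_seqno - seqno) >= 0)
                  return true;

          /*
           * Backup path: even if the seqno has been reset by an engine
           * wraparound, the fence keeps its signaled bit once
           * dma_fence_signal() has run, so fall back to checking that.
           */
          return dma_fence_is_signaled(fence);
  }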
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223074422.4125-2-chris@chris-wilson.co.uk
This reverts commit 7ee686034b ("drm/i915/dp: Ratelimit DP aux timeout
messages"): although it successfully squelches the debug messages, it
generates a warning each time it does so, and CI lights up orange with
all the warnings!
In its current incarnation DRM_DEBUG_RATELIMITED is not usable for us,
and we need to first teach lib/ratelimit.c not to warn when used for
debug messages.
Fixes: 7ee686034b ("drm/i915/dp: Ratelimit DP aux timeout messages")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lyude <lyude@redhat.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: intel-gfx@lists.freedesktop.org
Link: http://patchwork.freedesktop.org/patch/msgid/20170223115102.7059-1-chris@chris-wilson.co.uk
Acked-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Geminilake scalers can do 7x7 filtering for all supported input sizes,
so they don't need the "high quality" mode programming, which was
actually removed from that platform.
v2: Split dev_priv parameter change out. (Ville)
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223071600.14356-5-ander.conselvan.de.oliveira@intel.com
Geminilake has a third sprite plane (or fourth universal plane) that is
independent from the cursor. Make sure that for_each_plane_id_on_crtc()
is aware of that extra plane so that the watermark code takes it into
account.
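As a rough sketch (the driver's actual macro may differ in detail), the
iterator walks every possible plane id and filters on a per-crtc plane
mask, which now has to include the extra GLK sprite:

  /* hypothetical shape of the per-crtc plane iterator */
  #define for_each_plane_id_on_crtc(__crtc, __p) \
          for ((__p) = PLANE_PRIMARY; (__p) < I915_MAX_PLANES; (__p)++) \
                  if ((__crtc)->plane_ids_mask & BIT(__p))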
Fixes: e9c9882556 ("drm/i915/glk: Configure number of sprite planes properly")
Cc: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Daniel Vetter <daniel.vetter@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: intel-gfx@lists.freedesktop.org
Cc: <drm-intel-fixes@lists.freedesktop.org>
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170223071600.14356-2-ander.conselvan.de.oliveira@intel.com
Testing with concurrent GGTT accesses no longer shows the coherency
problems from yonder, commit 5bab6f60cb ("drm/i915: Serialise updates
to GGTT with access through GGTT on Braswell"). My presumption is that
the root cause was more likely fixed by commit 3b5724d702 ("drm/i915:
Wait for writes through the GTT to land before reading back"), along
with the use of WC updates to the global GTT in commit 8448661d65
("drm/i915: Convert clflushed pagetables over to WC maps"). Given
that the original symptoms can no longer be reproduced, it is time to
remove the workaround.
Testcase: igt/gem_concurrent_blit
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20170220124718.14796-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Right now this is just leaving a lot of spam in dmesg that makes real
issues more difficult to debug. Additionally, as noted by the comment
right above the DRM_DEBUG_KMS() call, this is normal behavior when
there's nothing connected to the DisplayPort connector.
Signed-off-by: Lyude <lyude@redhat.com>
Setting retire=true is identical to using origin=ORIGIN_CS, so make the
same simplification to intel_fb_obj_flush() as already employed for
intel_fb_obj_invalidate().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170222114049.28456-6-chris@chris-wilson.co.uk
Flushing the cachelines for an object is slow, can be as much as 100ms
for a large framebuffer. We currently do this under the struct_mutex BKL
on execution or on pageflip. But now with the ability to add fences to
obj->resv for both flips and execbuf (and we naturally wait on the fence
before CPU access), we can move the clflush operation to a workqueue and
signal a fence for completion, thereby doing the work asynchronously and
not blocking the driver or its clients.
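A hedged sketch of the pattern being introduced (the object type and the
actual flush helper are placeholders; dma_fence_signal() and the
workqueue calls are the standard kernel APIs):

  #include <linux/dma-fence.h>
  #include <linux/workqueue.h>

  /* hypothetical async-clflush tracker */
  struct clflush {
          struct dma_fence dma;           /* signalled when the flush completes */
          struct work_struct work;
          struct drm_i915_gem_object *obj;
  };

  static void clflush_work(struct work_struct *work)
  {
          struct clflush *cf = container_of(work, struct clflush, work);

          flush_object_cachelines(cf->obj);  /* placeholder: the slow clflush */
          dma_fence_signal(&cf->dma);        /* wakes anyone waiting on obj->resv */
          dma_fence_put(&cf->dma);
  }

Callers then attach &cf->dma to obj->resv as the exclusive fence and
schedule_work(&cf->work), instead of flushing synchronously under
struct_mutex.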
v2: Introduce i915_gem_clflush.h and use a new name, split out some
extras into separate patches.
Suggested-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170222114049.28456-5-chris@chris-wilson.co.uk
We have three different paths by which userspace wants to flush the
display plane (i.e. objects with obj->pin_display). Use a common helper
to identify those paths and to simplify a later change.
v2: Include the conditional in the name, i915_gem_object_flush_if_display
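A hedged sketch of what the helper might look like (the flush call it
wraps stands in for whatever the three paths do today):

  /* hypothetical common helper shared by the three flush paths */
  static void i915_gem_object_flush_if_display(struct drm_i915_gem_object *obj)
  {
          if (!obj->pin_display)
                  return;

          flush_display_plane(obj);  /* placeholder for the existing flush logic */
  }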
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170222114049.28456-3-chris@chris-wilson.co.uk
The change_domain tracepoint has been inaccurate for a few years - it
doesn't fully capture the domains, especially with userspace bypassing
them. It is defunct and misleading, and it is time for it to be removed.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170222114049.28456-1-chris@chris-wilson.co.uk
Handling the dynamic charp module parameter requires us to copy it for
the error state, or remember to lock it when reading (in case it is
used with 0600).
v2: Use __always_inline and __builtin_strcmp
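A hedged sketch of the v2 idea: because the helper is always inlined,
__builtin_strcmp() on the parameter's type string folds to a
compile-time constant, so only the char * parameters pay for a
kstrdup(). The helper name is illustrative:

  #include <linux/slab.h>

  /* hypothetical: duplicate only the char * module parameters when
   * snapshotting them into the error state */
  static __always_inline void dup_param(const char *type, void *x)
  {
          if (!__builtin_strcmp(type, "char *"))
                  *(void **)x = kstrdup(*(void **)x, GFP_ATOMIC);
  }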
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170221162619.15954-1-chris@chris-wilson.co.uk
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
When not using GuC submission, the ring buffer size for GVT context is
512KB which is the max size. When switching to GuC submission, the ring
buffer size is required to be less than 16KB. So use the GVT context
default ring buffer size if GuC submission is enabled.
Signed-off-by: Chuanxiao Dong <chuanxiao.dong@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170216063639.GA17107@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Two new tracepoints, placed at the call sites where requests are
actually passed to the GPU, enable userspace to track engine
utilisation.
These tracepoints are only enabled when the
DRM_I915_LOW_LEVEL_TRACEPOINTS Kconfig option is enabled.
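For illustration, a hedged sketch of how a tracepoint can be hidden
behind the Kconfig option (the event name, arguments and fields are
illustrative; the TRACE_EVENT machinery is the standard kernel
interface):

  #if defined(CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS)
  TRACE_EVENT(i915_request_in_sketch,
              TP_PROTO(u32 engine, u32 global_seqno),
              TP_ARGS(engine, global_seqno),

              TP_STRUCT__entry(
                               __field(u32, engine)
                               __field(u32, global_seqno)
                               ),

              TP_fast_assign(
                             __entry->engine = engine;
                             __entry->global_seqno = global_seqno;
                             ),

              TP_printk("engine=%u, global_seqno=%u",
                        __entry->engine, __entry->global_seqno)
  );
  #else
  /* empty stub when the option is off, so callers need no #ifdefs */
  static inline void
  trace_i915_request_in_sketch(u32 engine, u32 global_seqno) {}
  #endif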
v2: Fix compilation with !CONFIG_DRM_I915_LOW_LEVEL_TRACEPOINTS.
v3: Name global seqno consistently across tracepoints.
v4: Remove port info from request out tracepoint. (Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
i915_gem_ring_notify is more appropriate since we do not have the
request information at this point; it is simply a signal from the
engine that some request has been completed.
v2:
* Always trace and log if there were any waiters.
* Rename to intel_engine_notify. (Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
These new tracepoints are emitted once the request is ready to
be submitted to the GPU and once the request is about to
be submitted to the GPU, respectively.
The former condition triggers as soon as all the fences and
dependencies have been resolved, and the latter once the backend is
about to submit the request to the GPU.
The new tracepoints are enabled via the new
DRM_I915_LOW_LEVEL_TRACEPOINTS Kconfig option, which is disabled by
default to alleviate performance impact concerns.
v2: Move execute tracepoint to __i915_gem_request_submit.
(Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
The tracepoint is not used and won't be suitable for its replacement.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Provide the same information as the other request event classes.
v2: Pass in flags so we can properly report the blocking status.
(Chris Wilson)
v3: Log hex with 0x prefix for clarity.
v4: Derive blocking status from flags. (Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Rename it to i915_gem_request_queue and make the logged info
equivalent to the i915_gem_request event class. Also move it a bit
further away from the i915_gem_request_add tracepoint, since they
otherwise provide similar information too close in time.
v2: Remove sw fence signalling. We will rely on the soon-to-come GuC
scheduling backend to enable that. (Chris Wilson)
v3: Log hex with 0x prefix for clarity.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
At the moment only the global seqno is logged, which is not set until
the request is ready for submission.
Add the per-context seqno and the context hardware id, which are both
interesting data points.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Shorten the name of the macro and the reg_state variable, and cache
some data in local variables to make the function more compact and
readable.
v2: Fixup some checkpatch warnings.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170221095839.30525-1-tvrtko.ursulin@linux.intel.com
The hardware requires that the tail pointer only advance in qword units,
so assert that the value we write is aligned to qwords, and similarly
enforce this restriction onto the request->tail.
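A minimal sketch of the kind of assertion meant here (IS_ALIGNED() is
the standard kernel helper; the surrounding function is hypothetical):

  /* hypothetical: validate a tail value before writing it to the ring register */
  static u32 ring_check_tail(u32 tail)
  {
          WARN_ON(!IS_ALIGNED(tail, 8));  /* the hw advances the tail in qwords */
          return tail;
  }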
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170217163833.731-1-chris@chris-wilson.co.uk
Reviewed-by: Michał Winiarski <michal.winiarski@intel.com>
Verify that the refcount of each power well matches its HW enabled
state at the end of modeset HW state readout.
Also add documentation on how the reference count for each power well is
supposed to be acquired during initialization and HW state readout.
Suggested by Ander.
Cc: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Cc: David Weinehall <david.weinehall@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1487345986-26511-6-git-send-email-imre.deak@intel.com
Atm, power wells that BIOS has enabled, but which we don't explicitly
enable during power domain initialization, would get disabled as we
clear the BIOS request bit in the given power well's sync_hw hook. To
prevent this, copy over any set request bits in the BIOS request
register to the driver request register and clear the BIOS request bit
only afterwards.
This doesn't make a difference now, since we enable all power wells
during power domain initialization. A follow-up patchset will add power
wells for which this isn't true, so fix up the inconsistency.
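A hedged sketch of the ordering described above, assuming that era's
I915_READ/I915_WRITE mmio helpers; the register names are placeholders
for the platform-specific power well control registers:

  /* hypothetical sync_hw step: preserve BIOS-enabled wells before dropping
   * the BIOS request bit */
  static void sync_hw_sketch(struct drm_i915_private *dev_priv, u32 req_bit)
  {
          u32 bios = I915_READ(PWR_WELL_CTL_BIOS);    /* placeholder register */
          u32 drv  = I915_READ(PWR_WELL_CTL_DRIVER);  /* placeholder register */

          /* 1) copy any request bit BIOS has set into the driver register... */
          if (bios & req_bit)
                  I915_WRITE(PWR_WELL_CTL_DRIVER, drv | req_bit);

          /* 2) ...and only then clear the BIOS request bit */
          I915_WRITE(PWR_WELL_CTL_BIOS, bios & ~req_bit);
  }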
Cc: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Cc: David Weinehall <david.weinehall@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1487345986-26511-5-git-send-email-imre.deak@intel.com
Atm, in the power well sync_hw hook we are clearing all BIOS request
bits, not just the one corresponding to the given power well. This could
turn off an unrelated power well inadvertently if it didn't have a
request bit set in the driver request register.
This didn't cause a problem so far, since we enabled all power wells
explicitly before clearing the BIOS request register. A follow-up
patchset will add power wells that won't get enabled this way, so fix up
the inconsistency.
Note that this patch only makes the clearing of the BIOS req register
more logical. Power wells without a reference would still get disabled
by the end of power domain initialization, that is fixed by the next
patch.
v2:
- Clarify in the commit log that this patch doesn't address the case of
power wells without a reference. (Ander)
Cc: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Cc: David Weinehall <david.weinehall@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1487345986-26511-4-git-send-email-imre.deak@intel.com
So far the sync_hw hook wasn't called for power wells not belonging to
any power domain, that is the GEN9 PW1 and MISC_IO power wells. This
wasn't a problem so far since the goal of the sync_hw hook - to clear
the corresponding BIOS request bit - was guaranteed by clearing the
whole BIOS request register elsewhere. This will change with the next
patch, so fix up the inconsistency.
While at it, clean up the power well iterator helpers and move them
next to the rest of the iterators.
v2:
- Clean up the power well iterator helpers. (Ander)
- Move the helpers to i915_drv.h.
Cc: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Cc: David Weinehall <david.weinehall@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1487345986-26511-3-git-send-email-imre.deak@intel.com
Doing an explicit enable/disable in the power well sync_hw hook based on
the power well's reference count is redundant, since by the time these
hooks are called all the power wells are enabled and have a reference.
So remove the redundant toggling.
This is needed by a follow-up patchset that adds power wells which we
can't enable/disable during power domain initialization and so want to
preserve their state until modeset init time.
Cc: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Cc: David Weinehall <david.weinehall@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1487345986-26511-2-git-send-email-imre.deak@intel.com
The uncached mmio is sufficient to queue the mmio writes without raising
forcewake. The forced flush along with acquiring forcewake from the
posting read is not required for adjusting the RPS frequency.
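A hedged sketch of the difference, assuming that era's _FW (no forcewake
bookkeeping) mmio accessor; GEN6_RPNSWREQ stands in for the RPS request
register already used by this path:

  /* hypothetical: queue the new RPS frequency request via uncached mmio */
  static void set_rps_swreq(struct drm_i915_private *dev_priv, u32 swreq)
  {
          /* no forcewake and no posting read; the write only needs queueing */
          I915_WRITE_FW(GEN6_RPNSWREQ, swreq);
  }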
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170220094713.22874-3-chris@chris-wilson.co.uk
Reviewed-by: Radoslaw Szwichtenberg <radoslaw.szwichtenberg@intel.com>
If intel_set_rps() is called whilst the hw is disabled, just store the
requested frequency (from the user) for application when we wake the hw
up.
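A hedged sketch of the early-out described (the rps field names are
assumptions, and the hw path is a placeholder):

  /* hypothetical: defer the frequency change until the hw is awake again */
  static int set_rps_sketch(struct drm_i915_private *dev_priv, u8 val)
  {
          if (!dev_priv->rps.enabled) {
                  dev_priv->rps.cur_freq = val;  /* remembered, applied on enable */
                  return 0;
          }

          return set_rps_hw(dev_priv, val);      /* placeholder for the hw path */
  }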
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170220094713.22874-2-chris@chris-wilson.co.uk
Reviewed-by: Radoslaw Szwichtenberg <radoslaw.szwichtenberg@intel.com>
Instead of having each back-end provide identical guards, just have a
singular set in intel_set_rps() to verify that the caller is obeying the
rules.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Radoslaw Szwichtenberg <radoslaw.szwichtenberg@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170220094713.22874-1-chris@chris-wilson.co.uk
We don't need struct_mutex for acquiring an rpm wakeref, and do not need
to serialise those register reads (it's the wrong mutex for those
registers in any case). Begone!
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170218150050.10414-1-chris@chris-wilson.co.uk
Reviewed-by: Radoslaw Szwichtenberg <radoslaw.szwichtenberg@intel.com>
Prevent the overflow check from firing on machines with the full 4lvl
page tables, which are not restricted to GEN8_LEGACY_PDES.
v2: Also fix the off-by-one in the compare
Fixes: 894ccebee2 ("drm/i915: Micro-optimise gen8_ppgtt_insert_entries()")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170217141455.19877-1-chris@chris-wilson.co.uk
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
There is a new version of DMC available for Geminilake.
Its release notes only mention:
- Enhancement in the FW to restore the PG2 state
v2: Fixed the platform name on commit message.
Noticed by Jani S.
v3: cook on top of drm-tip without depending on kbl
one so CI can check.
v4: make v3 on top of v2.
Cc: David Weinehall <tao@kernel.org>
Cc: Jani Saarinen <jani.saarinen@intel.com>
Cc: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1487295515-15396-1-git-send-email-rodrigo.vivi@intel.com
If we successfully wait upon the full reservation object (i.e. upon
all shared fences, or upon an exclusive fence), we know that all fences
beneath it have been signaled, so long as no new fences were added
whilst we slept. If the reservation_object remains the same, as
detected by its seqcount, we can then reap all the fences upon
completion.
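A hedged sketch of the idea, assuming the reservation object exposes its
seqcount as resv->seq (bumped whenever fences are added); the wait and
reap helpers are placeholders:

  #include <linux/seqlock.h>

  /* hypothetical: reap all fences if nothing was added while we slept */
  static long wait_and_maybe_reap(struct reservation_object *resv, bool wait_all)
  {
          unsigned int seq = read_seqcount_begin(&resv->seq);
          long ret;

          ret = wait_on_reservation(resv, wait_all);   /* placeholder wait */
          if (ret > 0 && !read_seqcount_retry(&resv->seq, seq))
                  reap_signaled_fences(resv);          /* placeholder: drop refs */

          return ret;
  }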
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170217151304.16665-6-chris@chris-wilson.co.uk
As a backup to waiting on a user-interrupt from the GPU, we use a heavy
and frequent timer to wake up the waiting process should we detect an
inconsistency whilst waiting. After seeing a "missed interrupt", the
next time we wait, we restart the heavy timer. This patch is more
reluctant to restart the timer and will only do so if we have not seen
any interrupts since we started the fake irq timer. If we are seeing
interrupts, then the waiters are being woken normally and we had an
incoherency that caused us to miss one last time - that is unlikely to
reoccur, and so taking the risk of stalling again seems pragmatic.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170217151304.16665-5-chris@chris-wilson.co.uk
If the waiter was currently running, assume it hasn't had a chance
to process the pending interrupt (e.g. a low priority task on a loaded
system) and wait until it sleeps before declaring a missed interrupt.
References: https://bugs.freedesktop.org/show_bug.cgi?id=99816
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170217151304.16665-4-chris@chris-wilson.co.uk
If an interrupt has been posted, and we were spinning on the active
seqno waiting for it to advance but it did not, then we can expect that
we will not see it advance in the immediate future and should call into
the irq-seqno barrier. We can stop spinning at this point, and leave the
difficulty of handling the coherency to the caller.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170217151304.16665-3-chris@chris-wilson.co.uk
When the timer expires for checking on interrupt processing, check to
see if any interrupts arrived within the last time period. If real
interrupts are still being delivered, we can be reassured that we
haven't missed the final interrupt as the waiter will still be woken.
Only once all activity ceases do we have to worry about the waiter
never being woken, and so need to install a timer to kick the waiter
for a slow arrival of a seqno.
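A hedged sketch of the bookkeeping implied above; all of the names are
illustrative:

  #include <linux/atomic.h>

  /* hypothetical state for the fake-irq/hangcheck timer */
  struct fake_irq_state {
          atomic_t irq_count;      /* bumped from the real interrupt handler */
          unsigned int last_seen;  /* snapshot taken when the timer was armed */
  };

  static bool interrupts_still_flowing(struct fake_irq_state *s)
  {
          unsigned int now = atomic_read(&s->irq_count);

          if (now == s->last_seen)
                  return false;    /* nothing arrived: assume a missed interrupt */

          s->last_seen = now;      /* interrupts seen: trust the normal wakeup */
          return true;
  }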
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170217151304.16665-2-chris@chris-wilson.co.uk
In Geminilake, the degamma table is enabled or disabled by the pipe CSC
enable bit, so it's active even when running in the legacy gamma mode.
Always set sane values for that table, since the default value is all
zeroes.
This fixes blank screens after a suspend/resume cycle while legacy gamma
is in use.
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170217120630.6143-2-ander.conselvan.de.oliveira@intel.com
We have a few open coded instances in the execlists code and an
almost suitable helper in intel_ringbuffer.c.
We can consolidate to a single helper if we change the existing
helper to emit directly to ring buffer memory and move the space
reservation outside it.
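For illustration, a hedged sketch of such a helper emitting directly
into ring memory (MI_NOOP is an all-zero dword, so a memset suffices);
the name and calling convention are assumptions:

  /* hypothetical: pad @count dwords of ring memory with MI_NOOPs and return
   * the advanced emission pointer; space is reserved by the caller */
  static u32 *fill_noops(u32 *cs, unsigned int count)
  {
          memset(cs, 0, count * sizeof(*cs));  /* MI_NOOP == 0 */
          return cs + count;
  }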
v2: Drop memcpy for memset. (Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170216122325.31391-2-tvrtko.ursulin@linux.intel.com
Use the "*batch++ = " style as in the ring emission for better
readability and also simplify the logic a bit by consolidating
the offset and size calculations and overflow checking. The
latter is a programming error so it is not required to check
for it after each write to the object, but rather do it once the
whole state has been written and fail the driver if something
went wrong.
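A hedged sketch of the emission style being adopted, with a single
overflow check once everything has been written; the opcodes and sizes
are purely illustrative:

  /* hypothetical: emit a few dwords in the "*batch++ =" style */
  static u32 *emit_wa_state(u32 *batch, const u32 *start, unsigned int size_in_dwords)
  {
          *batch++ = 0;                    /* MI_NOOP */
          *batch++ = (0x22 << 23) | 1;     /* illustrative MI_LOAD_REGISTER_IMM(1) */
          *batch++ = 0x1234;               /* illustrative register offset */
          *batch++ = 0x5678;               /* illustrative value */

          /* a single check after the whole state is written; an overrun is a
           * programming error, so fail loudly rather than check per write */
          WARN_ON(batch - start > size_in_dwords);

          return batch;
  }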
v2: Rebase.
v3: Keep track of offsets and sizes in bytes for simplicity
and rename function pointer variable to _fn suffix.
(Chris Wilson)
v4: Fix size calc broken in v3 and add alignment warning. (Chris Wilson)
v5: Fix return code.
v6: I added an exit from loop in v5 but forgot to put back
the object teardown.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk> (v5)
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>