Looking at DPCD DP_MAX_LINK_RATE may be completely bogus for eDP 1.4,
which is allowed to use the link rate select method and report 0 as the
max link rate. With this change, it makes sense to store the max rate as the
actual rate rather than as a bw code.
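For context, the DPCD field holds a bandwidth code rather than a rate, and a
zero there is legal on eDP 1.4. A rough sketch of the distinction (the
drm_dp_bw_code_to_link_rate() helper and the DPCD defines are real; the
surrounding logic is illustrative only):

  u8 bw_code = intel_dp->dpcd[DP_MAX_LINK_RATE];   /* 0x06, 0x0a, 0x14 or 0 */

  if (bw_code) {
          /* bw code -> rate in kHz, e.g. 0x06 -> 162000 */
          max_rate = drm_dp_bw_code_to_link_rate(bw_code);
  } else {
          /* eDP 1.4 with link rate select: rates come from
           * DP_SUPPORTED_LINK_RATES, so take the last (highest) entry */
          max_rate = intel_dp->sink_rates[intel_dp->num_sink_rates - 1];
  }

Storing max_rate (in kHz) rather than the bw code keeps both paths uniform.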
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/3e8baadb406d59f414cab36fed9f0b35d207fde5.1491485983.git.jani.nikula@intel.com
We want to refer to the index of the engine consistently throughout the
userspace ABI. We already have such an index through the execbuffer
engine specifier, that needs to be able to refer to each engine
specifically, so rename the index to uabi_id to reflect its
generality beyond execbuf.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170411124306.15448-1-chris@chris-wilson.co.uk
There are some properties that logically belong to the engine class, and some
that belong to the engine instance. Make it explicit.
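A minimal sketch of the resulting split (field names follow the description
above; the exact layout lives in intel_engine_cs.c and may differ):

  /* properties shared by every engine of a class */
  struct engine_class_info {
          const char *name;       /* "rcs", "bcs", "vcs", "vecs" */
          int (*init_legacy)(struct intel_engine_cs *engine);
          int (*init_execlists)(struct intel_engine_cs *engine);
  };

  /* properties of one particular engine instance */
  struct engine_info {
          unsigned int uabi_id;   /* exec/uabi id stays per instance (v3) */
          u8 class;
          u8 instance;
          u32 mmio_base;
          unsigned int irq_shift;
  };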
v2: Commit message (Tvrtko)
v3:
- Rebased
- Exec/uabi id should be per instance (Chris)
v4:
- Rebased
- Avoid re-ordering fields for smaller diff (Tvrtko)
- Bug on oob access to the class array (Michal)
v5: Bug on the right thing (Michal)
v6: Rebased
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1491834873-9345-5-git-send-email-oscar.mateo@intel.com
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Not really needed, but makes the next change a little bit more compact.
v2:
- Use zero-based numbering for engine names: xcs0, xcs1.. xcsN (Tvrtko, Chris)
- Make sure the mock engine name is null-terminated (Tvrtko, Chris)
v3: Because I'm stupid (Chris)
v4: Verify engine name wasn't truncated (Michal)
v5:
- Kill the warning in mock engine (Chris)
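A sketch of how the zero-based name could be built with the checks above
(snprintf()'s return-value convention is real; the field names are
assumptions carried over from the class/instance split):

  /* "vcs" + instance -> "vcs0", "vcs1", ...; always NUL-terminated */
  err = snprintf(engine->name, sizeof(engine->name), "%s%u",
                 class_info->name, info->instance);
  /* a return value >= the buffer size means the name was truncated */
  if (WARN_ON(err >= sizeof(engine->name)))
          return -EINVAL;

(The mock engine takes the same care, minus the warning, per v5.)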
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1491834873-9345-4-git-send-email-oscar.mateo@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
If we needed to do something different for the init functions, we could
always look at the engine instance to make the distinction. But, in any
case, the two functions are virtually identical already (please notice
that BSD2_RING is only used from gen8 onwards).
With this, the init functions depend exclusively on the engine class
(a fact that we will use soon).
v2: Commit message
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1491834873-9345-3-git-send-email-oscar.mateo@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
In such a way that vcs and vcs2 are just two different instances (0 and 1)
of the same engine class (VIDEO_DECODE_CLASS).
v2: Align the instance types (Tvrtko)
v3: Don't use enums for bspec-defined stuff (Michal)
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1491834873-9345-2-git-send-email-oscar.mateo@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
While we do hold the forcewake for legacy ringbuffer initialisation, we
don't guard our access with the uncore.lock spinlock. In theory, we only
initialise when no others should be accessing the same mmio cachelines,
but in practice play it safe, as this is an infrequently used path and not
worth risky micro-optimisations.
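Roughly, the guard amounts to bracketing the existing register setup (a
sketch; the exact extent of the critical section is in the ring init code):

  intel_uncore_forcewake_get(dev_priv, FORCEWAKE_ALL);
  spin_lock_irq(&dev_priv->uncore.lock);

  /* program RING_CTL/RING_HEAD/RING_TAIL/RING_START via I915_READ_FW /
   * I915_WRITE_FW, now serialised against any concurrent mmio user */

  spin_unlock_irq(&dev_priv->uncore.lock);
  intel_uncore_forcewake_put(dev_priv, FORCEWAKE_ALL);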
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170411101340.31994-5-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Since the sandybridge_pcode_read() may be called from
skl_pcode_request() inside an atomic context (with preempt disabled), we
should avoid hitting any sleeping paths. Currently it is being called with
a 500ms timeout, irrespective of being inside an atomic context or not.
This is reduced down to 500us to play nice with the atomic context, and
that appears to be sufficient to keep BAT happy (we have a DRM_ERROR
should it timeout), i.e. we do not see any 500us pcode timeouts for
normal use. So leave it as a pure spin without having to introduce new
code paths to separate atomic/normal contexts.
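Conceptually the wait becomes a pure atomic spin (a sketch, assuming the
fast/slow timeout parameters added earlier in this series):

  /* spin for at most 500us for the mailbox to go idle; the slow
   * (sleeping) timeout is 0, and the caller raises a DRM_ERROR on timeout */
  err = __intel_wait_for_register_fw(dev_priv, GEN6_PCODE_MAILBOX,
                                     GEN6_PCODE_READY, 0,
                                     500, 0, NULL);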
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170411101340.31994-4-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
We acquire the forcewake and use I915_READ_FW instead for the atomic
wait within intel_uncore_wait_for_register. However, this still leaves
us vulnerable to concurrent mmio access to the register, which can cause
system hangs on gen7. The protection is to acquire uncore.lock around
each register, so let's add it back.
v2: Wrap __intel_wait_for_register_fw() to re-use its atomic wait_for
loop and spare adding another for ourselves.
v3: Add might_sleep() annotation
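A sketch of the v2 shape (the __locked forcewake helpers and uncore.lock are
real; the parameter names are illustrative):

  might_sleep();

  spin_lock_irq(&dev_priv->uncore.lock);
  intel_uncore_forcewake_get__locked(dev_priv, fw);

  /* atomic portion: reuse the fw helper's wait_for loop under the lock */
  ret = __intel_wait_for_register_fw(dev_priv, reg, mask, value,
                                     fast_timeout_us, 0, &reg_value);

  intel_uncore_forcewake_put__locked(dev_priv, fw);
  spin_unlock_irq(&dev_priv->uncore.lock);

  /* the slow, sleeping wait continues outside the lock if still needed */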
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170411101340.31994-3-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
submit_request() is called from an atomic context, so it's not allowed to
sleep. We have to be careful in our parameters to
intel_uncore_wait_for_register() to limit ourselves to the atomic wait
loop and not incur the wrath of our warnings.
Fixes: 6976e74b5f ("drm/i915: Don't allow overuse of __intel_wait_for_register_fw()")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170410143807.22725-1-chris@chris-wilson.co.uk
Link: http://patchwork.freedesktop.org/patch/msgid/20170411101340.31994-2-chris@chris-wilson.co.uk
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Allow the caller to use the fast_timeout_us to specify how long to wait
within the atomic section, rather than transparently switching to a
sleeping loop for larger values. This is required as some callsites may
need a long wait and are in an atomic section.
v2: Reinforce kerneldoc fast_timeout_us limit with a GEM_BUG_ON
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170411112705.12656-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
This function should not be called with long timeouts in atomic context.
Annotate it as might_sleep if timeout is longer than 10us.
v2: fix comment (Michal)
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170410121747.209200-1-michal.wajdeczko@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
These params are passed by value, so const qualifiers are ignored anyway.
While around, unify timeout_ms type from long to int.
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Suggested-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170410093817.151280-1-michal.wajdeczko@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Waiting for the response status in the scratch register can be done
using our generic function. Let's use it.
v2: rebased
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170407160145.181328-2-michal.wajdeczko@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
In some cases we may want to spend more time in the atomic wait than the
hardcoded 2us. Let's add an additional fast timeout parameter to allow
flexible configuration of the atomic timeout before switching into the
heavy wait. Also add the possibility to return the register value to
avoid an extra read.
v2: use explicit fast timeout (Tvrtko/Chris)
allow returning register value (Chris)
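With both additions, the calling convention looks roughly like this (a
sketch; the parameter order follows the description above):

  u32 reg_value;

  /* spin atomically for up to 10us, then sleep-wait for up to 10ms,
   * and hand back the final register value to avoid an extra read */
  err = __intel_wait_for_register_fw(dev_priv, reg, mask, expected,
                                     10,    /* fast_timeout_us */
                                     10,    /* slow_timeout_ms */
                                     &reg_value);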
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Suggested-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170407160145.181328-1-michal.wajdeczko@intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
There is no need to specify timeout as unsigned long since this parameter
will be consumed by usecs_to_jiffies() which expects unsigned int only.
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170407133212.174608-1-michal.wajdeczko@intel.com
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
As we may have very many objects to free, check to see if the task needs
to be rescheduled whilst freeing them.
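The change boils down to a resched point inside the free loop (a sketch of
__i915_gem_free_objects(); only the cond_resched() is the point here):

  llist_for_each_entry_safe(obj, on, freed, freed) {
          /* ... release mmaps, pages, vma and free the object ... */

          /* the list can be very long; yield if anyone else needs the cpu */
          cond_resched();
  }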
Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170407102552.5781-4-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Before freeing the next batch of objects from the worker, check if the
worker's timeslice has expired and if so, defer the next batch to the
next invocation of the worker.
Suggested-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170407102552.5781-3-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
By using the same structure for both interruptible and
uninterruptible locking in shrinker code, combined with the
information that mm.interruptible is only being written to, the
code can be greatly simplified.
Also remove the i915_gem_ prefix from the locking functions so
that nobody in their wildest dreams considers exporting them.
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/1491562175-27680-1-git-send-email-joonas.lahtinen@linux.intel.com
Only call synchronize_rcu_expedited after unlocking struct_mutex to
avoid deadlock because the workqueues depend on struct_mutex.
From the original patch by Andrea:
synchronize_rcu/synchronize_sched/synchronize_rcu_expedited() will
hang until its own workqueues are run. The i915 gem workqueues will
wait on the struct_mutex to be released. So we cannot wait for a
quiescent state using those rcu primitives while holding the
struct_mutex or it creates a circular lock dependency resulting in
kernel hangs (which is reproducible but goes undetected by lockdep).
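In shrinker terms the ordering becomes (a sketch of the tail of
i915_gem_shrink(); the unlock bookkeeping is simplified):

  if (unlock)
          mutex_unlock(&dev_priv->drm.struct_mutex);

  /* only wait for the RCU grace period once struct_mutex is dropped:
   * the gem free worker needs struct_mutex, so waiting while holding
   * it is a circular dependency */
  synchronize_rcu_expedited();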
kswapd0 D 0 700 2 0x00000000
Call Trace:
? __schedule+0x1a5/0x660
? schedule+0x36/0x80
? _synchronize_rcu_expedited.constprop.65+0x2ef/0x300
? wake_up_bit+0x20/0x20
? rcu_stall_kick_kthreads.part.54+0xc0/0xc0
? rcu_exp_wait_wake+0x530/0x530
? i915_gem_shrink+0x34b/0x4b0
? i915_gem_shrinker_scan+0x7c/0x90
? i915_gem_shrinker_scan+0x7c/0x90
? shrink_slab.part.61.constprop.72+0x1c1/0x3a0
? shrink_zone+0x154/0x160
? kswapd+0x40a/0x720
? kthread+0xf4/0x130
? try_to_free_pages+0x450/0x450
? kthread_create_on_node+0x40/0x40
? ret_from_fork+0x23/0x30
plasmashell D 0 4657 4614 0x00000000
Call Trace:
? __schedule+0x1a5/0x660
? schedule+0x36/0x80
? schedule_preempt_disabled+0xe/0x10
? __mutex_lock.isra.4+0x1c9/0x790
? i915_gem_close_object+0x26/0xc0
? i915_gem_close_object+0x26/0xc0
? drm_gem_object_release_handle+0x48/0x90
? drm_gem_handle_delete+0x50/0x80
? drm_ioctl+0x1fa/0x420
? drm_gem_handle_create+0x40/0x40
? pipe_write+0x391/0x410
? __vfs_write+0xc6/0x120
? do_vfs_ioctl+0x8b/0x5d0
? SyS_ioctl+0x3b/0x70
? entry_SYSCALL_64_fastpath+0x13/0x94
kworker/0:0 D 0 29186 2 0x00000000
Workqueue: events __i915_gem_free_work
Call Trace:
? __schedule+0x1a5/0x660
? schedule+0x36/0x80
? schedule_preempt_disabled+0xe/0x10
? __mutex_lock.isra.4+0x1c9/0x790
? del_timer_sync+0x44/0x50
? update_curr+0x57/0x110
? __i915_gem_free_objects+0x31/0x300
? __i915_gem_free_objects+0x31/0x300
? __i915_gem_free_work+0x2d/0x40
? process_one_work+0x13a/0x3b0
? worker_thread+0x4a/0x460
? kthread+0xf4/0x130
? process_one_work+0x3b0/0x3b0
? kthread_create_on_node+0x40/0x40
? ret_from_fork+0x23/0x30
Fixes: 3d3d18f086 ("drm/i915: Avoid rcu_barrier() from reclaim paths (shrinker)")
Reported-by: Andrea Arcangeli <aarcange@redhat.com>
Signed-off-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
As we call into the shrinker during freeze, we may have freed more
objects since we idled during i915_gem_suspend. Make sure we flush the
i915_gem_free_objects worker prior to saving the unwanted pages into the
hibernation image.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170407102552.5781-2-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
The shrinker is prepared to be called unlocked (and at other times with
struct_mutex held for DIRECT_RECLAIM) so we can skip acquiring the
struct_mutex prior to calling the shrinker during freeze. This improves
our ability to shrink as we can be more aggressive when we know the
caller isn't holding struct_mutex.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170407102552.5781-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
When we retire the last request on the ring, before we ever access that
ring again we know it will be completely idle and so we can advance the
ring->head fully to the end (i.e. ring->tail) and not just to the start
of the breadcrumb. This allows us to skip re-emitting the breadcrumb
after resetting the GPU if the ring was entirely idle. This prevents us
from overwriting a seqno wraparound by re-executing a stale breadcrumb,
i.e.
submit_request(1)
intel_engine_init_global_seqno(0)
i915_reset()
would then leave 1 in the HWS, but the next request to execute would
also be with seqno 1. The sanity checks upon submission detect this as a
timewarp and explode. By setting the ring as empty, upon reset the HWS
is left as 0, leaving it consistent with the timeline.
v2: Fix check for deleting last element of list. We know that this
request is always the first element of the ring, so only if next
points back to the start will this be the only request in flight.
v3: Remove opencoding of list_is_last()
v4: Move the block to its own function for some clarity.
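A sketch of the v4 helper (list_is_last() and the ring head/tail fields are
real; the surrounding details are trimmed):

  static void advance_ring(struct drm_i915_gem_request *request)
  {
          unsigned int tail;

          if (list_is_last(&request->ring_link, &request->ring->request_list)) {
                  /* last request: the ring is idle, consume it entirely so
                   * a reset does not replay a stale breadcrumb */
                  tail = request->ring->tail;
          } else {
                  tail = request->postfix;
          }
          list_del(&request->ring_link);

          request->ring->head = tail;
  }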
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100144
Testcase: igt/gem_exec_whisper/hang-*
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170406170028.26871-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
When we update the global seqno (on the engine timeline), we modify HW
state (both registers and mapped pages). As we do this, we should be
sure that the HW is idle and we are not causing a conflict. The caller
is supposed to wait_for_idle before calling us to update the seqno, so
let's assert they have and the engine is indeed idle.
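The assertion itself is tiny (a sketch; intel_engine_is_idle() is the
existing idleness check):

  /* callers must have idled the GPU before we rewrite HWS/seqno state */
  GEM_BUG_ON(!intel_engine_is_idle(engine));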
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: http://patchwork.freedesktop.org/patch/msgid/20170405153055.28123-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Load HuC version 1.07.1748 on GLK.
v2: rebased.
v3: Use the name of the right platform (John Spotswood)
v4: rebased.
Cc: Jeff Mcgee <jeff.mcgee@intel.com>
Cc: John Spotswood <john.a.spotswood@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Reviewed-by: John Spotswood <john.a.spotswood@intel.com>
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1490905447-15815-2-git-send-email-anusha.srivatsa@intel.com
Load GuC 10.56 on GLK. Work on firmware is still
in progress. Testing has not been done yet.
This patch addresses the initial need to load the GuC
firmware for HuC authentication.
v2: rebased.
Cc: Jeff mcgee <jeff.mcgee@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: John Spotswood <john.a.spotswood@intel.com>
Signed-off-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Reviewed-by: John Spotswood <john.a.spotswood@intel.com>
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1490905447-15815-1-git-send-email-anusha.srivatsa@intel.com
Many sightings report the greater prevalence of allocation failures.
This is all due to the incorrect use of mapping_gfp_constraint(), so
remove it in favour of just querying mapping_gfp_mask(), which is
the exact gfp_t we wanted in the first place.
We still do expect a higher chance of reporting ENOMEM, as that is the
intention of using __GFP_NORETRY -- to fail rather than oom after having
reclaimed from our bo caches, and having done a direct|kswapd reclaim
pass.
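In code the difference is just which mask feeds the shmem read (a sketch;
the call site in the gtt get_pages path may differ slightly):

  /* take the mapping's own gfp mask instead of constraining it, and only
   * add __GFP_NORETRY so we report ENOMEM after reclaim rather than oom */
  gfp = mapping_gfp_mask(mapping);
  gfp |= __GFP_NORETRY | __GFP_NOWARN;
  page = shmem_read_mapping_page_gfp(mapping, i, gfp);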
Reported-by: Jason Ekstrand <jason.ekstrand@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=100594
Fixes: 24f8e00a8a ("drm/i915: Prefer to report ENOMEM rather than incur the oom for gfx allocations")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/20170405221514.23251-1-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
As per BSPEC, valid cdclk values for glk are 79.2, 158.4 and 316.8 MHz.
Practically we can achieve only 99% of these cdclk values (HW team is
checking on this). So cdclk should be calculated for the given pixclk
accordingly, otherwise it may lead to screen corruption, explained below:
1. For the DSI AUO panel (1920x1200 @60) the required pixclk is 157100 kHz
2. glk_calc_cdclk returns 79200 kHz for this pixclk; for 2 PPC it
   will be 158400 kHz
3. Practically 100% of the cdclk can't be achieved, so 99% of 158400
   kHz = 156816, which is less than the desired pixclk and causes
   panel corruption.
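The resulting selection is roughly (a sketch of glk_calc_cdclk(); the 99%
factor is the point, constants per BSPEC):

  static int glk_calc_cdclk(int max_pixclk)
  {
          /* 2 pixels per clock, but only ~99% of nominal cdclk is reachable,
           * so 157100 kHz must bump past 79200 because
           * 2 * 79200 * 99 / 100 = 156816 < 157100 */
          if (max_pixclk > 2 * 158400 * 99 / 100)
                  return 316800;
          else if (max_pixclk > 2 * 79200 * 99 / 100)
                  return 158400;
          else
                  return 79200;
  }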
v2: Rebased to new CDCLK code framework
v3: Addressed review comments from Ander/Jani
- Add comment in code about 99% usage of CDCLK
- Calculate max dot clock as well with 99% limit
v4 by Jani:
- drop superfluous whitespace change
- rewrite code comments to clarify
v5: Added details of non-working scenario in commit message
Cc: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Madhav Chauhan <madhav.chauhan@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Ander Conselvan de Oliveira <conselvan2@gmail.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1491397463-13637-1-git-send-email-madhav.chauhan@intel.com
There is some conflation related to sink rates, making this change more
complicated than it would otherwise have to be. There are three changes
here that are rather difficult to split up:
1) Use the intel_dp->sink_rates array for all DP, not just eDP 1.4. We
initialize it from DPCD on eDP 1.4 like before, but generate it based
on DP_MAX_LINK_RATE on others. This reduces code complexity when we
need to use the sink rates; they are all always in the sink_rates
array.
2) Update the sink rate array whenever we read DPCD, and use the
information from there. This increases code readability when we need
the sink rates.
3) Disentangle fallback rate limiting from sink rates. In the code, the
max rate is a dynamic property of the *link*, not of the *sink*. Do
the limiting after intersecting the source and sink rates, which are
static properties of the devices.
This paves the way for follow-up refactoring that I've refrained from
doing here to keep this change as simple as it possibly can.
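A sketch of point 1 (the default 162000/270000/540000 kHz table and the
bw-code conversion are standard DP; the helper split and its name are
illustrative only):

  static const int default_rates[] = { 162000, 270000, 540000 };

  if (intel_dp->edp_dpcd[0] >= DP_EDP_14) {
          /* eDP 1.4: sink_rates[] comes from DP_SUPPORTED_LINK_RATES */
          intel_dp_set_sink_rates_from_dpcd(intel_dp);  /* hypothetical name */
  } else {
          /* everyone else: generate sink_rates[] up to DP_MAX_LINK_RATE */
          int max = drm_dp_bw_code_to_link_rate(intel_dp->dpcd[DP_MAX_LINK_RATE]);
          int i;

          for (i = 0; i < ARRAY_SIZE(default_rates); i++) {
                  if (default_rates[i] > max)
                          break;
                  intel_dp->sink_rates[i] = default_rates[i];
          }
          intel_dp->num_sink_rates = i;
  }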
v2: introduce use_rate_select and handle non-conforming eDP (Ville)
v3: don't clobber cached eDP rates on short pulse (Ville)
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/071bad76467f8ab2e73f3f61ad52d5a468004c71.1490712890.git.jani.nikula@intel.com
We need the source rates array so often that it makes sense to set it
once at init. This reduces function calls when we need the rates, making
the code easier to follow.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/aa998882d2b824f671272c60e9d26621ab9d2d17.1490712890.git.jani.nikula@intel.com
Rename the function, move it at the top, and reuse in
intel_dp_link_rate_index(). If there was a reason in the past to use
reverse search order here, there isn't now.
The names may be slightly confusing now, but intel_dp_link_rate_index()
will go away in follow-up patches.
v2: Use name intel_dp_rate_index (Dhinakaran)
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/c7b6197aaa12e368a0d024dc142fa574fd0443a7.1490712890.git.jani.nikula@intel.com
We shouldn't silently use the first element if we can't find the rate
we're looking for. Make rate_to_index() more generally useful, and
fallback to the first element in the caller, with a big warning.
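The caller pattern then becomes roughly (a sketch; the index helper name
matches the rest of this series, the WARN text is illustrative):

  int index = intel_dp_rate_index(intel_dp->sink_rates,
                                  intel_dp->num_sink_rates, rate);
  if (index < 0) {
          /* don't silently use sink_rates[0]; make the fallback loud */
          WARN(1, "unsupported rate %d, falling back to %d\n",
               rate, intel_dp->sink_rates[0]);
          index = 0;
  }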
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/8a6e83b7bf35da0cbbc703ae157944107ff145be.1490712890.git.jani.nikula@intel.com
I can't think of a real world bug this could cause now, but this will be
required in follow-up work. While at it, change the parameter order to
be slightly more sensible.
Cc: Manasi Navare <manasi.d.navare@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/ff5b08f45a72c2247f5326b080027e2f5d8cc4ee.1490712890.git.jani.nikula@intel.com
i915 is currently doing a full GPU reset at the end of
i915_gem_suspend() followed by GuC suspend in i915_drm_suspend(). This
GPU reset clobbers the GuC, causing the suspend request to then fail,
leaving the GuC in an undefined state. We need to tell the GuC to
suspend before we do the direct intel_gpu_reset().
v2: Commit message update. (Chris, Daniele)
Fixes: 1c777c5d1d ("drm/i915/hsw: Fix GPU hang during resume from S3-devices state")
Cc: Jeff McGee <jeff.mcgee@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1491387710-20553-1-git-send-email-sagar.a.kamble@intel.com
Reviewed-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
VLV/CHV watermarks are now able to handle the radiation, so
mark these platforms as ready for atomic.
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Suggested-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170303151928.23053-5-ville.syrjala@linux.intel.com
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
The VLV/CHV watermark calculation is really interested in the hardware
plane type rather than the plane type (which is more of a software
concept). Let's check plane->id rather than plane->type.
No functional changes.
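I.e. the decision keys off the fixed hardware plane id (a sketch; the wm
helpers are hypothetical placeholders, the enum plane_id values are real):

  switch (plane->id) {
  case PLANE_CURSOR:
          wm = compute_cursor_wm(crtc_state, plane_state, level); /* hypothetical */
          break;
  case PLANE_PRIMARY:
  case PLANE_SPRITE0:
  case PLANE_SPRITE1:
          wm = compute_plane_wm(crtc_state, plane_state, level);  /* hypothetical */
          break;
  }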
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170303151928.23053-3-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Almost all other GuC fw definitions are using the GUC|guc prefix.
While around, in get_core_family() change the explicit WARN into
MISSING_CASE, as it looks more appropriate since the GuC support
capability is controlled by the intel_device_info.has_guc flag.
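The swap in get_core_family() looks roughly like this (a sketch;
MISSING_CASE() is the real macro, the GUC_CORE_FAMILY_* names follow the
renamed prefix):

  default:
          MISSING_CASE(gen);
          return GUC_CORE_FAMILY_UNKNOWN;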
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170404133836.125736-1-michal.wajdeczko@intel.com
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Noticed this while I was looking at some debug output,
[drm:intel_hdmi_compute_config [i915]] picking bpc to 12 for HDMI output
[drm:intel_hdmi_compute_config [i915]] forcing pipe bpc to 36 for HDMI
I believe the second line should be pipe *bpp*
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/1491329765-14340-1-git-send-email-dhinakaran.pandiyan@intel.com
We're clearing the legacy_cursor_update flag before calling
drm_atomic_helper_setup_commit() which means the helper will
wait for the flip to complete before cleaning up the framebuffers.
That's not what we want for the legacy cursor, so let's clear
the flag after setting up the commit.
Also toss in a FIXME about solving these problems in a nicer
way using the fabled vblank workers.
v2: Also unsync with legacy page flips
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Uwe Kleine-König <uwe@kleine-koenig.org>
Cc: Rafael Ristovski <rafael.ristovski@gmail.com>
Fixes: a5509abda4 ("drm/i915: Fix legacy cursor vs. watermarks for ILK-BDW")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170329142123.5923-1-ville.syrjala@linux.intel.com
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
In the last guc/huc cleanup series we simplified the guc init hw
function but missed the one for the huc. While here, change its
signature as we don't care about the huc loading status.
Signed-off-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Cc: Arkadiusz Hiler <arkadiusz.hiler@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170331115709.181940-1-michal.wajdeczko@intel.com
If the engine is continually completing nops, we can saturate the
signaler and keep it working indefinitely. This angers the NMI watchdog!
A good example is to disable semaphores on snb and run igt/gem_exec_nop -
the parallel, multi-engine workloads are more than sufficient to hog the
CPU, preventing the system from even processing ICMP echo replies.
v2: Tvrtko dug into cond_resched() on x86 and found that it only
depended upon preempt_count and not tif_need_resched() - which means
that we would always call schedule() at that point.
Fixes: c81d46138d ("drm/i915: Convert trace-irq to the breadcrumb waiter")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170404120531.10737-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
If the signal to park arrives before we sleep, then we need to check
kthread_should_park() before sleeping to avoid missing the signal.
Otherwise, if the signal arrives whilst we are processing completed
requests, we will reset the current->state back to TASK_INTERRUPTIBLE
and so miss the wakeup.
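The ordering matters: the park check has to come after the task state is set
and before schedule() (a sketch of the signaler loop, details trimmed):

  for (;;) {
          set_current_state(TASK_INTERRUPTIBLE);

          /* ... signal completed requests (may reset the task state) ... */

          /* re-check here so a park signal that raced with the request
           * processing above is not lost before we sleep */
          if (kthread_should_park())
                  kthread_parkme();

          if (kthread_should_stop())
                  break;

          schedule();
  }
  __set_current_state(TASK_RUNNING);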
Fixes: fe3288b5da ("drm/i915: Park the breadcrumbs signaler across a GPU reset")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170403105124.8969-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Rather than call intel_engine_cleanup() with a partially constructed
engine, unwind the error during intel_init_ring_common().
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170403113426.25707-2-chris@chris-wilson.co.uk
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Or rather it is used only by intel_ring_pin() to extract the
drm_i915_private which we can easily pass in. As this is a relatively
rare operation, save the space in the struct; as such it roughly breaks
even with the extra code for passing the parameter around:
add/remove: 0/0 grow/shrink: 2/3 up/down: 15/-15 (0)
function old new delta
intel_init_ring_buffer 906 918 +12
execlists_context_pin 1308 1311 +3
mock_engine 407 403 -4
intel_engine_create_ring 367 363 -4
intel_ring_pin 326 319 -7
Total: Before=1261794, After=1261794, chg +0.00%
v2: Reorder intel_init_ring_buffer to keep the ring setup together:
add/remove: 0/0 grow/shrink: 2/3 up/down: 9/-15 (-6)
function old new delta
intel_init_ring_buffer 906 912 +6
execlists_context_pin 1308 1311 +3
mock_engine 407 403 -4
intel_engine_create_ring 367 363 -4
intel_ring_pin 326 319 -7
Total: Before=1261794, After=1261788, chg -0.00%
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170403113426.25707-1-chris@chris-wilson.co.uk
commit 8490ae207f ("drm/i915: Suppress busy status for engines if
wedged") moved the check for inflight requests to the
intel_engines_are_idle() check to protect the idle worker. However, the
request selftests were also checking the engine idle status and erroring
out if they did not become idle within a short period of time after the
final wait. In order to accommodate the new check, call retire requests
prior to the engine check so that we flush all the waits.
Fixes: 8490ae207f ("drm/i915: Suppress busy status for engines if wedged")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: http://patchwork.freedesktop.org/patch/msgid/20170331192121.10024-1-chris@chris-wilson.co.uk
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>