Remove some needless variables and parameter passing.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In a similar vein to reducing the number of unnecessary spinlocks used for
execlist command submission (where the forcewake is required but
manually controlled), we know that the IRQ registers are outside of the
powerwell and so we can access them directly. Since we now have direct
access exported via I915_READ_FW/I915_WRITE_FW, let's put those to use in
the irq handlers as well.
In the process, reorder the execlist submission to happen as early as
possible.
v2: Restrict the untraced register mmio to just the GT path (i.e. the
hotpath for execlists)
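A rough sketch of the pattern in the GT hotpath; the register macros and
helper names (GEN8_GT_IIR, intel_lrc_irq_handler, notify_ring) are taken
from the existing gen8 GT handler and are illustrative, not the exact diff:

  u32 iir = I915_READ_FW(GEN8_GT_IIR(0));
  if (iir) {
          /* ack via the untraced, forcewake-less accessor */
          I915_WRITE_FW(GEN8_GT_IIR(0), iir);
          /* kick execlist submission as early as possible */
          if (iir & (GT_CONTEXT_SWITCH_INTERRUPT << GEN8_RCS_IRQ_SHIFT))
                  intel_lrc_irq_handler(&dev_priv->ring[RCS]);
          if (iir & (GT_RENDER_USER_INTERRUPT << GEN8_RCS_IRQ_SHIFT))
                  notify_ring(dev, &dev_priv->ring[RCS]);
  }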
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The issue is that by computing the last_adj value after applying the
clamping, we can end up with a bogus value for feeding into the next RPS
autotuning step.
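A minimal sketch of the idea, assuming the fix records the raw adjustment
before the clamp is applied in gen6_pm_rps_work() (field names from the
existing RPS code):

  new_delay = dev_priv->rps.cur_freq + adj;
  /* remember the unclamped adjustment; deriving last_adj from the
   * clamped frequency is what produced the bogus value */
  dev_priv->rps.last_adj = adj;
  new_delay = clamp_t(int, new_delay,
                      dev_priv->rps.min_freq_softlimit,
                      dev_priv->rps.max_freq_softlimit);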
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Deepak S <deepak.s@linux.intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reuse the same reclocking strategy for Baytrail as on its bigger brethren,
Sandybridge and Ivybridge. In particular, this makes the device quicker
to reclock (both up and down) though the tendency now is to downclock
more aggressively to compensate for the RPS boosts.
v2: Rebase
v3: Exclude Cherrytrail as Deepak was concerned that the increased
number of register writes would wake the common powerwell too often.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Deepak S <deepak.s@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The logical place for clearing the RPS latched interrupt bits is when
resetting the RPS interrupts, so move the corresponding part from the RPS
disable function to the reset function. During resetting we already
cleared the IIR bits, so the only thing missing there was clearing pm_iir.
Note that we call gen6_disable_rps_interrupts() also during driver load
and resume time via intel_uncore_sanitize() when i915 interrupts are
still not installed. If there are any pending RPS bits at this point
(which after this patch wouldn't be cleared) they will be cleared by the
reset code via the interrupt preinstall hooks.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When disabling RPS interrupts there is a race where we disable RPS
interrupts while the interrupt handler is running and the handler has
already latched the pending RPS interrupt from the master IIR register.
Afterwards the disabling path clears the PM IIR bits, making the state
of pending interrupts inconsistent from the interrupt handler's point of
view. This triggers the following warning: "The master control interrupt
lied (PM)!".
To fix this make sure that any running interrupt handler (which may
have already latched the master IIR) finishes before clearing the IIR
bits.
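A sketch of the fix in the RPS interrupt disable path (gen6_pm_iir() and
pm_rps_events are from the existing code; the exact placement is an
assumption):

  /* let any handler that already latched the master IIR finish
   * before we pull the PM IIR bits from under it */
  synchronize_irq(dev->irq);
  I915_WRITE(gen6_pm_iir(dev_priv), dev_priv->pm_rps_events);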
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=87347
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Use both up/down manual ei calculations for symmetry and greater
flexibility for reclocking, instead of faking the down interrupt based
on a fixed integer number of up interrupts.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Rewrite commit 31685c258e
Author: Deepak S <deepak.s@linux.intel.com>
Date: Thu Jul 3 17:33:01 2014 -0400
drm/i915/vlv: WA for Turbo and RC6 to work together.
Other than code clarity, the major improvement is to disable the extra
interrupts generated when idle. However, the reclocking remains rather
slow under the new manual regime; in particular, it fails to downclock as
quickly as desired. The second major improvement is that for certain
workloads, like games, we need to combine render+media activity counters
as the work of displaying the frame is split across the engines and both
need to be taken into account when deciding the global GPU frequency as
memory cycles are shared.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Deepak S <deepak.s@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Earlier, Turbo interrupts were not being processed for SKL,
as something was amiss in the turbo programming for SKL.
Now that the missing changes have been added, enable Turbo
interrupt processing for SKL.
Signed-off-by: Akash Goel <akash.goel@intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The pipe interrupt registers are in the actual pipe power well, so we
need to restore them when re-enabling the corresponding power well.
I've also copied what we do on HSW/BDW for VGA, even if we haven't
enabled unclaimed register detection just yet.
v2: Don't run skl_power_well_post_enable() if the power well is already
enabled (Paulo)
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
While we only need to restore pipe B/C interrupt registers on BDW when
enabling the power well, Skylake is a bit more flexible and we'll also need
to restore the pipe A registers, as it has its own power well that can be
toggled.
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Merge tag 'v4.0-rc3' into drm-next
Linux 4.0-rc3 backmerge to fix two i915 conflicts, and get
some mainline bug fixes needed for my testing box
Conflicts:
drivers/gpu/drm/i915/i915_drv.h
drivers/gpu/drm/i915/intel_display.c
As vendors transition their drivers from legacy to atomic there's some
duplication of data between drm_crtc and drm_crtc_state (since
unconverted drivers likely won't have a state structure).
i915 is partially converted and does have a crtc->state structure, but
still uses direct crtc fields internally in many places, which causes
the two sets of data to get out of sync. As of commit
commit 31c946e85c
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Sun Feb 22 12:24:17 2015 +0100
drm: If available use atomic state in getcrtc ioctl
This way drivers fully converted to atomic don't need to update these
legacy state variables in their modeset code any more.
Reviewed-by: Rob Clark <robdclark@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
the DRM core starts assuming that the presence of a ->state structure
implies that it should make use of the values stored there which, on
i915, leads to the core code using stale values for CRTC 'enabled'
status.
Let's switch over to using the state value of 'enable' internally rather
than using the drm_crtc field. This ensures that our driver internals
are working from the same data that the DRM core is, avoiding
mismatches.
This patch was generated with Coccinelle using the following semantic
patch:
<smpl>
@@
struct drm_crtc C;
struct drm_crtc *CP;
@@
(
- C.enabled
+ C.state->enable
|
- CP->enabled
+ CP->state->enable
)
// For assignments, we still update the legacy value as well as the state value
// so add an extra assignment statement for that.
@@
struct drm_crtc C;
struct drm_crtc *CP;
expression E;
@@
(
C.state->enable = E;
+ C.enabled = E;
|
CP->state->enable = E;
+ CP->enabled = E;
)
</smpl>
The crtc->mode and crtc->hwmode fields should probably be transitioned
over as well eventually, but we seem to do an okay job of keeping those
up-to-date already so I want to minimize the changes that will clash
with Ander's in-progress atomic work.
v2: Don't remove the assignments to the legacy value when we assign to
the state value. A second cocci stanza takes care of adding the
legacy assignment back where appropriate. (Daniel)
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Atm, it's possible that the interrupt handler is called when the device
is in D3 or some other low-power state. It can be due to another device
that is still in D0 state and shares the interrupt line with i915, or on
some platforms there could be spurious interrupts even without sharing
the interrupt line. The latter case was reported by Klaus Ethgen using a
Lenovo x61p machine (gen 4). He noticed this issue via a system
suspend/resume hang and bisected it to the following commit:
commit e11aa36230
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Wed Jun 18 09:52:55 2014 -0700
drm/i915: use runtime irq suspend/resume in freeze/thaw
This is a problem, since in low-power states IIR will always read
0xffffffff resulting in an endless IRQ servicing loop.
Fix this by handling interrupts only when the driver explicitly enables
them and so it's guaranteed that the interrupt registers return a valid
value.
Note that this issue existed even before the above commit, since during
runtime suspend/resume we never unregistered the handler.
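The guard added at the very top of each handler, before any register
access (intel_irqs_enabled() here stands for whatever helper tracks the
driver's irqs-enabled state):

  /* IIR reads 0xffffffff in low-power states, so don't touch the
   * registers unless the driver explicitly enabled interrupts */
  if (!intel_irqs_enabled(dev_priv))
          return IRQ_NONE;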
v2:
- clarify the purpose of smp_mb() vs. synchronize_irq() in the
code comment (Chris)
v3:
- no need for an explicit smp_mb(), we can assume that synchronize_irq()
and the mmio read/writes in the install hooks provide for this (Daniel)
- remove code comment as the remaining synchronize_irq() is self
explanatory (Daniel)
v4:
- drm_irq_uninstall() implies synchronize_irq(), so no need to call it
explicitly (Daniel)
Reference: https://lkml.org/lkml/2015/2/11/205
Reported-and-bisected-by: Klaus Ethgen <Klaus@Ethgen.ch>
Cc: stable@vger.kernel.org
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
UMS is no more!
Cc: Imre Deak <imre.deak@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
With Ville's rework to use drm_crtc_vblank_on/off the core will take
care of rejecting drm_vblank_get calls when the pipe is off. Also the
core won't call the get_vblank_counter hooks in that case either. And
since we've dropped ums support recently we can now remove these
hacks, yay!
Noticed while trying to answer questions Laurent had about how the new
atomic helpers work.
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Replace the valleyview_set_rps() and gen6_set_rps() calls with
intel_set_rps() which itself does the IS_VALLEYVIEW() check. The
code becomes simpler since the callers don't have to do this check
themselves.
Most of the change was performed with the following semantic patch:
@@
expression E1, E2, E3;
@@
- if (IS_VALLEYVIEW(E1)) {
- valleyview_set_rps(E2, E3);
- } else {
- gen6_set_rps(E2, E3);
- }
+ intel_set_rps(E2, E3);
Adding intel_set_rps() and making valleyview_set_rps() and gen6_set_rps()
static was done manually. Also valleyview_set_rps() had to be moved a
bit to avoid a forward declaration.
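The new wrapper then looks roughly like this:

  void intel_set_rps(struct drm_device *dev, u8 val)
  {
          if (IS_VALLEYVIEW(dev))
                  valleyview_set_rps(dev, val);
          else
                  gen6_set_rps(dev, val);
  }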
v2: Use a less greedy semantic patch
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
You can _never_ assert that a lock is not held, except in some very
restricted corner cases where it's guaranteed that your code is running
single-threaded (e.g. driver load before you've published any pointers
leading to that lock).
In addition the early return breaks a bunch of testcases since with
highly concurrent hangcheck stress tests the reset fails to work and
the test doesn't recover and times out.
This regression has been introduced in
commit b8d24a0656
Author: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Date: Wed Jan 28 17:03:14 2015 +0200
drm/i915: Remove nested work in gpu error handling
Aside: It is possible to check whether a given task doesn't hold a
lock, but only when lockdep is enabled, using the lockdep_assert_held
stuff.
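For reference, the positive assertion lockdep does support (only active
when lockdep is enabled) looks like:

  lockdep_assert_held(&dev->struct_mutex);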
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=88908
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Now that we declare gpu errors only through our own dedicated
hangcheck workqueue there is no need to have a separate workqueue
for handling the resetting and waking up the clients, as the deadlock
concerns are no more.
The only exception is i915_debugfs::i915_set_wedged, which triggers
error handling through process context. However, as this is only used
through the test harness, it is the responsibility of the test harness not
to introduce hangs through both the debug interface and the hangcheck
mechanism at the same time.
Remove gpu_error.work and let the hangcheck work do the tasks it used to.
v2: Add a big warning sign into i915_debugfs::i915_set_wedged (Chris)
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When run as a timer, i915_hangcheck_elapsed() must adhere to all the
rules of running in a softirq context. This is advantageous to us as we
want to minimise the risk that a driver bug will prevent us from
detecting a hung GPU. However, that is irrelevant if the driver bug
prevents us from resetting and recovering. Still it is prudent not to
rely on mutexes inside the checker, but given the coarseness of
dev->struct_mutex doing so is extremely hard.
Give in and run from a work queue, i.e. outside of softirq.
v2: Use own workqueue to avoid deadlocks (Daniel)
Cleanup commit msg and add comment to i915_queue_hangcheck() (Chris)
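A sketch of the rescheduling with a dedicated workqueue; the
hangcheck_wq/hangcheck_work member names are assumptions here:

  queue_delayed_work(dev_priv->gpu_error.hangcheck_wq,
                     &dev_priv->gpu_error.hangcheck_work,
                     round_jiffies_up_relative(DRM_I915_HANGCHECK_JIFFIES));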
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk> (v1)
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
[danvet: Remove accidental kerneldoc comment starter, to appease the 0
day builder.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Self-explanatory code is better code.
Cc: Dave Airlie <airlied@redhat.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
To match the semantics of drm_crtc->state, which this will eventually
become. The allocation of the memory for config will be fixed in a
followup patch. By adding the extra _config field to intel_crtc it was
possible to generate this entire patch with the cocci script below.
@@ @@
struct intel_crtc {
...
-struct intel_crtc_state config;
+struct intel_crtc_state _config;
+struct intel_crtc_state *config;
...
}
@@ struct intel_crtc *crtc; @@
-memset(&crtc->config, 0, sizeof(crtc->config));
+memset(crtc->config, 0, sizeof(*crtc->config));
@@ @@
__intel_set_mode(...) {
<...
-to_intel_crtc(crtc)->config = *pipe_config;
+(*(to_intel_crtc(crtc)->config)) = *pipe_config;
...>
}
@@ @@
intel_crtc_init(...) {
...
WARN_ON(drm_crtc_index(&intel_crtc->base) != intel_crtc->pipe);
+intel_crtc->config = &intel_crtc->_config;
return;
...
}
@@ struct intel_crtc *crtc; @@
-&crtc->config
+crtc->config
@@ struct intel_crtc *crtc; identifier member; @@
-crtc->config.member
+crtc->config->member
@@ expression E; @@
-&(to_intel_crtc(E)->config)
+to_intel_crtc(E)->config
@@ expression E; identifier member; @@
-to_intel_crtc(E)->config.member
+to_intel_crtc(E)->config->member
v2: Clarify manual changes by splitting them into another patch. (Matt)
Improve cocci script to generate even more of the changes. (Ander)
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
- refactor i915/snd-hda interaction to use the component framework (Imre)
- psr cleanups and small fixes (Rodrigo)
- a few perf w/a from Ken Graunke
- switch to atomic plane helpers (Matt Roper)
- wc mmap support (Chris Wilson & Akash Goel)
- smaller things all over
* tag 'drm-intel-next-2015-01-17' of git://anongit.freedesktop.org/drm-intel: (40 commits)
drm/i915: Update DRIVER_DATE to 20150117
i915: reuse %ph to dump small buffers
drm/i915: Ensure the HiZ RAW Stall Optimization is on for Cherryview.
drm/i915: Enable the HiZ RAW Stall Optimization on Broadwell.
drm/i915: PSR link standby at debugfs
drm/i915: group link_standby setup and let this info visible everywhere.
drm/i915: Add missing vbt check.
drm/i915: PSR HSW/BDW: Fix inverted logic at sink main_link_active bit.
drm/i915: PSR VLV/CHV: Remove condition checks that only applies to Haswell.
drm/i915: VLV/CHV PSR needs to exit PSR on every flush.
drm/i915: Fix kerneldoc for i915 atomic plane code
drm/i915: Don't pretend SDVO hotplug works on 915
drm/i915: Don't register HDMI connectors for eDP ports on VLV/CHV
drm/i915: Remove I915_HAS_HOTPLUG() check from i915_hpd_irq_setup()
drm/i915: Make hpd arrays big enough to avoid out of bounds access
Revert "drm/i915/chv: Use timeout mode for RC6 on chv"
drm/i915: Improve HiZ throughput on Cherryview.
drm/i915: Reset CSB read pointer in ring init
drm/i915: Drop unused position fields (v2)
drm/i915: Move to atomic plane helpers (v9)
...
Backmerge Linus tree after rc5 + drm-fixes went in.
There were a few amdkfd conflicts I wanted to avoid,
and Ben requested this for nouveau also.
Conflicts:
drivers/gpu/drm/amd/amdkfd/Makefile
drivers/gpu/drm/amd/amdkfd/kfd_chardev.c
drivers/gpu/drm/amd/amdkfd/kfd_mqd_manager.c
drivers/gpu/drm/amd/amdkfd/kfd_priv.h
drivers/gpu/drm/amd/include/kgd_kfd_interface.h
drivers/gpu/drm/i915/intel_runtime_pm.c
drivers/gpu/drm/radeon/radeon_kfd.c
The dev_priv->display.hpd_irq_setup hook is optional, so we can move the
I915_HAS_HOTPLUG() check out of i915_hpd_irq_setup() and only set up the
hook when hotplug support is present.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
intel_hpd_irq_handler() walks the passed in hpd[] array assuming it
contains HPD_NUM_PINS elements. Currently that's not true as we don't
specify an explicit size for the arrays when initializing them. Avoid
the out of bounds accesses by specifying the size for the arrays.
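A sketch of the change for one of the tables (entries abbreviated); the
only difference is the explicit HPD_NUM_PINS bound:

  static const u32 hpd_status_g4x[HPD_NUM_PINS] = {
          [HPD_CRT] = CRT_HOTPLUG_INT_STATUS,
          [HPD_PORT_B] = PORTB_HOTPLUG_INT_STATUS_G4X,
          [HPD_PORT_C] = PORTC_HOTPLUG_INT_STATUS_G4X,
          [HPD_PORT_D] = PORTD_HOTPLUG_INT_STATUS_G4X,
  };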
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Conflicts:
drivers/gpu/drm/i915/intel_runtime_pm.c
Separate branch so that Takashi can also pull just this refactoring
into sound-next.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
We apply the RPS interrupt workaround on VLV everywhere except when
writing the mask directly during idling the GPU. For consistency do this
also there.
While at it also extend the code comment about affected platforms.
I couldn't reproduce the issue on VLV fixed by this workaround, by
removing the workaround from everywhere, while it's 100% reproducible on
SNB using igt/gem_reset_stats/ban-ctx-render. So also add a note that
it hasn't been verified if the workaround really applies to VLV/CHV.
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
In
commit dbea3cea69
Author: Imre Deak <imre.deak@intel.com>
Date: Mon Dec 15 18:59:28 2014 +0200
drm/i915: sanitize RPS resetting during GPU reset
we disable RPS interrupts during GPU resetting, but don't apply the
necessary GEN6 HW workaround. This leads to a HW lockup during a
subsequent "looping batchbuffer" workload. This is triggered by the
testcase that submits exactly this kind of workload after a simulated
GPU reset. I'm not sure how likely the bug would have triggered
otherwise, since we would have applied the workaround anyway shortly
after the GPU reset, when enabling GT powersaving from the deferred
work.
This may also fix unrelated issues, since during driver loading /
suspending we also disable RPS interrupts and so we also had a short
window during the rest of the loading / resuming where a similar
workload could run without the workaround applied.
v2:
- separate the fix to route RPS interrupts to the CPU on GEN9 too
to a separate patch (Daniel)
Bisected-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Testcase: igt/gem_reset_stats/ban-ctx-render
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=87429
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
- plane handling refactoring from Matt Roper and Gustavo Padovan in prep for
atomic updates
- fixes and more patches for the seqno to request transformation from John
- docbook for fbc from Rodrigo
- prep work for dual-link dsi from Gaurav Singh
- crc fixes from Ville
- special ggtt views infrastructure from Tvrtko Ursulin
- shadow patch copying for the cmd parser from Brad Volkin
- execlist and full ppgtt by default on gen8, for testing for now
* tag 'drm-intel-next-2014-12-19' of git://anongit.freedesktop.org/drm-intel: (131 commits)
drm/i915: Update DRIVER_DATE to 20141219
drm/i915: Hold runtime PM during plane commit
drm/i915: Organize bind_vma funcs
drm/i915: Organize INSTDONE report for future.
drm/i915: Organize PDP regs report for future.
drm/i915: Organize PPGTT init
drm/i915: Organize Fence registers for future enablement.
drm/i915: tame the chattermouth (v2)
drm/i915: Warn about missing context state workarounds only once
drm/i915: Use true PPGTT in Gen8+ when execlists are enabled
drm/i915: Skip gunit save/restore for cherryview
drm/i915/chv: Use timeout mode for RC6 on chv
drm/i915: Add GPGPU_THREADS_DISPATCHED to the register whitelist
drm/i915: Tidy up execbuffer command parsing code
drm/i915: Mark shadow batch buffers as purgeable
drm/i915: Use batch length instead of object size in command parser
drm/i915: Use batch pools with the command parser
drm/i915: Implement a framework for batch buffer pools
drm/i915: fix use after free during eDP encoder destroying
drm/i915/skl: Skylake also supports DP MST
...
The flip stall detector kicks in when pending>=INTEL_FLIP_COMPLETE. That
means if we first call intel_prepare_page_flip() but don't call
intel_finish_page_flip(), the next stall check will erroneously think
the page flip was somehow stuck.
With enough debug spew emitted from the interrupt handler my 830 hangs
when this happens. My theory is that the previous vblank interrupt gets
sufficiently delayed that the handler will see the pending bit set in
IIR, but ISR still has the bit set as well (ie. the flip was processed
by CS but didn't complete yet). In this case the handler will proceed
to call intel_check_page_flip() immediately after
intel_prepare_page_flip(). It then tries to print a backtrace for the
stuck flip WARN, which apparently results in way too much debug spew
delaying interrupt processing further. That then seems to cause an
endless loop in the interrupt handler, and the machine is dead until
the watchdog kicks in and reboots. At least limiting the number of
iterations of the loop in the interrupt handler also prevented the
hang.
So it seems better to not call intel_prepare_page_flip() without
immediately calling intel_finish_page_flip(). The IIR/ISR trickery
avoids races here so this is a perfectly safe thing to do.
v2: Fix typo in commit message (checkpatch)
Cc: stable@vger.kernel.org
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=88381
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=85888
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Paulo noticed that we don't enable RPS interrupts via PM_IER in
gen6_enable_rps_interrupts(). This wasn't a problem so far, since the
only place we disabled RPS interrupts was during system/runtime suspend
and after that we reenable all interrupts in the IRQ pre/postinstall
hooks.
In the next patch we'll disable/reenable RPS interrupts during GPU reset
too, but not call IRQ uninstall, pre/postinstall hooks, so there the
above wouldn't work. The logical place for programming PM_IER is
gen6_enable_rps_interrupts() and this also makes the function more
symmetric with gen6_disable_rps_interrupts(), so move the programming
there from the postinstall hooks.
Note that these changes don't affect the ILK RPS interrupt code, which
could be sanitized in a similar way. But that can be done as a
follow-up.
Credits-to: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
We consistently use the _irq_handler postfix for functions called in
hardirq context. Especially when it's a non-static function hardirq is
a crazy enough calling context to warrant this level of OCD. So rename
it.
Cc: Thomas Daniel <thomas.daniel@intel.com>
Reviewed-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
irq_mask should include all IRQ bits that we want to mask, but atm we
set it incorrectly to the inverse of this. If the mask is used
subsequently to enable/disable some IRQ bits, we may unintentionally
unmask unrelated IRQs. I can't see any way that this can lead to a real
problem in the current -nightly code, since the first place the mask
will be used next (after a suspend/resume cycle) is in
valleyview_irq_postinstall(), but the mask is reset there to its proper
value.
This causes a problem in the upstream kernel though, where - due to another
issue - the mask is used in the above way to disable only the display IRQs.
This other issue is fixed by:
commit 950eabaf5a
Author: Imre Deak <imre.deak@intel.com>
Date: Mon Sep 8 15:21:09 2014 +0300
drm/i915: vlv: fix display IRQ enable/disable
Interestingly, even with the above two bugs, we shouldn't in theory have
any real problems (arguably a famous last sentence:). That's because
even if we unmask something unintentionally via the VLV_IMR/VLV_IER
register the master IRQ masking bit in VLV_MASTER_IER is still set and
should prevent all i915 interrupts. According to my testing on an ASUS
T100 with DSI output this isn't the case at least with the
MIPIA_INTERRUPT. Leaving this one unmasked in IMR/IER, while having
VLV_MASTER_IER set to 0 may lead to a lockup during system suspend as
shown in the bugzilla ticket below. This fix should get rid of the
problem reported there in upstream and older kernels.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=85920
Cc: stable@vger.kernel.org (v3.15+)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
After a bit of irc discussion we've concluded that it would be prudent
to check that callers use the mask/enable parameters correctly. So add
a WARN_ON.
Spurred by Damien's bugfix which added _MASKED_FIELD.
v2: We use WARN_ON(1) a lot to catch default cases in switch blocks
which should always be extended. So this doesn't really work. Dunno
why gcc only started complaining when I moved the WARN out of the
static inline helper to address feedback from Jani.
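One plausible form of the check, asserting that the enabled bits are a
subset of the bits being updated:

  WARN_ON(enabled_irq_mask & ~interrupt_mask);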
Cc: Damien Lespiau <damien.lespiau@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Added the request structure's 'uniq' identifier to the trace information. Also
renamed the '_complete' trace event to '_notify' as it actually happens in the
IRQ 'notify_ring()' function. The intention is to add a new '_complete' trace
event which occurs when a request structure is actually marked as complete.
However, at the moment the completion status is re-tested every time the query
is made so there isn't a completion event as such.
v2: New patch added to series.
v3: Rebased to remove completion caching as that is apparently contentious.
Change-Id: Ic9bcde67d175c6c03b96217cdcb6e4cc4aa45d67
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Almost everywhere that called i915_seqno_passed() was really asking 'has the
given seqno popped out of the hardware yet?'. Thus it had to query the current
hardware seqno and then do a signed delta comparison (which copes with wrapping
around zero but not with seqno values more than 2GB apart, although the latter
is unlikely!).
Now that the majority of seqno instances have been replaced with request
structures, it is possible to convert this test to be request based as well.
There is now a 'i915_gem_request_completed()' function which takes a request and
returns true or false as appropriate. Note that this currently just wraps up the
original _passed() test but a later patch in the series will reduce this to
simply returning a cached internal value, i.e.:
'_completed(req) { return req->completed; }'
This checkin converts almost all _seqno_passed() calls. The only one left is in
the semaphore code which still requires seqnos not request structures.
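A sketch of the helper and the signed-delta test it currently wraps
(signatures assumed from the description above):

  static inline bool i915_seqno_passed(u32 seq1, u32 seq2)
  {
          return (s32)(seq1 - seq2) >= 0;
  }

  static inline bool
  i915_gem_request_completed(struct drm_i915_gem_request *req,
                             bool lazy_coherency)
  {
          u32 seqno = req->ring->get_seqno(req->ring, lazy_coherency);

          return i915_seqno_passed(seqno, req->seqno);
  }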
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Drop hunk touching the trace_irq code since I've dropped the
patch which converts that, and resolve resulting conflict.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
More seqno value to request structure conversions. Note, this change temporarily
moves the 'get_seqno()' call inside ring_idle() but this will disappear again in
a later patch when i915_seqno_passed() itself is converted.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Atm, igt/gem_reset_stats can trigger the recently added WARN on
left-over PM_IIR bits in gen6_enable_rps_interrupts(). There are two
reasons for this:
1. we call intel_enable_gt_powersave() without a preceding
intel_disable_gt_powersave()
2. gen6_disable_rps_interrupts() doesn't mask interrupts in PM_IMR
1. means RPS interrupts will remain enabled and can be serviced during
the HW initialization after a GPU reset. 2. means even if we called
gen6_disable_rps_interrupts() any new RPS interrupt during RPS
initialization would still propagate to PM_IIR too early (though
wouldn't be serviced).
This patch solves the 2. issue by also masking interrupts in PM_IMR, the
following patch fixes 1. getting rid of the WARN. This also makes
intel_enable_gt_powersave() and intel_disable_gt_powersave() more
symmetric.
Since gen6_disable_rps_interrupts() is called during driver loading with
i915 interrupts disabled add a new version of gen6_disable_pm_irq() that
doesn't WARN for this.
Also while at it, get the irq_lock around the whole PM_IMR/IER/IIR
programming sequence and make sure that any queued PM_IIR bit is also
cleared.
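A rough sketch of the disable side with everything under irq_lock;
__gen6_disable_pm_irq() stands for the non-WARNing variant mentioned
above:

  spin_lock_irq(&dev_priv->irq_lock);
  __gen6_disable_pm_irq(dev_priv, dev_priv->pm_rps_events);
  /* clear anything already queued so nothing leaks into the next enable */
  I915_WRITE(gen6_pm_iir(dev_priv), dev_priv->pm_rps_events);
  dev_priv->rps.pm_iir = 0;
  spin_unlock_irq(&dev_priv->irq_lock);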
The WARN was caught by PRTS after I sent my previous RPS sanitizing
patchset and I could easily reproduce it on HSW. To actually fix it we
also need the next patch.
Reported-by: He, Shuang <shuang.he@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We don't really synchronously turn them off from debugfs. We try to
avoid hitting them too badly by waiting one vblank, but apparently the
irq handler can still race through that gap.
Since this isn't really all that important for testcases, and only matters
for debugging CRC issues, let's tune it down to a debug message.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=82602
Cc: Damien Lespiau <damien.lespiau@intel.com>
Acked-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
On gen4 and earlier the GPU reset also resets the display, so we should
protect against concurrent modeset operations. Grab all the modeset locks
around the entire GPU reset dance, remembering first to dislodge any
pending page flip to make sure we don't deadlock. Any pageflip coming
in between these two steps should fail anyway due to reset_in_progress,
so this should be safe.
This fixes a lot of failed asserts in the modeset code when there's a
modeset racing with the reset. Naturally the asserts aren't happy when
the expected state has disappeared.
v2: Drop UMS checks, complete pending flips after the reset (Daniel)
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There's quite a few bug reports with error states where the error
reasons makes just about no sense at all. Like dying on tlbs for a
display plane that's not even there. Also users don't really report a
lot of bad side effects generally, just the error states.
Furthermore we don't even enable these interrupts any more on gen5+
(though the handling code is still there). So this mostly concerns old
platforms.
Given all that let's make our lives a bit easier and stop capturing
error states, in the hopes that we can just ignore them. In case
that's not true and the gpu indeed dies the hangcheck should
eventually kick in. And I've left some debug logging in to make this case
noticeable. Referenced bug is just an example.
v2: Fix missing \n Jani spotted.
References: https://bugs.freedesktop.org/show_bug.cgi?id=82095
References: https://bugs.freedesktop.org/show_bug.cgi?id=85944
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The final arrangement of updating timer->expires and calling mod_timer()
used in
commit 672e7b7c18
Author: Chris Wilson <chris@chris-wilson.co.uk>
Date: Wed Nov 19 09:47:19 2014 +0000
drm/i915: Don't continually defer the hangcheck
turns out to be very unsafe. Try again.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With the deprecation of UMS, and by association DRI1, we have a tough
choice when updating the ring access routines. We either rewrite the
DRI1 routines blindly without testing (so likely to be broken) or take
the liberty of declaring them no longer supported and remove them
entirely. This takes the latter approach.
v2: Also remove the DRI1 sarea updates
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Fix rebase conflicts.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When disabling the RPS interrupts there is a tricky dependency between
the thread disabling the interrupts, the RPS interrupt handler and the
corresponding RPS work. The RPS work can reenable the interrupts, so
there is no straightforward order in the disabling thread to (1) make
sure that any RPS work is flushed and to (2) disable all RPS
interrupts. Currently this is solved by masking the interrupts using two
separate mask registers (first level display IMR and PM IMR) and doing
the disabling when all first level interrupts are disabled.
This works, but the requirement to run with all first level interrupts
disabled is unnecessary, making the suspend / unload time ordering of RPS
disabling wrt. other uninitialization steps difficult and error prone.
Removing this restriction allows us to disable RPS early during suspend
/ unload and forget about it for the rest of the sequence. By adding a
more explicit method for avoiding the above race, it also becomes easier
to prove its correctness. Finally currently we can hit the WARN in
snb_update_pm_irq(), when a final RPS work runs with the first level
interrupts already disabled. This won't lead to any problem (due to the
separate interrupt masks), but with the change in this and the next
patch we can get rid of the WARN, while leaving it in place for other
scenarios.
To address the above points, add a new RPS interrupts_enabled flag and
use this during RPS disabling to avoid requeuing the RPS work and
reenabling of the RPS interrupts. Since the interrupt disabling happens
now in intel_suspend_gt_powersave(), we will disable RPS interrupts
explicitly during suspend (and not just through the first level mask),
but there is no problem doing so, it's also more consistent and allows
us to unify more of the RPS disabling during suspend and unload time in
the next patch.
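A rough sketch of the disable path using the new flag (the RPS work checks
the same flag under irq_lock and bails out instead of re-enabling
interrupts):

  spin_lock_irq(&dev_priv->irq_lock);
  dev_priv->rps.interrupts_enabled = false;
  spin_unlock_irq(&dev_priv->irq_lock);

  cancel_work_sync(&dev_priv->rps.work);

  gen6_disable_rps_interrupts(dev);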
v2/v3:
- rebase on patch "drm/i915: move rps irq disable one level up" in the
patchset
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Atm we first enable the RPS interrupts then we clear any pending ones.
By this we could lose an interrupt arriving after we unmasked it. This
may not be a problem as the caller should handle such a race, but logic
still calls for the opposite order. Also we can delay enabling the
interrupts until after all the RPS initialization is ready with the
following order:
1. disable left-over RPS (earlier via intel_uncore_sanitize)
2. clear any pending RPS interrupts
3. initialize RPS
4. enable RPS interrupts
This also allows us to do the 2. and 4. step the same way for all
platforms, so let's follow this order to simplify things.
Also make sure any queued interrupts are also cleared.
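Following that order, the enable side could look roughly like this (helper
names assumed from the existing PM irq code):

  spin_lock_irq(&dev_priv->irq_lock);
  WARN_ON(dev_priv->rps.pm_iir);
  WARN_ON(I915_READ(gen6_pm_iir(dev_priv)) & dev_priv->pm_rps_events);
  gen6_enable_pm_irq(dev_priv, dev_priv->pm_rps_events);
  spin_unlock_irq(&dev_priv->irq_lock);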
v2:
- rebase on the GEN9 patches where we don't support RPS yet, so we
mustn't enable RPS interrupts on it (Paulo)
v3:
- avoid enabling RPS interrupts on GEN>9 too (Paulo)
- clarify the RPS init sequence in the log message (Chris)
- add POSTING_READ to gen6_reset_rps_interrupts() (Paulo)
- WARN if any PM_IIR bits are set in gen6_enable_rps_interrupts()
(Paulo)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This extends
commit 132f3f1767
Author: Imre Deak <imre.deak@intel.com>
Date: Mon Nov 10 15:34:33 2014 +0200
drm/i915: WARN if we receive any gen9 rps interrupts
to GEN>9 platforms as suggested by Paulo.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With multiple rings, we may continue to render on the blitter whilst
executing an infinite shader on the render ring. As we currently rearm
the timer with each execbuf, in this scenario the hangcheck will never
fire and we will never detect the lockup on the render ring. Instead,
only arm the timer once per hangcheck, so that hangcheck runs more
frequently.
v2: Rearrange code to avoid triggering a BUG_ON in add_timer from
softirq context.
Testcase: igt/gem_reset_stats/defer-hangcheck*
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=86225
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Throw away the hand rolled display irq setup code on chv, and instead
just call vlv_display_irq_postinstall() and vlv_display_irq_uninstall().
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>