For HPD storm detection we now mask out individual interrupt source
bits. We have already seen a case where HPD interrupt enable bits
were assigned to the wrong pins. To track these conditions more
easily add some debugging messages.
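Roughly, the per-pin loop becomes something like this (a condensed sketch; hpd_irq_storm_detected stands in for the actual storm bookkeeping, the other names follow this era's code):

  for (i = 1; i < HPD_NUM_PINS; i++) {
      if (!(hpd[i] & hotplug_trigger))
          continue;

      if (dev_priv->hpd_stats[i].hpd_mark != HPD_ENABLED) {
          DRM_DEBUG_KMS("Received HPD interrupt although disabled\n");
          continue;
      }

      if (hpd_irq_storm_detected(dev_priv, i)) {
          /* mask out only this pin's interrupt source bit */
          dev_priv->hpd_event_bits &= ~(1 << i);
          DRM_DEBUG_KMS("HPD interrupt storm detected on pin %d\n", i);
          storm_detected = true;
      }
  }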
v2: Spelling fixes as suggested by Jani Nikula <jani.nikula@linux.intel.com>
Signed-off-by: Egbert Eich <eich@suse.de>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Currently, the register access code is split between i915_drv.c and
intel_pm.c. It only bears a superficial resemblance to the rest of the
power management code, so move it all into its own file. This is to ease
further patches to enforce serialised register access.
v2: Scan for random abuse of I915_WRITE_NOTRACE
v3: Take the opportunity to rename the GT functions as uncore. Uncore is
the term used by the hardware design (and bspec) for all functions
outside of the GPU (and CPU) cores in what is also known as the System
Agent.
v4: Rebase onto SNB rc6 fixes
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Wrestle patch into applying and inline
intel_uncore_early_sanitize (plus move the old comment to the new
function). Also keep the _sanitize postfix for intel_uncore_sanitize.]
[danvet: Squash in fixup spotted by Chris on irc: We need to call
intel_pm_init before intel_uncore_sanitize since the latter will call
cancel_work on the delayed rps setup work the former initializes.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Make the uevent strings part of the user API for people who wish to
write their own listeners.
v2: Make a space in the string concatenation. (Chad)
Use the "UEVENT" suffix intead of "EVENT" (Chad)
Make kernel-doc parseable Docbook comments (Daniel)
v3: Undid reset change introduced in last submission (Daniel)
Fixed up comments to address removal changes.
Thanks to Daniel Vetter for a majority of the parity error comments.
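For illustration, a userspace listener boils down to reading a NETLINK_KOBJECT_UEVENT socket and matching on those strings (a minimal sketch, error handling elided; a uevent payload is really a series of NUL-separated strings, which a real listener would walk in full):

  #include <linux/netlink.h>
  #include <stdio.h>
  #include <string.h>
  #include <sys/socket.h>

  int main(void)
  {
      struct sockaddr_nl addr = {
          .nl_family = AF_NETLINK,
          .nl_groups = 1, /* kernel uevent multicast group */
      };
      char buf[4096];
      ssize_t len;
      int fd = socket(PF_NETLINK, SOCK_DGRAM, NETLINK_KOBJECT_UEVENT);

      bind(fd, (struct sockaddr *)&addr, sizeof(addr));
      while ((len = recv(fd, buf, sizeof(buf) - 1, 0)) > 0) {
          buf[len] = '\0';
          if (strstr(buf, "ERROR")) /* cf. I915_ERROR_UEVENT */
              printf("i915 error uevent: %s\n", buf);
      }
      return 0;
  }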
CC: Chad Versace <chad.versace@linux.intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It was very similar to ironlake_irq_postinstall, so IMHO merging both
functions results in code that is easier to maintain.
With this change, all the irq handler vfuncs between ironlake and
ivybridge are now unified.
v2: Add "(" and ")" to make at least one vim user much happier (Chris)
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The IVB functions are exactly the same as the ILK ones, with the
exception of the register bits. So add IVB/HSW support to
ironlake_enable_vblank and ironlake_disable_vblank, then kill the
ivybridge functions.
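Conceptually the unified function just has to pick the right register bit (a sketch; the two macro names are illustrative of the gen-specific definitions):

  static int ironlake_enable_vblank(struct drm_device *dev, int pipe)
  {
      drm_i915_private_t *dev_priv = dev->dev_private;
      unsigned long irqflags;
      uint32_t bit = (INTEL_INFO(dev)->gen >= 7) ?
                     DE_PIPE_VBLANK_IVB(pipe) : DE_PIPE_VBLANK(pipe);

      spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
      ironlake_enable_display_irq(dev_priv, bit);
      spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);

      return 0;
  }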
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
And then rename it to ironlake_irq_handler. Also move
ilk_gt_irq_handler up to avoid forward declarations.
In the previous patches I did small modifications to both
ironlake_irq_handler and ivybridge_irq_handler so they became very
similar functions. Now it should be very easy to verify that all we
need to add ILK/SNB support is to call ilk_gt_irq_handler, call
ilk_display_irq_handler and avoid reading pm_iir on gen 5.
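The merged handler then has roughly this shape (a sketch, not the verbatim result):

  static irqreturn_t ironlake_irq_handler(int irq, void *arg)
  {
      ...
      gt_iir = I915_READ(GTIIR);
      if (gt_iir) {
          if (INTEL_INFO(dev)->gen >= 6)
              snb_gt_irq_handler(dev, dev_priv, gt_iir);
          else
              ilk_gt_irq_handler(dev, dev_priv, gt_iir);
          I915_WRITE(GTIIR, gt_iir);
          ret = IRQ_HANDLED;
      }

      de_iir = I915_READ(DEIIR);
      if (de_iir) {
          if (INTEL_INFO(dev)->gen >= 7)
              ivb_display_irq_handler(dev, de_iir);
          else
              ilk_display_irq_handler(dev, de_iir);
          I915_WRITE(DEIIR, de_iir);
          ret = IRQ_HANDLED;
      }

      if (INTEL_INFO(dev)->gen >= 6) {
          u32 pm_iir = I915_READ(GEN6_PMIIR); /* register absent on gen 5 */
          if (pm_iir) {
              gen6_rps_irq_handler(dev_priv, pm_iir);
              I915_WRITE(GEN6_PMIIR, pm_iir);
              ret = IRQ_HANDLED;
          }
      }
      ...
  }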
v2: - Rebase due to changes on the previous patches
- Move pm_iir to a tighter scope (Chris)
- Change some Gen checks for readability
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We have this POSTING_READ inside ironlake_irq_handler. I suppose we
also want it on IVB because we want to stop the IRQ handler as soon as
possible at this point.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The ironlake_irq_handler and ivybridge_irq_handler functions do
basically the same thing, but they have different implementation
styles. With this patch we reorganize ironlake_irq_handler in a way
that makes it look very similar to ivybridge_irq_handler.
One of the advantages of this new function style is that we don't
write 0 to the IIR registers anymore.
v2: - Rebase due to changes on previous patches
- Move pm_iir to a tighter scope (Chris)
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The register doesn't exist on Gen 5.
v2: Simplify checks since pm_iir is always 0 on Gen 5 (Chris)
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Just like we did with ilk_display_irq_handler.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It's the code that deals with de_iir.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
After Daniel's latest changes it's now equal to
ironlake_irq_preinstall.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
To run hangcheck in the near future.
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Again extract a common helper. For the postinstall hook things are a
bit more complicated since we have more cases on ilk-hsw/vlv here.
But since vlv was clearly broken by failing to initialize
dev_priv->gt_irq_mask correctly the shared code is clearly justified.
Also kill the PMIER setting in the async rps enable work. It should
have been safe, but it also clearly looked rather fragile. PMIER setup
is now all done in the irq pre/postinstall hooks.
With this we now have the usual interrupt register sequence for GT/PM
irq registers (see the sketch after this list):
- IER is set up once with all the interrupts we ever need in the
postinstall hook and never touched again. Exceptions are SDEIER,
which is touched in the preinstall hook (when the irq handler isn't
enabled) and then only from the irq handler. And DEIER/VLV_IER,
which is used in the irq handler but also written to once in the
postinstall hook. But since that write is essentially what enables
the interrupt and we should always have MSI interrupts we should be
safe. In case we ever have non-MSI interrupts we'd be screwed.
- IIR is cleared in the postinstall hook before we enable/unmask the
respective interrupt sources. Hence we can't steal an interrupt
event and accidentally trigger the spurious interrupt logic in the
core kernel. Note that after some discussion with Ben Widawsky we
think that we actually should clear the IIR registers in the
preinstall hook. But doing that is a much larger patch series.
- IMR regs are (usually) all masked off. Those are the only regs
changed at runtime, which is all protected by dev_priv->irq_lock.
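In code the shared postinstall helper follows that sequence (a sketch; gt_irqs and pm_irqs stand in for the per-platform interrupt selections):

  static void gen5_gt_irq_postinstall(struct drm_device *dev)
  {
      struct drm_i915_private *dev_priv = dev->dev_private;

      /* IIR: clear stale events before unmasking anything */
      I915_WRITE(GTIIR, I915_READ(GTIIR));
      /* IMR: the runtime-managed mask, mostly masked off */
      I915_WRITE(GTIMR, dev_priv->gt_irq_mask);
      /* IER: set up once, never touched again */
      I915_WRITE(GTIER, gt_irqs);
      POSTING_READ(GTIER);

      if (INTEL_INFO(dev)->gen >= 6) {
          I915_WRITE(GEN6_PMIIR, I915_READ(GEN6_PMIIR));
          I915_WRITE(GEN6_PMIMR, 0xffffffff);
          I915_WRITE(GEN6_PMIER, pm_irqs);
          POSTING_READ(GEN6_PMIER);
      }
  }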
This unification also kills the cargo-culted read-modify-write PM
register setup for VECS. Interrupt setup is done without userspace
being able to interfere, so we better know what values we want to put
into those registers. RMW cycles otoh are really good at papering over
races, until stuff magically blows up and no one has a clue why.
v2: Touch the gen6+ PM interrupt registers only on gen6+.
v3: Improve the commit message to more clearly spell out why we want
to unify the code and what exactly changes.
Cc: Ben Widawsky <ben@bwidawsk.net>
Cc: Paulo Zanoni <przanoni@gmail.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Add a comment to explain why the l3 parity interrupt is
special.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Since the addition of VECS we have a slightly different enable
sequence for PM interrupts on ivb/hsw vs snb and vlv. Usually that
will end up in hard to track down surprises.
Hence unify things and since we have copies of this code in 3 places
now, extract it into its own little helper.
Note that this changes the irq preinstall sequence a bit for snb and
vlv: We now also clear the PM registers in the preinstall hook, in
addition to the PM register clearing/setup already done when actually
enabling rps. So this doesn't fix a bug but simply unifies the code
across all platforms. After the postinstall hook is similarly unified
we can rip out the then redundant PM interrupt setup from the rps
code.
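The extracted helper is roughly (a sketch under the same naming assumptions as the rest of this series):

  static void gen5_gt_irq_preinstall(struct drm_device *dev)
  {
      struct drm_i915_private *dev_priv = dev->dev_private;

      /* GT interrupts: mask and disable everything */
      I915_WRITE(GTIMR, 0xffffffff);
      I915_WRITE(GTIER, 0x0);
      POSTING_READ(GTIER);

      if (INTEL_INFO(dev)->gen >= 6) {
          /* ditto for the PM interrupts, which only exist on gen6+ */
          I915_WRITE(GEN6_PMIMR, 0xffffffff);
          I915_WRITE(GEN6_PMIER, 0x0);
          POSTING_READ(GEN6_PMIER);
      }
  }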
v3: Rebase on top of the retained double-GTIIR clearing. Also
resurrect the masking/disabling of the gen6+ PM interrupts as spotted
by Ben Widawsky.
v4: Move the DE interrupt reset code out of gen5_gt_irq_preinstall
back to ironlake_irq_preinstall where it really belongs. Spotted by
Paulo.
v5: Improve the commit message to more clearly spell out why we want
to unify the code and what exactly changes.
Cc: Paulo Zanoni <przanoni@gmail.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: s/GT/PM/ to fix up a comment which Ben spotted while
reviewing.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Move error state generation and stringification to its
own compilation unit. Sysfs also uses this so it can't be
under CONFIG_DEBUG_FS.
This fixes a regression introduced in
commit ef86ddced7
Author: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Date: Thu Jun 6 17:38:54 2013 +0300
drm/i915: add error_state sysfs entry
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=66814
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The code to handle it is broken - there's simply no code to clear CS
parser errors on gen5+. And behold, for all the other rings we also
don't enable it!
Leave the handling code itself in place just to be consistent with the
existing mess though. And in case someone feels like fixing it all up.
This has been erroneously enabled in
commit 12638c57f3
Author: Ben Widawsky <ben@bwidawsk.net>
Date: Tue May 28 19:22:31 2013 -0700
drm/i915: Enable vebox interrupts
Cc: Damien Lespiau <damien.lespiau@intel.com>
Cc: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that the rps interrupt locking isn't clearly separated (at least
conceptually) from all the other interrupt locking, having a different
lock stopped making sense: It protects much more than just the rps
workqueue it started out with. But with the addition of VECS the
separation started to blur and resulted in some more complex locking
for the ring interrupt refcount.
With this we can (again) unify the ringbuffer irq refcounts without
causing a massive confusion, but that's for the next patch.
v2: Explain better why the rps.lock once made sense and why no longer,
requested by Ben.
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
And kill the comment about it. Queueing work is a barrier type event,
no amount of locking will help in ordering things (as long as we queue
the work after having updated all relevant data structures). Also,
queue_work itself acts as a sufficient memory barrier.
Again on the surface this is just a tiny micro-optimization to reduce
the hold-time of dev_priv->irq_lock. But the better reason is that it
reduces superficial locking and so makes it clearer what we actually
need for correctness.
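The resulting pattern, sketched:

  spin_lock(&dev_priv->irq_lock);
  dev_priv->rps.pm_iir |= pm_iir;
  I915_WRITE(GEN6_PMIMR, dev_priv->rps.pm_iir);
  spin_unlock(&dev_priv->irq_lock);

  /* everything the work item needs is published above; queue_work
   * itself provides all the ordering we need */
  queue_work(dev_priv->wq, &dev_priv->rps.work);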
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The if (pm_iir & ~GEN6_PM_RPS_EVENTS) check was redundant. Otoh
adding a check for rps events allows us to avoid the spinlock grabbing
for VECS interrupts.
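So the helper can gate the locking on the rps events and leave the VECS notification lockless (a sketch):

  static void gen6_rps_irq_handler(struct drm_i915_private *dev_priv,
                                   u32 pm_iir)
  {
      if (pm_iir & GEN6_PM_RPS_EVENTS) {
          spin_lock(&dev_priv->irq_lock);
          /* ... accumulate pm_iir, mask it, queue the rps work ... */
          spin_unlock(&dev_priv->irq_lock);
      }

      /* the VECS user interrupt needs no spinlock at all */
      if (pm_iir & PM_VEBOX_USER_INTERRUPT)
          notify_ring(dev_priv->dev, &dev_priv->ring[VECS]);
  }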
v2: Drop misplaced hunk which now moved to the right patch.
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Since we only have one interrupt handler and interrupt handlers are
non-reentrant.
To drive the point really home give them all an _irq_handler suffix.
This is a tiny micro-optimization but even more important it makes it
clearer what locking we actually need. And in case someone screws this
up: lockdep will catch hardirq vs. other context deadlocks.
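In short (sketched for the two contexts):

  /* hardirq context - interrupts are already disabled: */
  spin_lock(&dev_priv->irq_lock);
  ...
  spin_unlock(&dev_priv->irq_lock);

  /* everywhere else the full dance is still required: */
  spin_lock_irqsave(&dev_priv->irq_lock, irqflags);
  ...
  spin_unlock_irqrestore(&dev_priv->irq_lock, irqflags);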
v2: Fix up compile fail.
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It's racy: There's no guarantee that we won't walk this code (due to a
pch fifo underrun interrupt) while someone is changing the pointers
around.
The only reason we do this is to use the right crtc for the pch fifo
underrun accounting. But we never expose this to userspace, so
essentially no one really cares if we use the "wrong" crtc.
So let's just rip it out.
With this patch fifo underrun code will always use crtc A for tracking
underruns on the (only) pch transcoder on LPT.
v2: Add a big comment explaining what's going on. Requested by Paulo.
v3: Fixup spelling in comment as spotted by Paulo.
Cc: Paulo Zanoni <przanoni@gmail.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Same treatment as for SERR_INT: If we clear only the bit for the pipe
we're enabling (but unconditionally) then we can always check for
possible underruns after having disabled the interrupt. That way pipe
underruns won't be lost, but at worst only get reported in a delayed
fashion.
v2: The same logic bug as in the SERR handling change also existed
here. The same bugfix of only reporting missed underruns when the
error interrupt was masked applies, too.
v3: Do the same fixes as for the SERR handling that Paulo suggested in
his review:
- s/%i/%c/ fix in the debug output
- move the DE_ERR_INT_IVB read into the respective if block
Cc: Paulo Zanoni <przanoni@gmail.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
[danvet: Fix up the checkpatch bikeshed Paulo noticed.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The current code won't report any fifo underruns on cpt if just one
pipe has fifo underrun reporting disabled. We can't enable the
interrupts, but we can still check the per-transcoder bits and so
report the underrun delayed if:
- We always clear the transcoder's bit (and none of the other bits)
when enabling.
- We check the transcoder's bit after disabling (to avoid racing with
the interrupt handler). See the sketch after this list.
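A condensed sketch of the resulting function (names follow this era's code; treat the details as illustrative):

  static void cpt_set_fifo_underrun_reporting(struct drm_device *dev,
                                              enum transcoder pch_transcoder,
                                              bool enable)
  {
      struct drm_i915_private *dev_priv = dev->dev_private;

      if (enable) {
          /* clear only this transcoder's bit, unconditionally */
          I915_WRITE(SERR_INT,
                     SERR_INT_TRANS_FIFO_UNDERRUN(pch_transcoder));
          ibx_enable_display_interrupt(dev_priv, SDE_ERROR_CPT);
      } else {
          /* read out the current state before changing it */
          uint32_t tmp = I915_READ(SERR_INT);
          bool was_enabled = !(I915_READ(SDEIMR) & SDE_ERROR_CPT);

          ibx_disable_display_interrupt(dev_priv, SDE_ERROR_CPT);

          /* report an underrun we would otherwise lose, but only if
           * the interrupt was masked and so couldn't fire itself */
          if (!was_enabled &&
              (tmp & SERR_INT_TRANS_FIFO_UNDERRUN(pch_transcoder)))
              DRM_DEBUG_KMS("uncleared pch fifo underrun on pch transcoder %c\n",
                            transcoder_name(pch_transcoder));
      }
  }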
v2: I've forgotten to actually remove the old SERR_INT clearing.
v3: Use transcoder_name as suggested by Paulo Zanoni. Paulo also
noticed a logic bug: When an underrun interrupt fires we report it
both in the interrupt handler and when checking for underruns when
disabling it in cpt_set_fifo_underrun_reporting. But that second check
is only required if the interrupt is disabled and we're switching off
underrun reporting (e.g. because we're disabling the crtc). Hence
check for that condition.
At first I wanted to rework the code to pass that bit of information
from the upper functions down to cpt_set_fifo_underrun_reporting. But
that turned out too messy. Hence the quick&dirty check whether the
south error interrupt source is masked off or not.
v4: Streamline the control flow a bit.
v5: s/pipe/pch transcoder/ in the dmesg output, suggested by Paulo.
v6: Review from Paulo:
- Reorder the was_enabled assignment to only read the register when we
need it. Also add a comment that we need to do that before updating
the register.
- s/%i/%c/ fix for the debug output.
- Fix the checkpatch complaint in the SERR_INT_TRANS_FIFO_UNDERRUN
#define.
v7: Hopefully put that elusive SERR hunk back into this patch, spotted
by Paulo.
Cc: Paulo Zanoni <przanoni@gmail.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This way all changes to SDEIMR go through the same function, with
the exception of the (single-threaded) setup/teardown code.
For paranoia again add an assert_spin_locked.
v2: For even more paranoia also sprinkle a spinlock assert over
cpt_can_enable_serr_int since we need to have that one there, too.
v3: Fix the logic of interrupt enabling, add enable/disable macros for
the simple cases in the fifo code and add a comment. All requested by
Paulo.
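The single point of access, roughly (this also shows the enable/disable convenience macros for the fifo code):

  static void ibx_display_interrupt_update(struct drm_i915_private *dev_priv,
                                           uint32_t interrupt_mask,
                                           uint32_t enabled_irq_mask)
  {
      uint32_t sdeimr = I915_READ(SDEIMR);

      sdeimr &= ~interrupt_mask;
      sdeimr |= (~enabled_irq_mask & interrupt_mask);

      assert_spin_locked(&dev_priv->irq_lock);

      I915_WRITE(SDEIMR, sdeimr);
      POSTING_READ(SDEIMR);
  }

  #define ibx_enable_display_interrupt(dev_priv, bits) \
          ibx_display_interrupt_update((dev_priv), (bits), (bits))
  #define ibx_disable_display_interrupt(dev_priv, bits) \
          ibx_display_interrupt_update((dev_priv), (bits), 0)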
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Soon we want to gut a lot of our existing assumptions about how many
address spaces an object can live in, and in doing so, embed the
drm_mm_node in the object (and later the VMA).
It's possible in the future we'll want to add more getter/setter
methods, but for now this is enough to enable the VMAs.
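The first getters are thin wrappers, e.g. (a sketch, assuming the object still carries a drm_mm_node pointer called gtt_space at this stage):

  static inline unsigned long
  i915_gem_obj_ggtt_offset(struct drm_i915_gem_object *o)
  {
      return o->gtt_space->start;
  }

  static inline bool
  i915_gem_obj_ggtt_bound(struct drm_i915_gem_object *o)
  {
      return o->gtt_space != NULL;
  }

  static inline unsigned long
  i915_gem_obj_ggtt_size(struct drm_i915_gem_object *o)
  {
      return o->gtt_space->size;
  }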
v2: Reworked commit message (Ben)
Added comments to the main functions (Ben)
sed -i "s/i915_gem_obj_set_color/i915_gem_obj_ggtt_set_color/" drivers/gpu/drm/i915/*.[ch]
sed -i "s/i915_gem_obj_bound/i915_gem_obj_ggtt_bound/" drivers/gpu/drm/i915/*.[ch]
sed -i "s/i915_gem_obj_size/i915_gem_obj_ggtt_size/" drivers/gpu/drm/i915/*.[ch]
sed -i "s/i915_gem_obj_offset/i915_gem_obj_ggtt_offset/" drivers/gpu/drm/i915/*.[ch]
(Daniel)
v3: Rebased on new reserve_node patch
Changed DRM_DEBUG_KMS to actually work (will need fixing later)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Just to keep the paranoia equal also sprinkle locking asserts over the
pipestat interrupt enable/disable functions.
Again this results in false positives in the interrupt setup. Add
bogo-locking for these and a big comment explaining why it's there and
that it's indeed unnecessary.
v2: Fix up the spelling fail Paulo spotted in comments.
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As getting error state no longer requires big kmallocs,
make error state accessible also from sysfs.
v2: - error state clearing (Chris Wilson)
- user hint, proper access mode bits and name (Daniel Vetter)
v3: release resources in proper order (Chris Wilson)
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Apply Chris' s/error_state/error/ bikeshed on the sysfs
name. Also update the dmesg message, spotted by Chris.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Not only was there an extra, but since we now kzalloc the error state,
we don't need either.
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Once we've found the context object programmed in CCID, there's no
need to look at the other objects in the list.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Our interrupt handler (in hardirq context) could race with the timer
(in softirq context), hence we need to hold the spinlock around the
call to ->hpd_irq_setup in intel_hpd_irq_handler, too.
But as an optimization (and more so to clarify things) we don't need
to do the irqsave/restore dance in the hardirq context.
Note also that on ilk+ the race isn't just against the hotplug
reenable timer, but also against the fifo underrun reporting. That one
also modifies the SDEIMR register (again protected by the same
dev_priv->irq_lock).
To lock things down again sprinkle an assert_spin_locked. But exclude
the functions touching SDEIMR for now, I want to extract them all into
a new helper function (like we do already for pipestat, display
interrupts and all the various gt interrupts).
v2: Add the missing 't' Egbert spotted in a comment.
v3: Actually fix the right misspelled comment (Paulo).
Cc: Egbert Eich <eich@suse.de>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The usual pattern for our sub-function irq_handlers is that they check
for the no-irq case themselves. This results in more streamlined code
in the upper irq handlers.
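I.e. the check moves into the sub-handler itself (sketch):

  static void intel_hpd_irq_handler(struct drm_device *dev,
                                    u32 hotplug_trigger,
                                    const u32 *hpd)
  {
      if (!hotplug_trigger)
          return;
      /* ... */
  }

so the callers can drop their own no-irq checks and just call it unconditionally.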
v2: Rebase on top of the i965g/gm sdvo hpd fix.
Cc: Egbert Eich <eich@suse.de>
Reviewed-by: Egbert Eich <eich@suse.de>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Everywhere the same.
Note that this patch leaves unnecessary braces behind, but the next
patch will kill those all anyway (including the if itself) so I've
figured I can keep the diff a bit smaller.
v2: Rebase on top of the i965g/gm sdvo hpd fix.
Cc: Egbert Eich <eich@suse.de>
Reviewed-by: Egbert Eich <eich@suse.de>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We already have a vfunc for this (and other parts of the hpd storm
handling code already use it).
v2: Rebase on top of the i965g/gm sdvo hpd fix.
Cc: Egbert Eich <eich@suse.de>
Reviewed-by: Egbert Eich <eich@suse.de>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The combination of Paulo's fifo underrun detection code and Egbert's
hpd storm handling code unfortunately made the hpd storm handling code
racy.
To avoid duplicating tricky interrupt locking code over all platforms
start with a bit of refactoring. This patch is the very first step
since in the end the irq storm handling code will handle all hotplug
logic (and so also encapsulate the locking nicely).
v2: Rebase on top of the i965g/gm sdvo hpd fix.
Cc: Egbert Eich <eich@suse.de>
Reviewed-by: Egbert Eich <eich@suse.de>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
By the time we write DEIER in the postinstall hook the interrupt
handler could run any time. And it does modify DEIER to handle
interrupts.
Hence the DEIER read-modify-write cycle for enabling the PCU event
source is racy. Close this race the same way we handle vblank
interrupts: Unconditionally enable the interrupt in the IER register,
but conditionally mask it in IMR. The latter poses no such race since
the interrupt handler does not touch DEIMR.
Also update the comment, the clearing has already happened
unconditionally above.
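Sketched (illustrative, not the verbatim diff):

  /* postinstall: the PCU event source is always enabled in IER ... */
  I915_WRITE(DEIER, display_mask | DE_PCU_EVENT);
  POSTING_READ(DEIER);

  if (IS_IRONLAKE_M(dev)) {
      /* ... but only unmasked in IMR when we actually want it; the
       * irq handler never touches DEIMR, so this cannot race */
      ironlake_enable_display_irq(dev_priv, DE_PCU_EVENT);
  }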
v2: Actually shove the updated comment into the right train^W commit,
as spotted by Paulo.
Cc: Paulo Zanoni <przanoni@gmail.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The haswell unclaimed register handling code forgot to take the
spinlock. Since this is in the context of the non-reentrant interrupt
handler and we only have one interrupt handler it is sufficient to
just grab the spinlock - we do not need to exclude any other
interrupts from running on the same cpu.
To prevent such gaffes in the future sprinkle assert_spin_locked over
these functions. Unfortunately this requires us to hold the spinlock in
the ironlake postinstall hook where it is not strictly required:
Currently that is run in single-threaded context and with userspace
excluded from running concurrent ioctls. Add a comment explaining
this.
v2: ivb_can_enable_err_int also needs to be protected by the spinlock.
To ensure this won't happen in the future again also sprinkle a
spinlock assert in there.
v3: Kill the 2nd call to ivb_can_enable_err_int I've accidentally left
behind, spotted by Paulo.
Cc: Paulo Zanoni <przanoni@gmail.com>
Reviewed-by: Paulo Zanoni <przanoni@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If the current GPU frequency is below RPe, and we're asked to increase
it, just go directly to RPe. This should provide better performance
faster than letting the frequency trickle up in response to the up
threshold interrupts.
For now just do it for VLV, since that matches quite closely how VLV
used to operate when the rps delayed timer kept things at RPe always.
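A sketch of the up-threshold path in the rps work item (field names follow this era's rps struct):

  if (pm_iir & GEN6_PM_RP_UP_THRESHOLD) {
      if (IS_VALLEYVIEW(dev) &&
          dev_priv->rps.cur_delay < dev_priv->rps.rpe_delay)
          new_delay = dev_priv->rps.rpe_delay; /* jump straight to RPe */
      else
          new_delay = dev_priv->rps.cur_delay + 1;
  }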
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Eliminate the weird inverted logic from the rps new_delay comparison.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Bspec seems to be full of lies, at least it disagrees with reality:
Two systems corroborated that SDVO hpd bits are the same as on gen3.
v2: Update comment a bit.
Cc: Arthur Ranyan <arthur.j.runyan@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Tested-by: Chris Wilson <chris@chris-wilson.co.uk>
Reported-and-tested-by: Alex Fiestas <afiestas@kde.org>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=58405
Cc: stable@vger.kernel.org
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The ring names already have "ring" in them.
CC: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For guilty batchbuffer analysis later on when rings are reset,
store the state the ring was in when the hang was declared.
This helps to weed out the waiting rings from the active ones.
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is of no value to the developer reading the report, let alone the
bamboozled user.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
If we detect a ring is in a valid wait for another, just let it be.
Eventually it will either begin to progress again, or the entire system
will come grinding to a halt and then hangcheck will fire as soon as the
deadlock is detected.
This error was foretold by Ben in
commit 05407ff889
Author: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Date: Thu May 30 09:04:29 2013 +0300
drm/i915: detect hang using per ring hangcheck_score
"If ring B is waiting on ring A via semaphore, and ring A is making
progress, albeit slowly - the hangcheck will fire. The check will
determine that A is moving, however ring B will appear hung because
the ACTHD doesn't move. I honestly can't say if that's actually a
realistic problem to hit it probably implies the timeout value is too
low."
v2: Make sure we don't even incur the KICK cost whilst waiting.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=65394
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
After kicking a ring, it should be free to make progress again and so
should not be accused of being stuck until hangcheck fires once more. In
order to catch a denial-of-service within a batch or across multiple
batches, we still do increment the hangcheck score - just not as
severely so that it takes multiple kicks to fail.
This should address part of Ben's justified criticism of
commit 05407ff889
Author: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Date: Thu May 30 09:04:29 2013 +0300
drm/i915: detect hang using per ring hangcheck_score
"There's also another corner case on the kick. If the seqno = 2
(though not stuck), and on the 3rd hangcheck, the ring is stuck, and
we try to kick it... we don't actually try to find out if the kick
helped."
v2: Make sure we catch DoS attempts with batches full of invalid WAITs.
v3: Preserve the ability to detect loops by always charging the ring
if it is busy on the same request.
v4: Make sure we queue another check if on a new batch
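The scoring then looks roughly like this (a sketch; the relative weights are what matters, several kicks are needed before we declare a hang):

  switch (ring->hangcheck.action) {
  case wait:
      break; /* valid wait, leave the ring alone */
  case active:
      ring->hangcheck.score += BUSY; /* still catches endless loops */
      break;
  case kick:
      ring->hangcheck.score += KICK; /* less severe than an outright hang */
      break;
  case hung:
      ring->hangcheck.score += HUNG;
      stuck[i] = true;
      break;
  }
  ...
  if (ring->hangcheck.score > FIRE)
      rings_hung++; /* only now is the GPU declared hung */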
References: https://bugs.freedesktop.org/show_bug.cgi?id=65394
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Ben Widawsky <ben@bwidawsk.net>
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
So we can remove some duplicate code. All the PCHs are very similar
and right now the code is the same. I plan to add more code, which
would otherwise just mean even more duplication.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Rework of per ring hangcheck made this obsolete.
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Keep track of ring seqno progress and if no progress is
detected, declare a hang. Use the actual head (acthd)
to distinguish between a stuck ring and a batchbuffer looping
situation. A stuck ring will be kicked to trigger progress.
This commit adds a hard limit for batchbuffer completion time.
If batchbuffer completion time is more than 4.5 seconds,
the gpu will be declared hung.
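The core of the detection, roughly (a sketch; the per-ring state lives in ring->hangcheck, and the 4.5 second limit corresponds to three hangcheck periods):

  acthd = intel_ring_get_active_head(ring);
  seqno = ring->get_seqno(ring, false);

  if (ring->hangcheck.seqno == seqno && !ring_idle(ring, seqno)) {
      ring->hangcheck.score++; /* no seqno progress since the last check */
      if (ring->hangcheck.acthd == acthd)
          /* head stuck too, so not just a looping batch: kick it */
          kick_ring(ring);
  } else {
      ring->hangcheck.score = 0;
  }

  /* a ring whose score reaches the limit gets the GPU declared hung */
  ring->hangcheck.seqno = seqno;
  ring->hangcheck.acthd = acthd;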
Review comment from Ben which nicely clarifies the semantic change:
"Maybe I'm just stating the functional changes of the patch, but in case
they were unintended here is what I see as potential issues:
1. "If ring B is waiting on ring A via semaphore, and ring A is making
progress, albeit slowly - the hangcheck will fire. The check will
determine that A is moving, however ring B will appear hung because
the ACTHD doesn't move. I honestly can't say if that's actually a
realistic problem to hit it probably implies the timeout value is too
low.
2. "There's also another corner case on the kick. If the seqno = 2
(though not stuck), and on the 3rd hangcheck, the ring is stuck, and
we try to kick it... we don't actually try to find out if the kick
helped"
v2: use acthd to detect stuck ring from loop (Ben Widawsky)
v3: Use acthd to check when ring needs kicking.
Declare hang on third time in order to give time for
kick_ring to take effect.
v4: Update commit msg
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Paste in Ben's review comment.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>