This way only the dynamic WRPLL selection for hdmi ddi mode is
done in intel_ddi_pll_select.
v2: Don't clobber the precomputed values when selecting clocks for
hdmi encoders.
v3 (from Paulo): Rebase on top of the s/IS_HASWELL/HAS_DDI/ patch.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Paulo Zanoni <przanoni@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Don't let it fall into the HAS_PCH_SPLIT() case.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
To make things a bit more manageable, extract a new function for
reading out the common ddi port state. This means a bit of duplication
between the encoders and the core since both look at the same registers,
but it doesn't seem worth making a fuss about.
We can also remove the state readout code in intel_ddi_setup_hw_pll_state.
That code is only called from the hardware takeover and not from the
cross-check code, and only after the crtc state is reconstructed. So we can
rely on an accurate value of crtc->config.ddi_pll_sel already.
Compared to the old code also trust the hw state more and don't
special-case port A - we want to cross-check the actual state, not
bake in our own assumptions about how this is all supposed to be
linked up.
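For illustration, the common port state readout boils down to something
like this (the helper name is made up here; the register macro and the
ddi_pll_sel field are as referenced above):

  static void ddi_read_port_state(struct intel_encoder *encoder,
                                  struct intel_crtc_config *pipe_config)
  {
          struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;
          enum port port = intel_ddi_get_encoder_port(encoder);

          /* Record which clock the port is actually wired to, straight
           * from the hardware, instead of re-deriving it in software. */
          pipe_config->ddi_pll_sel = I915_READ(PORT_CLK_SEL(port));
  }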
v2: Make use of the read-out ddi_pll_sel in intel_ddi_clock_get.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[imre: rebased on patchset version w/o pch/crt/fdi refactoring]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Just a boring sed job as preparation.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
[imre: rebased on patchset version w/o pch/crt/fdi refactoring]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Similar to how the ->crtc_mode_set hook should touch the hardware to
enable anything, the ->crtc_off hook should disable anything in the
hardware. Otherwise runtime pm for dpms will not work.
Currently the only thing left in the haswell_crtc_off hook is
disabling the ddi plls. We can't move the WRPLL enabling out yet
because the current ddi pll sharing code used by the haswell code
doesn't separately track active users and overall users. This must be
fixed by porting it to the generic shared display pll framework, which
is powerful enough.
But the SPLL source is only used by the crt encoder and so can be
moved already. We only need to make sure that the ddi port E is
already off, which hsw_fdi_disable does by calling
intel_ddi_post_disable.
With this the code reorg to shuffle hsw fdi/lpt specific code into a
new hsw-specific crt encoder type is now finally complete.
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[imre: rebased on patchset version w/o pch/crt/fdi refactoring]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The call to intel_ddi_pll_enable in haswell_crtc_mode_set is the only
place that still touches the hardware state from the crtc mode_set
callback on hsw. Since the SPLL isn't ever shared we can easily take
it out into the hsw crt encoder functions.
Temporarily we'll lose a bit of WARN_ON coverage with this, but once
the WRPLLs are switched over that will be restored. For the SPLL
selection add a WARN in the hsw fdi link training code.
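As a hedged sketch (register and flag names as recalled from i915_reg.h,
not necessarily the exact hunk), the encoder-level SPLL enable looks
roughly like:

  static void hsw_crt_pre_enable(struct intel_encoder *encoder)
  {
          struct drm_i915_private *dev_priv = encoder->base.dev->dev_private;

          WARN(I915_READ(SPLL_CTL) & SPLL_PLL_ENABLE,
               "SPLL already enabled\n");

          /* The hsw CRT output always runs off the SPLL. */
          I915_WRITE(SPLL_CTL,
                     SPLL_PLL_ENABLE | SPLL_PLL_FREQ_1350MHz | SPLL_PLL_SSC);
          POSTING_READ(SPLL_CTL);
          udelay(20);
  }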
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
[imre: rebased on patchset version w/o pch/crt/fdi refactoring]
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is needed by an upcoming patch that moves the PCH/CRT PLL disabling
into the post_disable hook, after which we want to keep the modeset
sequence at its current state. At this point this won't have an effect
since the PCH/CRT post_disable hook is atm a NOP.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is needed by an upcoming patch that moves the PCH/CRT PLL enabling
into the pre_enable hook, after which we want to keep the modeset
sequence at its current state. At this point this won't have an effect
since the PCH/CRT pre_enable hook is atm a NOP.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Luckily the bit definitions match, but it's still confusing
to use one when handling the other. So sprinkle some OCD over
the #defines to make them match and use the right version in
each place.
Maybe we should unify these definitions completely, but that
can always be done sometime in the future.
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
SPLL would be a reference clock we could potentially share,
especially if we want to use the SSC mode. But currently we
don't, so let's rip out this complexity for a simpler conversion
to the new display pll framework.
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
All the other checks also check hw state, so checking our software
refcounts for the plls looks a bit odd. Also this will simplify the
conversion over to the shared dpll framework, which itself has massive
amounts of checks to make sure that we never leave a display pll
enabled when we shouldn't.
So after that conversion we should still have good enough coverage of
asserts for entering pc8/runtime pm on hsw/bdw.
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add !mutex_is_locked() checks to intel_pin_and_fence_fb_obj() and
intel_unpin_fb_obj() to help catch failures to grab struct_mutex when
operating on fb objects.
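The check itself is a one-liner, roughly (placement inside the two
functions may differ):

          /* Catch callers that operate on fb objects without holding
           * struct_mutex. */
          WARN_ON(!mutex_is_locked(&dev->struct_mutex));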
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
intel_primary_plane_{setplane,disable} were lacking struct_mutex locking
around their GEM operations.
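Illustratively, the fix wraps the GEM calls in the plane hooks like this
(assuming the intel_pin_and_fence_fb_obj() signature of that era):

          mutex_lock(&dev->struct_mutex);
          ret = intel_pin_and_fence_fb_obj(dev, obj, NULL);
          mutex_unlock(&dev->struct_mutex);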
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reported-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Otherwise we will print some WARNs when we read registers while the
machine is suspended.
Testcase: igt/pm_rpm/debugfs-read
Cc: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On HSW, the D_COMP register can be accessed through the mailbox (read
and write) or through MMIO at an MCHBAR offset (read only). On BDW, the
access should be done through MMIO at another address. So to account
for all these cases, create hsw_read_dcomp() with the correct
implementation for reading, and also fix hsw_write_dcomp() to do the
correct thing on BDW.
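A hedged sketch of the read side (assuming D_COMP_HSW is the MCHBAR
mirror and D_COMP_BDW the dedicated BDW MMIO register mentioned above):

  static uint32_t hsw_read_dcomp(struct drm_i915_private *dev_priv)
  {
          /* HSW can only read D_COMP through the MCHBAR mirror,
           * BDW reads it through its own MMIO register. */
          if (IS_HASWELL(dev_priv->dev))
                  return I915_READ(D_COMP_HSW);
          else
                  return I915_READ(D_COMP_BDW);
  }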
With this patch, we can now get back from the PC8+ state on BDW. We
were previously getting a black screen and lots of dmesg errors.
Please notice that the bug only happens when you actually reach the
PC8+ states, not when you only allow it.
Testcase: igt/pm_rpm/rte
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
That function can be used to write anything on D_COMP, not just
disable it, so print a more appropriate message.
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
CHV hard hangs on reading 0x11100.
References: https://bugs.freedesktop.org/show_bug.cgi?id=80893
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
CHV hard hangs on reading 0x112f4.
References: https://bugs.freedesktop.org/show_bug.cgi?id=80893
Suggested-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
CHV hard hangs on reading these registers. As these have not
been used since cantiga & ilk, remove the debugfs entry.
References: https://bugs.freedesktop.org/show_bug.cgi?id=80893
Suggested-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
CHV hard hangs on reading these registers. As these have not
been used since cantiga & ilk, remove the debugfs entry.
References: https://bugs.freedesktop.org/show_bug.cgi?id=80893
Suggested-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This should hopefully simplify the display code slightly and also
solves at least one mistake in intel_pipe_set_base() where
to_intel_framebuffer(fb)->obj is referenced during local variable
initialization, before 'if (!fb)' gets checked.
Potential uses of this macro were identified via the following
Coccinelle patch:
@@
expression E;
@@
* to_intel_framebuffer(E)->obj
@@
expression E;
identifier I;
@@
I = to_intel_framebuffer(E);
...
* I->obj
v2: Rewrite some NULL tests in terms of the obj rather than the fb.
Also add a WARN() if trying to pageflip with a disabled primary
plane. [Suggested by Chris Wilson]
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Add an intel_fb_obj() macro that returns the GEM object associated with
a DRM framebuffer. This macro is safe to call on NULL framebuffers (a
NULL object pointer will be returned in this case).
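A minimal sketch of such a helper (the real definition lives in the
driver headers):

  #define intel_fb_obj(x) ((x) ? to_intel_framebuffer(x)->obj : NULL)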
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
On g33, the documentation states
"HWS_PGA:
Format = Bits 28:12 of graphics memory address (bits 31:29 MBZ)."
which means that the address of the HWS must be below 256MiB,
which is conveniently the mappable aperture.
This also appears to be true (but not documented as such) for gen4 and
gen5. To generalise we force it into the low mappable region for all
non-LLC platforms. If we locate the HWS at the top of the GTT the
machine will hard hang during boot (fails on pnv, gm45, ilk and byt,
but works on snb, ivb, hsw).
v2: Add comments to explain why we use PIN_MAPPABLE even though we have
no intention of mapping the object. (Ville)
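Sketch of the resulting pinning, assuming the helper and flag names of
that era (PIN_MAPPABLE is just a convenient "keep it low in the GTT"
constraint here):

          unsigned flags = 0;
          int ret;

          /* On non-LLC platforms the status page must live in the low,
           * mappable part of the GTT even though we never map it. */
          if (!HAS_LLC(ring->dev))
                  flags |= PIN_MAPPABLE;

          ret = i915_gem_obj_ggtt_pin(ring->status_page.obj, 4096, flags);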
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With RC6 enabled, BYT has an HW issue in determining the right
Gfx busyness.
WA for Turbo + RC6: Use SW based Gfx busyness detection to decide
on increasing/decreasing the freq. This logic monitors the C0
counters of the render/media power wells over the EI period and takes
the necessary action based on these values.
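Purely as an illustration of that decision loop (every name below is
hypothetical, not taken from the driver):

          /* Hypothetical names; only the shape of the algorithm matters. */
          u32 busy_pct = 100 * (c0_render_now - c0_render_prev) / c0_ticks_per_ei;

          if (busy_pct > up_threshold_pct)
                  new_freq++;             /* ask for a higher frequency */
          else if (busy_pct < down_threshold_pct)
                  new_freq--;             /* ask for a lower frequency */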
v2: Refactor duplicate code. (Ville)
v3: Reformat the comments. (Ville)
v4: Enable required counters and remove unwanted code (Ville)
v5: Added frequency change acceleration support and remove kernel-doc
style comments. (Ville)
v6: Updated comment section and Fix w/a comment. (Ville)
Signed-off-by: Deepak S <deepak.s@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For whatever reason, MI_DISPLAY_FLIP fails to change tiling mode on
Baytrail, so just use CPU driven mmio flips instead.
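A hedged sketch of that kind of selection logic (not the exact upstream
hunk):

  static bool use_mmio_flip(struct intel_engine_cs *ring,
                            struct drm_i915_gem_object *obj)
  {
          if (ring == NULL)
                  return true;
          /* MI_DISPLAY_FLIP can't change the tiling mode on BYT, so
           * fall back to CPU driven mmio flips there. */
          if (IS_VALLEYVIEW(ring->dev))
                  return true;
          return ring != obj->ring;
  }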
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=76176
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We currently see random GPU hangs when using RCS flips with multiple
pipes on Ivybridge. Now that we have mmio flips, we can fairly cheaply
fallback to using CPU driven flips instead.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=77104
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
So that we isolate the legacy ringbuffer submission mechanism, which becomes
a good candidate to be abstracted away. This is prep-work for Execlists (which
will have its own workload submission mechanism).
No functional changes.
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Again, it's low-level enough to simply take a ringbuf and nothing
else.
Trivial change.
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It's simple enough that it doesn't need to know anything about the
engine.
Trivial change.
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
More prep work: with Execlists, we are going to start creating a lot
of extra ringbuffers soon, so these functions are handy.
No functional changes.
v2: rename allocate/destroy_ring_buffer to alloc/destroy_ringbuffer_obj
because the name is more meaningful and to mirror a similar function in
the context world: i915_gem_alloc_context_obj(). Change suggested by Brad
Volkin.
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A bit of background on the context elements.
Cc: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Appease checkpatch.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is an Execlists preparatory patch, since they make context ID become an
overloaded term:
- In the software, it was used to distinguish which context userspace was
trying to use.
- In the BSpec, the term is used to describe the 20-bit field the
hardware uses to discriminate the contexts that are submitted to
the ELSP and to inform the driver about their current status (via Context
Switch Interrupts and Context Status Buffers).
Initially, I tried to make the different meanings converge, but it proved
impossible:
- The software ctx->id is per-filp, while the hardware one needs to be
globally unique.
- Also, we multiplex several backing states objects per intel_context,
and all of them need unique HW IDs.
- I tried adding a per-filp ID and then composing the HW context ID as:
ctx->id + file_priv->id + ring->id, but the fact that the hardware only
uses 20 bits means we have to artificially limit the number of filps or
contexts the userspace can create.
The ctx->user_handle renaming bits are done with this Cocci patch (plus
manual frobbing of the struct declaration):
@@
struct intel_context c;
@@
- (c).id
+ c.user_handle
@@
struct intel_context *c;
@@
- (c)->id
+ c->user_handle
Also, while we are at it, s/DEFAULT_CONTEXT_ID/DEFAULT_CONTEXT_HANDLE and
change the type to unsigned 32 bits.
v2: s/handle/user_handle and change the type to uint32_t as suggested by
Chris Wilson.
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org> (v1)
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We have already mentioned that Logical Ring Contexts have their own kind
of backing objects, but everything will be better explained in the Execlists
series. For now, suffice it to say that the current backing object is only
ever used with the render ring, so we're making this fact more explicit
(which is a good reason on its own).
As for the is_initialized flag, we only use it to signify that the render state
has been initialized (a.k.a. golden context, a.k.a. null context). It doesn't
mean anything for the other engines, so make that distinction obvious.
Done with the following Coccinelle patch (plus manual frobbing of the struct):
@@
struct intel_context c;
@@
- (c).obj
+ c.legacy_hw_ctx.rcs_state
@@
struct intel_context *c;
@@
- (c)->obj
+ c->legacy_hw_ctx.rcs_state
@@
struct intel_context c;
@@
- (c).is_initialized
+ c.legacy_hw_ctx.initialized
@@
struct intel_context *c;
@@
- (c)->is_initialized
+ c->legacy_hw_ctx.initialized
This Execlists prep-work patch has been suggested by Chris Wilson and Daniel
Vetter separately.
Initially, it was two separate patches:
drm/i915: Rename ctx->obj to ctx->rcs_state
drm/i915: Make it obvious that ctx->id is merely a user handle
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: s/id/is_initialized/ to fix the subject and resolve a
conflict in i915_gem_context_reset. Also introduce a new lctx local
variable to avoid overly long lines.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is preparatory work for Execlists: we plan to use it later to
allocate our own context objects (since Logical Ring Contexts do
not have the same kind of backing objects).
No functional changes.
Reviewed-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
To achieve further power savings during system freeze (aka connected
standby, or s0ix) we have to send a PCI_D1 opregion notification. As
the information about the state we're entering (system freeze,
suspend to ram or suspend to disk) is only available through the ACPI
subsystem, make this support depend on the relevant kconfig option.
Things will still work if this option isn't set, albeit with less than
optimal power saving.
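As a hedged sketch, the freeze path can pick the notification target
under that kconfig guard roughly like this (assuming the opregion helper
added in the commit cited below):

          pci_power_t opregion_target_state = PCI_D3cold;

  #if IS_ENABLED(CONFIG_ACPI_SLEEP)
          /* Only ACPI can tell us whether this is freeze (s0ix) rather
           * than suspend to ram/disk. */
          if (acpi_target_system_state() < ACPI_STATE_S3)
                  opregion_target_state = PCI_D1;
  #endif
          intel_opregion_notify_adapter(dev, opregion_target_state);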
This also fixes a compile breakage when the option is not set, introduced
in
commit e5747e3adc
Author: Jesse Barnes <jbarnes@virtuousgeek.org>
Date: Thu Jun 12 08:35:47 2014 -0700
drm/i915: send proper opregion notifications on suspend/resume
Reported-by: Randy Dunlap <rdunlap@infradead.org>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Make the assumption that media workloads are not as latency sensitive
for __wait_seqno, and that upclocking the GPU does not affect the BLT
engine. Under that assumption, we only want to forcibly upclock the GPU
when we are stalling for results from the render pipeline.
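A minimal sketch of that policy in the seqno wait path (not the exact
upstream condition):

          /* Only boost the GPU frequency for stalls on the render ring. */
          if (INTEL_INFO(dev)->gen >= 6 && ring->id == RCS)
                  gen6_rps_boost(dev_priv);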
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Deepak S<deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With the new checks in place, we can see we're doing things backwards,
so fix them up per the spec.
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
As Ville points out, it's possible/probable we don't actually need this.
Potentially, this validates the letter of the spec, and not the spirit.
Ville:
> I discussed this on irc w/ Ben, and I was suggesting we don't need to
> poll. Polling apparently can be used as a workaround for certain
> hardware issues, but it looks like those issues shouldn't affect us,
> for the moment at least. So my suggestion was to try w/o polling
> first (since there could be some power cost to polling) and add the
> poll bit if problems arise.
Rodrigo: Spec suggests this as a W/A for GT3. However semaphores didn't
work in my BDW GT2 in Signal Mode. So poll mode is definitely needed.
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Tested-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Simple debugfs file to display the current state of semaphores. This is
useful if you want to see the state without hanging the GPU.
NOTE: This patch is optional to the series.
NOTE2: Like the GPU error state collection, the reads are currently
incoherent.
v2 (Rodrigo): * Iterate only on active rings.
* s/ring_buffer/engine_cs.
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Since the semaphore information is in an object, just dump it, and let
the user parse it later.
NOTE: The page being used for the semaphores is incoherent with the
CPU. No matter what I do, I cannot figure out a way to read anything but
0s. Note that the semaphore waits are indeed working.
v2: Don't print signal, and wait (they should be the same). Instead,
print sync_seqno (Chris)
v3: Free the semaphore error object (Chris)
v4: Fix semaphore offset calculation during error state collection
(Ville)
v5: VCS2 rebase
Make semaphore object error capture coding style consistent (Ville)
Do the proper math for the signal offset (Ville)
v6: Fix small conflicts on rebase and s/ring_buffer/engine_cs (Rodrigo)
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
IPEHR just carries Dword 0, and on Gen8 the offsets are located
in Dwords 2 and 3 of MI_SEMAPHORE_WAIT.
This implementation was based on Ben's work and on Ville's suggestion to Ben.
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
[danvet: Fixup format string.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Semaphore waits use a new instruction, MI_SEMAPHORE_WAIT. The seqno to
wait on is well defined by the table in the previous patch. There is
nothing else different from previous GENs' semaphore synchronization
code.
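A hedged sketch of the emitted wait (flag names as recalled; "offset"
stands for the GGTT address of the signaller's seqno slot):

          intel_ring_emit(waiter, MI_SEMAPHORE_WAIT |
                                  MI_SEMAPHORE_GLOBAL_GTT |
                                  MI_SEMAPHORE_SAD_GTE_SDD);
          intel_ring_emit(waiter, seqno);
          intel_ring_emit(waiter, lower_32_bits(offset));
          intel_ring_emit(waiter, upper_32_bits(offset));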
v2: Update macros to not require the other ring's ring->id (Chris)
v3: Add missing VCS2 gen8_ring_wait init besides
s/ring_buffer/engine_cs (Rodrigo)
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Semaphore signalling works similarly to previous GENs, with the exception
that the per-ring mailboxes no longer exist. Instead you must define
your own space somewhere in the GTT.
The comments in the code define the layout I've opted for, which should
be fairly future proof. I.e. I tried to define the offsets in abstract terms
(NUM_RINGS, seqno size, etc).
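In the abstract the addressing boils down to something like the
following (names are illustrative, only the formula matters):

  /* One qword of semaphore space per (signaller, waiter) ring pair,
   * all packed into a single 4k GTT object. */
  #define GEN8_SEMAPHORE_SEQNO_SIZE sizeof(uint64_t)
  #define GEN8_SEMAPHORE_OFFSET(from, to) \
          (((from) * I915_NUM_RINGS + (to)) * GEN8_SEMAPHORE_SEQNO_SIZE)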
NOTE: If one wanted to move this to the HWSP they could. I've decided
one 4k object would be easier to deal with, and provide potential wins
with cache locality, but that's all speculative.
v2: Update the macro to not need the other ring's ring->id (Chris)
Update the comment to use the correct formula (Chris)
v3: Move the macros the ringbuffer.h to prevent churn in next patch
(Ville)
v4: Fixed compilation rebase conflict
commit 1ec9e26dda
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Fri Feb 14 14:01:11 2014 +0100
drm/i915: Consolidate binding parameters into flags
v5: VCS2 rebase
Replace hweight_long with hweight32
v6 (Rodrigo): * Add missed VC2 gen8 ring signal init
* fixing conflicst on rebase
* minor fixes on address table
* remove WARN_ON
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
[danvet: s/BUG_ON/WARN_ON/]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With the ring mask we now have an easy way to know the number of rings
in the system, and therefore can accurately predict the number of dwords
to emit for semaphore signalling. This was not possible (easily)
previously.
There should be no functional impact, simply fewer instructions emitted.
While we're here, simply round up to 2 instead of the fancier
rounding we did before, which rounded up per mbox, i.e. to 4. This also
allows us to drop the unnecessary MI_NOOP, so it's not really 4 but 3.
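Sketch of the resulting sizing in the signalling path (MBOX_UPDATE_DWORDS
is assumed here as the per-mailbox cost named above):

  #define MBOX_UPDATE_DWORDS 3

          /* One mailbox update per other ring, rounded up so the total
           * request stays qword aligned. */
          num_dwords += round_up((num_rings - 1) * MBOX_UPDATE_DWORDS, 2);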
v2: Use 3 dwords instead of 4 (Ville)
Do the proper calculation to get the number of dwords to emit (Ville)
Conditionally set .sync_to when semaphores are enabled (Ville)
v3: Rebased on VCS2
Replace hweight_long with hweight32 (Ville)
v4: Pull out the accidentally squashed hunk from the next patch after
rebase (Daniel).
v5: Fix conflict after rebase (Rodrigo)
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> (v1)
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Gen8 has already had some differentiation with how it handles rings.
Semaphores bring yet more differences, and now is as good a time as any
to do the split.
Also, since gen8 doesn't actually use semaphores up until this point,
put the proper "NULL" values in for the mbox info.
v2: v1 had a stale commit message
v3: Move everything in the is_semaphore_enabled() check
v4: VCS2 rebase
Remove double assignment of signal in render ring (Ville)
v5: Adding missed VCS2 signal init on gen8+ (Rodrigo)
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The ring index calculation table was out of date after other rings were
added, although the formula is flexible and scales when adding new rings.
So this patch just updates the comments and adds a brief explanation of
why to use sync_seqno[ring index].
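The formula the comment documents is, in essence (a sketch of the helper
as recalled, not necessarily the exact upstream code):

  static inline int ring_sync_index(struct intel_engine_cs *ring,
                                    struct intel_engine_cs *other)
  {
          /* The signaller's slot in sync_seqno[] is its distance from
           * the waiting ring (wrapping around), minus one. */
          int idx = (other - ring) - 1;

          if (idx < 0)
                  idx += I915_NUM_RINGS;

          return idx;
  }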
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Ben Widawsky <ben@bwidawsk.net>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
It just fixes a typo.
v2: remove the underscore so this matches all the other ring names (Oscar)
Cc: Oscar Mateo <oscar.mateo@intel.com>
Reviewed-by (v1): Ben Widawsky <benjamin.widawsky@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>