This function can now be called with i915 interrupts enabled, so the
corresponding WARN is incorrect; remove it. I think this was spotted by
Paulo during his review, but since I had already removed the same WARN
from intel_suspend_gt_powersave() I missed his point at the time.
Spotted-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
I saw punit timeouts in vlv_set_rps_idle() while running various
subtests of pm_rpm. Increasing the timeout to 100ms got rid of the
issue.
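A minimal sketch of the kind of wait this refers to (register and bit
names as in the existing vlv code; the surrounding logic is elided and
the old, shorter timeout value is not spelled out here):

  /* wait for the Punit to finish any pending frequency request,
   * with the bumped 100ms timeout */
  if (wait_for(!(vlv_punit_read(dev_priv, PUNIT_REG_GPU_FREQ_STS) &
                 GENFREQSTATUS), 100))
      DRM_ERROR("timed out waiting for Punit\n");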
Testcase: igt/pm_rpm
Reference: https://bugs.freedesktop.org/show_bug.cgi?id=82939
Signed-off-by: Imre Deak <imre.deak@intel.com>
Tested-by: Guo Jinxian <jinxianx.guo@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In __gen6_update_ring_freq, use the full range of
possible gpu frequencies from max_freq to min_freq.
The actual gpu frequency could be outside the range
from max_freq_softlimit to min_freq_softlimit due
to power/thermal constraints.
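Illustrative sketch of the intended loop bounds (field names as used by
the rps bookkeeping of that time):

  /* walk the full hw range, not just the user-visible softlimits */
  for (gpu_freq = dev_priv->rps.max_freq;
       gpu_freq >= dev_priv->rps.min_freq;
       gpu_freq--) {
      /* ... program the GPU/ring/IA frequency triplet ... */
  }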
Signed-off-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In gen8_enable_rps, change the initial rps setting
to the min_freq_softlimit (same as gen6_enable_rps).
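Roughly, the initial request becomes the following (the exact helper
used on gen8 is assumed to be the same one the gen6 path uses):

  /* start from the minimum softlimit, as gen6_enable_rps does */
  gen6_set_rps(dev, dev_priv->rps.min_freq_softlimit);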
Signed-off-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Set the min_freq_softlimit to max(RPe, 450MHz).
Setting a floor can ensure a minimum experience
level. The 450MHz value came from a power and
performance study of various types of workloads
(3D, Media, GPGPU, idle, etc).
v2: rebased
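A sketch of the floor, assuming the usual 50MHz units for HSW/BDW RPS
opcodes:

  /* 450MHz floor; RPS opcodes are in 50MHz units on HSW/BDW */
  dev_priv->rps.min_freq_softlimit =
      max_t(int, dev_priv->rps.efficient_freq, 450 / 50);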
Signed-off-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Added gen6_init_rps_frequencies() to initialize
the rps frequency values. This function replaces
parse_rp_state_cap(). In addition to reading RPn,
RP0, and RP1 from the RP_STATE_CAP register, the new
function reads efficient frequency (aka RPe) from
pcode for Haswell and Broadwell and sets the turbo
softlimits. The turbo minimum frequency softlimit
is set to RPe for Haswell and Broadwell and to RPn
otherwise.
For RPe, the efficiency is based on the frequency/power
ratio (MHz/W); this considers GT power, not package power.
The efficient frequency is the highest frequency for which the
frequency/power ratio is within some threshold of the highest
frequency/power ratio. A fixed decrease in frequency results in a
smaller decrease in power at frequencies below RPe than at
frequencies above RPe.
v2: Following suggestions from Chris Wilson and
Daniel Vetter to extend and rename parse_rp_state_cap
and to open-code a poorly named function.
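A condensed sketch of what the new function does (the RP_STATE_CAP bit
layout is as documented; the HSW/BDW pcode mailbox name and the exact
softlimit handling are assumptions, not the literal patch):

  static void gen6_init_rps_frequencies(struct drm_device *dev)
  {
      struct drm_i915_private *dev_priv = dev->dev_private;
      u32 rp_state_cap = I915_READ(GEN6_RP_STATE_CAP);

      dev_priv->rps.rp0_freq = (rp_state_cap >>  0) & 0xff;
      dev_priv->rps.rp1_freq = (rp_state_cap >>  8) & 0xff;
      dev_priv->rps.min_freq = (rp_state_cap >> 16) & 0xff; /* RPn */
      dev_priv->rps.max_freq = dev_priv->rps.rp0_freq;

      /* RPe defaults to RP1, refined via pcode on HSW/BDW */
      dev_priv->rps.efficient_freq = dev_priv->rps.rp1_freq;
      if (IS_HASWELL(dev) || IS_BROADWELL(dev)) {
          u32 ddcc_status = 0;

          /* mailbox command name assumed */
          if (sandybridge_pcode_read(dev_priv,
                  HSW_PCODE_DYNAMIC_DUTY_CYCLE_CONTROL,
                  &ddcc_status) == 0)
              dev_priv->rps.efficient_freq =
                  (ddcc_status >> 8) & 0xff;
      }

      /* turbo softlimits: min is RPe on HSW/BDW, RPn otherwise */
      dev_priv->rps.max_freq_softlimit = dev_priv->rps.max_freq;
      dev_priv->rps.min_freq_softlimit =
          (IS_HASWELL(dev) || IS_BROADWELL(dev)) ?
          dev_priv->rps.efficient_freq : dev_priv->rps.min_freq;
  }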
Signed-off-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
[danvet: Remove unused variables.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
So with all the code movement and extraction in intel_pm.c in -next,
git is hopelessly confused by
commit 2208d655a9
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Fri Nov 14 09:25:29 2014 +0100
drm/i915: drop WaSetupGtModeTdRowDispatch:snb
from -fixes. Worse, even small changes in -next move around the
conflict context, so rerere is equally useless. Let's just backmerge
and be done with it.
Conflicts:
drivers/gpu/drm/i915/i915_drv.c
drivers/gpu/drm/i915/intel_pm.c
Except for git getting lost, there are no tricky conflicts really.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
After the previous patch, RPS disabling no longer depends on the
first level interrupts being disabled, so we can move it earlier
everywhere. Doing so lets us think about the subsequent
uninitialization steps independently of any asynchronous RPS events
that can happen atm. It also makes the system/runtime suspend time RPS
disabling more uniform. Finally, this gets rid of the WARN in
intel_suspend_gt_powersave(), which we can hit if a final RPS work runs
after we have disabled the first level interrupts.
Testcase: igt/pm_rpm
Reference: https://bugs.freedesktop.org/show_bug.cgi?id=82939
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When disabling the RPS interrupts there is a tricky dependency between
the thread disabling the interrupts, the RPS interrupt handler and the
corresponding RPS work. The RPS work can reenable the interrupts, so
there is no straightforward order in the disabling thread to (1) make
sure that any RPS work is flushed and to (2) disable all RPS
interrupts. Currently this is solved by masking the interrupts using two
separate mask registers (first level display IMR and PM IMR) and doing
the disabling when all first level interrupts are disabled.
This works, but the requirement to run with all first level interrupts
disabled is unnecessary, making the suspend / unload time ordering of RPS
disabling wrt. other uninitialization steps difficult and error prone.
Removing this restriction allows us to disable RPS early during suspend
/ unload and forget about it for the rest of the sequence. By adding a
more explicit method for avoiding the above race, it also becomes easier
to prove its correctness. Finally, at the moment we can hit the WARN in
snb_update_pm_irq() when a final RPS work runs with the first level
interrupts already disabled. This won't lead to any problem (due to the
separate interrupt masks), but with the change in this and the next
patch we can get rid of the WARN, while leaving it in place for other
scenarios.
To address the above points, add a new RPS interrupts_enabled flag and
use this during RPS disabling to avoid requeuing the RPS work and
reenabling of the RPS interrupts. Since the interrupt disabling happens
now in intel_suspend_gt_powersave(), we will disable RPS interrupts
explicitly during suspend (and not just through the first level mask),
but there is no problem doing so; it's also more consistent and allows
us to unify more of the RPS disabling during suspend and unload time in
the next patch.
v2/v3:
- rebase on patch "drm/i915: move rps irq disable one level up" in the
patchset
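Sketched, the disable side then looks roughly like this (lock and
helper names as used elsewhere in this series; details are an
illustration, not the literal patch):

  void gen6_disable_rps_interrupts(struct drm_device *dev)
  {
      struct drm_i915_private *dev_priv = dev->dev_private;

      /* any RPS work / irq handler running after this sees the flag
       * cleared and bails out instead of re-enabling the interrupts */
      spin_lock_irq(&dev_priv->irq_lock);
      dev_priv->rps.interrupts_enabled = false;
      spin_unlock_irq(&dev_priv->irq_lock);

      cancel_work_sync(&dev_priv->rps.work);

      spin_lock_irq(&dev_priv->irq_lock);
      gen6_disable_pm_irq(dev_priv, dev_priv->pm_rps_events);
      spin_unlock_irq(&dev_priv->irq_lock);
  }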
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Atm we first enable the RPS interrupts and then clear any pending ones.
This way we could lose an interrupt arriving after we unmasked it. This
may not be a problem, as the caller should handle such a race, but logic
still calls for the opposite order. Also, we can delay enabling the
interrupts until all the RPS initialization is done, using the
following order:
1. disable left-over RPS (earlier via intel_uncore_sanitize)
2. clear any pending RPS interrupts
3. initialize RPS
4. enable RPS interrupts
This also allows us to do steps 2 and 4 the same way for all
platforms, so let's follow this order to simplify things.
Also make sure any queued interrupts are cleared.
v2:
- rebase on the GEN9 patches where we don't support RPS yet, so we
mustn't enable RPS interrupts on it (Paulo)
v3:
- avoid enabling RPS interrupts on GEN>9 too (Paulo)
- clarify the RPS init sequence in the log message (Chris)
- add POSTING_READ to gen6_reset_rps_interrupts() (Paulo)
- WARN if any PM_IIR bits are set in gen6_enable_rps_interrupts()
(Paulo)
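A sketch of steps 2 and 4 above: clear via a double IIR write with a
posting read, then unmask only once the flag is set (names follow the
rest of this series; treat the details as an illustration):

  void gen6_reset_rps_interrupts(struct drm_device *dev)
  {
      struct drm_i915_private *dev_priv = dev->dev_private;
      uint32_t reg = gen6_pm_iir(dev_priv);

      spin_lock_irq(&dev_priv->irq_lock);
      I915_WRITE(reg, dev_priv->pm_rps_events);
      I915_WRITE(reg, dev_priv->pm_rps_events);
      POSTING_READ(reg);
      spin_unlock_irq(&dev_priv->irq_lock);
  }

  void gen6_enable_rps_interrupts(struct drm_device *dev)
  {
      struct drm_i915_private *dev_priv = dev->dev_private;

      spin_lock_irq(&dev_priv->irq_lock);
      WARN_ON(I915_READ(gen6_pm_iir(dev_priv)) &
              dev_priv->pm_rps_events);
      dev_priv->rps.interrupts_enabled = true;
      gen6_enable_pm_irq(dev_priv, dev_priv->pm_rps_events);
      spin_unlock_irq(&dev_priv->irq_lock);
  }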
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We disable the RPS interrupts for all platforms at the same spot, so
move it one level up in the callstack to simplify things.
No functional change.
v2:
- rebase on the GEN9 patches where RPS isn't supported yet, so we don't
need to disable RPS interrupts on it (Paulo)
v3:
- avoid disabling the interrupts on GEN>9 too (Paulo)
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
In sandybridge_pcode_read and sandybridge_pcode_write,
extend the mbox parameter from u8 to u32.
On Haswell and Sandybridge, bits 7:0 encode the mailbox
command and bits 28:8 are used for address control for
specific commands.
Based on suggestion from Ville Syrjälä.
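The resulting prototypes, sketched (the implementations are unchanged
apart from the wider mbox type):

  int sandybridge_pcode_read(struct drm_i915_private *dev_priv,
                             u32 mbox, u32 *val);
  int sandybridge_pcode_write(struct drm_i915_private *dev_priv,
                              u32 mbox, u32 val);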
Signed-off-by: Tom O'Rourke <Tom.O'Rourke@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
According to "Cherryview_GFXclocks_y14w36d1.xlsx" the GPU frequency
divider should be 10 in when the CZ clock is 400 MHz. Change the code
to agree so that we report the correct frequencies.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The divider used in the GPU frequency calculations is compatible between
vlv and chv. vlv just wants doubled values compared to chv.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Always print the final PCBR register value on both vlv and chv, and
also tell us whether the BIOS was a good citizen or not.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Our freq<->opcode conversions assume that GPLL is always used.
Apparently that should always be the case, but let's scream if we
ever encounter something different.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Remove the magic number for the GPLLENABLE bit by adding a name for it.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Even with the rps debug messages significantly reduced by
commit 67956867aa
Author: Ville Syrjälä <ville.syrjala@linux.intel.com>
Date: Tue Sep 2 15:12:17 2014 +0300
drm/i915: Don't spam dmesg with rps messages on vlv/chv
we still get an inordinate amount of spam from this. Just kill the debug
print. If someone wants to observe it they can just use the tracepoint.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This reverts the regressing
commit 6547fbdbff
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Fri Dec 14 23:38:29 2012 +0100
drm/i915: Implement WaSetupGtModeTdRowDispatch
that causes GPU hangs immediately on boot.
Reported-by: Leo Wolf <jclw@ymail.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=79996
Cc: stable@vger.kernel.org (v3.8+)
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
[Jani: amended the commit message slightly.]
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Given the history, there's some chance we'll keep the same WM code for a
bit (previously, we were able to reuse the same WM code from ILK to BDW,
so that sounds like a fair assumption).
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Ville found out that the DATA1 register has existed since SNB, with
some scarce appearances in the specs over time. In his own words:
Also according to Bspec the mailbox data1 register already existed
since snb. The hsw cdclk change sequence also mentions that it should
be set to 0, but eg. the bdw IPS sequence doesn't mention it. I guess
in theory some pcode command might cause it to be clobbered, so I'm
thinking we should just explicitly set it to 0 for all platforms in
the pcode read/write functions
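In the pcode helpers this amounts to something along these lines
(register names as in i915_reg.h; only a sketch of the read path):

  I915_WRITE(GEN6_PCODE_DATA, *val);
  I915_WRITE(GEN6_PCODE_DATA1, 0);
  I915_WRITE(GEN6_PCODE_MAILBOX, GEN6_PCODE_READY | mbox);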
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When reading a CCK register we should obviously read it from CCK, not
the Punit. This problem has been present ever since this piece of code
was introduced in
commit 67c3bf6f55
Author: Deepak S <deepak.s@linux.intel.com>
Date: Thu Jul 10 13:16:24 2014 +0530
drm/i915: populate mem_freq/cz_clock for chv
The problem was raised during review by Mika [1] but somehow slipped
through the cracks, and the patch got applied with the problem unfixed.
[1] http://lists.freedesktop.org/archives/intel-gfx/2014-July/048937.html
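The one-line nature of the fix, sketched (the CZ clock fuse lives
behind the CCK IOSF port):

  val = vlv_cck_read(dev_priv, CCK_FUSE_REG); /* was vlv_punit_read() */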
Cc: Deepak S <deepak.s@linux.intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Deepak S <deepak.s@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The logical place for these functions is in i915_irq.c next to the rest of
PM interrupt handling functions.
No functional change.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The GEN6 and GEN8 versions differ only in the PM IIR and IER register
addresses, and in that on GEN8 we need to keep the
GEN8_PMINTR_REDIRECT_TO_NON_DISP PM interrupt unmasked. Abstract away
these 3 things in the GEN6 versions of the helpers and use them
everywhere.
No functional change.
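For instance, the IIR address can be hidden behind a small helper
along these lines (a sketch; the GEN8 GT register index is an
assumption based on where the PM interrupts live):

  static u32 gen6_pm_iir(struct drm_i915_private *dev_priv)
  {
      return INTEL_INFO(dev_priv->dev)->gen >= 8 ?
             GEN8_GT_IIR(2) : GEN6_PMIIR;
  }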
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The helpers to enable/disable PM IRQs for GEN6 and GEN8 are the same
except for the PM interrupt mask register, so abstract away this
register in the GEN6 versions and use these everywhere.
No functional change.
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
-WaDisableDopClockGating:chv
-WaDisableSamplerPowerBypass:chv
-WaDisableGunitClockGating:chv
-WaDisableFfDopClockGating:chv
-WaDisableDopClockGating:chv
v2: Remove pre-production WA instead of restricting them
based on revision id (Ville)
For: VIZ-4090
Signed-off-by: Arun Siluvery <arun.siluvery@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Configure and enable RC6 for Gen9.
v2: Rebase on top of BDW rc6 support (Damien)
Reviewed-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Zhe Wang <zhe1.wang@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When we write new values for the DDB allocation and WM parameters, we now
need to trigger the double buffer update for the pipe to take the new
configuration into account.
As the DDB is a global resource shared between planes, enabling or
disabling one plane will result in changes for all planes that are
currently in use, thus the need to write PLANE_SURF/CUR_BASE for more
than just the plane we're touching.
v2: Don't wait for pipes that are off
v3: Split the staging results structure to not exceed the 1Kb stack
allocation in skl_update_wm()
v4: Rework and document the algorithm after Ville found that it was all
wrong.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
To correctly flush the new DDB allocation we need to know the pipe
allocation layout inside the DDB, in order to sequence the re-allocation
so that a newly allocated pipe doesn't fetch from a space that was
previously allocated to another pipe.
This patch preserves the per-pipe (start,end) allocation to be used in
the flush.
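The per-pipe bookkeeping kept around for the flush looks roughly like
this (a sketch; the array bounds for the per-pipe planes are an
assumption):

  struct skl_ddb_entry {
      uint16_t start, end; /* in DDB blocks */
  };

  struct skl_ddb_allocation {
      struct skl_ddb_entry pipe[I915_MAX_PIPES];
      struct skl_ddb_entry plane[I915_MAX_PIPES][I915_MAX_PLANES];
      struct skl_ddb_entry cursor[I915_MAX_PIPES];
  };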
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We can reduce the indentation level by continuing early.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The transition WM code was taking a shortcut: the values were copied
from the WM0 ones at compute_wm_results() time. Going forward, we want
to compute them like the other WMs and resolve their final register
values in the same way as well.
This patch does just that and isolates the transition WM compute code in
skl_compute_transition_wm(), while skl_compute_wm_results() takes care of
the register values.
We also take the opportunity to disable the transition WMs for now.
We've noticed underruns and they seem to be the culprit.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The DDB allocation code ended up splitting the compute functions in
two. Bring skl_compute_transition_wm() and skl_compute_linetime_wm()
back together with their little friends.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
To align with the ilk WM code, and because it makes sense to test
against the upper bounds as soon as possible on variables that are wider
than the corresponding register fields, let's move the maximum checks
from skl_compute_wm_results() to skl_compute_plane_wm().
v2: Leave the result values at 0 when overflowing the limits (Ville)
Use 32 bit intermediate variables (Damien)
Instead of using the 16 and 8 bit space we have in the result
structure, use 32 bit local variables until we're sure they fit into
the constraints.
Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
What we're talking about here is the DDB allocation (in blocks). That's
more descriptive than 'max_page_buff_alloc'.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Ville suggested that we should use the same semantics as C arrays to
reduce the number of those pesky +1/-1 in the allocation code.
This patch leaves the debugfs file as is, showing the internal DDB
allocation structure, not the values written in the registers.
v2: Remove the test on ->end in skl_ddb_entry_size() (Ville)
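With [start, end) semantics the size helper needs no +1 (a sketch):

  static inline uint16_t skl_ddb_entry_size(const struct skl_ddb_entry *entry)
  {
      return entry->end - entry->start; /* end is exclusive */
  }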
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This logically belongs to the WM state, so do it there.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We're going to add a new step, so let's not hide the copy of the new WM
state inside an inner function, but make it a 1st level operation in the
WM update.
v2: Split the staging results structure to not exceed the 1Kb stack
allocation in skl_update_wm()
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
According to the updated BSpec, if level 1 or any higher level has a
latency value of 0x00, that level and all higher levels are unused and
the associated watermark registers must not be enabled.
This patch checks for a latency of 0 for levels >= 1 and does not enable
the WM for any level m >= n if level n (n != 0) has a 0us latency.
v2: Satheesh's review comments
- zero-out latency values (for all higher levels if latency of given
level is zero ) in read_wm_latency() function itself
v3: removed redundant check as per Satheesh's observation.
v4: rebase on top before merging (Damien)
v5: Rebase on top of the default value removal (Ville)
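A sketch of the zeroing done in the latency readout (wm[] holds the
per-level latencies in us, max_level the highest supported level):

  for (level = 1; level <= max_level; level++) {
      if (wm[level] == 0) {
          /* this level and all higher ones are unused */
          for (i = level + 1; i <= max_level; i++)
              wm[i] = 0;
          break;
      }
  }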
Reviewed-by: Satheeshakrishna M <satheeshakrishna.m@intel.com> (v3)
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Vandana Kannan <vandana.kannan@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
According to the updated BSpec, the mailbox response data does not
currently account for memory read latency. Add 2 microseconds to the
result for each level.
This patch adds 2us to the level 0 latency in all cases, and to all
other levels (1-7) only if latency[level] > 0.
v2: Slightly rework the patch and add a big comment (Damien)
v3: Rebase on top of the renames of the memory latency defines
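The adjustment itself is a short loop of this shape (wm[] as in the
latency readout above; a sketch only):

  /* pcode doesn't account for memory read latency */
  wm[0] += 2;
  for (level = 1; level <= max_level; level++)
      if (wm[level] != 0)
          wm[level] += 2;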
Signed-off-by: Vandana Kannan <vandana.kannan@intel.com> (v1)
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> (v2)
Reviewed-by: M, Satheeshakrishna <satheeshakrishna.m@intel.com> (v1)
Cc: Lespiau, Damien <damien.lespiau@intel.com>
Cc: M, Satheeshakrishna <satheeshakrishna.m@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch provides the implementation for reading the pipe wm HW
state.
v2: Incorporated Damien's review comments and also made modifications
to incorporate the plane/cursor split.
v3: No need to indent a line that fits in 80 chars
Return early instead of indenting the remaining of a function
(Damien)
v4: Rebase on top of nightly (minor conflict in intel_drv.h)
v5: Rebase on top of nightly (minor conflict in intel_drv.h)
v6: Rebase on top of nightly (minor conflict in intel_drv.h)
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Pradeep Bhat <pradeep.bhat@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
v2: Adapt to the planes/cursor split
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
v2: Fix the 3rd plane/cursor logic (Pradeep Bhat)
v3: Fix off-by-one error in the DDB allocation code
v4: Rebase on top of the skl_pipe_pixel_rate() argument change
v5: Replace the available/start/end output parameters of
skl_ddb_get_pipe_allocation_limits() by a single ddb entry; constify
a few arguments
Make nth_active_pipe 0 indexed
Use sizeof(variable) instead of sizeof(type)
(Ville)
v6: Use the for_each_crtc() macro instead of list_for_each_entry()
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch implements the watermark algorithm and its necessary
functions. Two function pointers, skl_update_wm and
skl_update_sprite_wm, are provided. skl_update_wm updates
the watermarks for the crtc provided as an argument, then
checks for changes in the DDB allocation for the other active pipes and
recomputes the watermarks for those pipes and planes as well.
Finally it does the register programming for all dirty pipes.
The watermark double buffer registers will have to be triggered
once the plane configurations are done by the caller.
v2: fixed the divide-by-0 error in the results computation func.
Also reworked the PLANE_WM register values computation func to
make it more compact. Incorporated all other review comments
from Damien.
v3: Changed the skl_compute_plane_wm function to now return success
or failure. Also the result blocks and lines are computed here
instead of in skl_compute_wm_results function.
v4: Adjust skl_ddb_alloc_changed() to the new planes/cursor split
(Damien)
v5: Reworked the affected functions to implement new plane/cursor
split.
v6: Rework the logic that triggers the DDB allocation and WM computation
of skl_update_other_pipe_wm() to not depend on non-computed DDB
values.
Always give a valid cursor_width (at boot it's 0) to keep the
invariant that we consider the cursor plane always enabled.
Otherwise we end up dividing by 0 in skl_compute_plane_wm()
(Damien Lespiau)
v7: Spell out allocation
skl_ddb_ functions should have the ddb as first argument
Make the skl_ddb_alloc_changed() parameters const
(Damien)
v8: Rebase on top of the crtc->primary changes
v9: Split the staging results structure to not exceed the 1Kb stack
allocation in skl_update_wm()
v10: Make skl_pipe_pixel_rate() take a pointer to the pipe config
Add a comment about overflow considerations for skl_wm_method1()
Various additions of const
Various use of sizeof(variable) instead of sizeof(type)
Various moves of variable definitions to a narrower scope
Zero initialize some stack allocated structures to make sure we
don't have garbage in case we don't write all the values
(Ville)
v11: Remove unnecessary default number of blocks/lines when the plane
is disabled (Ville)
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Pradeep Bhat <pradeep.bhat@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch defines the structures needed for computation of
watermarks of pipes and planes for SKL.
v2: Incorporated Damien's review comments and removed unused fields
in structs for future features like rotation, drrs and scaling.
The skl_wm_values struct is now made more generic across planes
and cursor planes for all pipes.
v3: implemented the plane/cursor split.
v4: Change the wm union back to a structure (Ville, Daniel)
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Pradeep Bhat <pradeep.bhat@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch reads the memory latency values for all the 8 levels for
SKL. These values are needed for the Watermark computation.
v2: Incorporated the review comments from Damien on register
indentation.
v3: Updated the code to use the sandybridge_pcode_read for reading
memory latencies for GEN9.
v4: Don't put gen 9 in the middle of an ordered list of ifs
(Damien)
v5: take the rps.hw_lock around sandybridge_pcode_read() (Damien)
v6: Use gen >= 9 in the pcode_read() function for data1.
Move the defines near the gen6 ones and prefix them with PCODE.
Remove unused timeout define (the pcode_read() code has a larger
timeout already).
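A sketch of the first of the two reads (the mailbox define and the bit
layout are assumptions based on the description above; each level is
assumed to occupy 8 bits of the returned data):

  u32 val = 0; /* data0 selects the first set of levels */
  int ret;

  mutex_lock(&dev_priv->rps.hw_lock);
  ret = sandybridge_pcode_read(dev_priv, GEN9_PCODE_READ_MEM_LATENCY,
                               &val);
  mutex_unlock(&dev_priv->rps.hw_lock);
  if (ret == 0) {
      wm[0] = val & 0xff;          /* level 0 */
      wm[1] = (val >>  8) & 0xff;  /* level 1 */
      wm[2] = (val >> 16) & 0xff;  /* level 2 */
      wm[3] = (val >> 24) & 0xff;  /* level 3 */
  }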
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Pradeep Bhat <pradeep.bhat@intel.com>
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The current chv spec tells us we can only use either 16 or 32 bits as
the precision. Although in the past VLV went from 16/32 to 32/64 and the
spec might not be updated, these precision values bring stability and
fix some issues Wayne was facing.
Cc: Wayne Boyer <wayne.boyer@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Tested-by: Wayne Boyer <wayne.boyer@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: Sprinkle const as requested by Ville.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>