Use the CCK register instead of the pipe configuration register to check
whether the DSI PLL has locked.
v2: dpio_lock is now released if the DSI PLL fails to lock
Signed-off-by: Gaurav K Singh <gaurav.k.singh@intel.com>
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
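As a rough illustration of the check (a userspace sketch, not the
driver's actual CCK interface; the register model, lock bit and retry
count are made up):

    #include <stdbool.h>
    #include <stdio.h>

    #define DSI_PLL_LOCK (1u << 0)

    static unsigned int cck_reads;

    /* stand-in for reading the clock control (cck) register */
    static unsigned int cck_read(void)
    {
        /* pretend the PLL locks after a few reads */
        return ++cck_reads >= 3 ? DSI_PLL_LOCK : 0;
    }

    /* poll the lock bit a bounded number of times */
    static bool wait_for_dsi_pll_lock(int retries)
    {
        while (retries--) {
            if (cck_read() & DSI_PLL_LOCK)
                return true;
            /* a real driver would delay/sleep between reads */
        }
        return false;
    }

    int main(void)
    {
        printf("DSI PLL %s\n",
               wait_for_dsi_pll_lock(20) ? "locked" : "failed to lock");
        return 0;
    }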
For dual link MIPI panels, the DSI PLL clock needs to be enabled for both
DSI0 and DSI1.
v2: Address review comments by Jani
- Added a wait for the PLL to lock.
v3: Split the CCK-based PLL lock check into a separate patch
Signed-off-by: Gaurav K Singh <gaurav.k.singh@intel.com>
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
For dual link MIPI panels, the SHUTDOWN packet needs to be sent to both
ports A & C during the MIPI encoder disabling sequence. Similarly, the
TURN ON packet needs to be sent to both ports during the MIPI encoder
enabling sequence.
v2: Address review comments by Jani
- Used a for loop instead of do-while loop.
v3: Used for_each_dsi_port macro instead of for loop
Signed-off-by: Gaurav K Singh <gaurav.k.singh@intel.com>
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
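A minimal sketch of the port iteration this series converges on (the
enum, mask and macro below are simplified stand-ins for the driver's
for_each_dsi_port machinery):

    #include <stdio.h>

    enum port { PORT_A, PORT_B, PORT_C, I915_MAX_PORTS };

    /* iterate only the ports present in the bitmask */
    #define for_each_port_masked(port, ports_mask)               \
        for ((port) = PORT_A; (port) < I915_MAX_PORTS; (port)++) \
            if ((ports_mask) & (1 << (port)))

    int main(void)
    {
        unsigned int ports = (1 << PORT_A) | (1 << PORT_C); /* dual link */
        enum port port;

        for_each_port_masked(port, ports)
            printf("send SHUTDOWN / TURN ON on port %c\n", 'A' + port);
        return 0;
    }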
For dual link MIPI panels, each port needs half of the pixel clock. If
the panel requires pixel overlap, the pixel clock is increased to account
for the extra pixels.
v2: Address review comments by Jani
- Removed the bit mask used for ->dual_link
- Used DSI instead of MIPI in the #define names
v3: Added the VLV_DISPLAY_BASE to VLV_CHICKEN_3 register
Signed-off-by: Gaurav K Singh <gaurav.k.singh@intel.com>
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
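A hedged sketch of the clock bookkeeping (the halving matches the text
above; the overlap term, overlap pixels per scanline accumulated over
vtotal lines at the refresh rate, is an assumption for illustration, not
the exact hardware formula):

    #include <stdio.h>

    static unsigned long dual_link_port_pclk_khz(unsigned long pclk_khz,
                                                 unsigned int overlap_px,
                                                 unsigned int vtotal,
                                                 unsigned int refresh_hz)
    {
        /* each port drives half the pixels */
        unsigned long per_port = pclk_khz / 2;

        /* assumed overlap term: extra pixels/s converted to kHz */
        per_port += ((unsigned long)overlap_px * vtotal * refresh_hz) / 1000;
        return per_port;
    }

    int main(void)
    {
        /* e.g. a 148500 kHz mode with 8 overlapping pixels per line */
        printf("%lu kHz per port\n",
               dual_link_port_pclk_khz(148500, 8, 1125, 60));
        return 0;
    }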
For dual link MIPI panels, both port A and port C should be enabled
during the MIPI encoder enabling sequence. Similarly, during the
disabling sequence, both ports need to be disabled.
v2: Used for_each_dsi_port macro instead of for loop
v3: Used intel_dsi->ports instead of dual_link var for dual link configuration check
v4: Mask the required MIPI port bits before writing the new values
Signed-off-by: Gaurav K Singh <gaurav.k.singh@intel.com>
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
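The v4 note describes the classic read-modify-write pattern; a minimal
sketch (the register variable and mask are illustrative stand-ins for
the driver's I915_READ/I915_WRITE and MIPI port defines):

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t mipi_port_ctrl;        /* stands in for the mmio reg */

    #define PORT_ENABLE_MASK (0x3u << 0)   /* hypothetical field */

    static void set_port_enable_bits(uint32_t bits)
    {
        uint32_t val = mipi_port_ctrl;     /* read */
        val &= ~PORT_ENABLE_MASK;          /* mask the required bits */
        val |= bits & PORT_ENABLE_MASK;    /* set the proper values */
        mipi_port_ctrl = val;              /* write back */
    }

    int main(void)
    {
        mipi_port_ctrl = 0xdeadbee0u;
        set_port_enable_bits(0x1);
        printf("reg = 0x%08x\n", (unsigned int)mipi_port_ctrl);
        return 0;
    }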
MI_STORE_DWORD_IMM length has been the same ever since gen4. Rename
the define to avoid potential confusion if someone tries to use this
on pre-gen8.
Also correct the comment on the MI_MEM_VIRTUAL bit. It's present only on
945, g33 and 965.
Cc: Oscar Mateo <oscar.mateo@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: Add USE_GGTT define for g4x+ too.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Release struct_mutex if init_rings() fails.
This is a regression introduced in
commit 35a57ffbb1
Author: Daniel Vetter <daniel.vetter@ffwll.ch>
Date: Thu Nov 20 00:33:07 2014 +0100
drm/i915: Only init engines once
Reported-by: Wei Yongjun <yongjun_wei@trendmicro.com.cn>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
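The shape of the fix, as a userspace sketch with pthreads standing in
for the kernel mutex API (function names are illustrative): every early
return taken after acquiring struct_mutex must release it first.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t struct_mutex = PTHREAD_MUTEX_INITIALIZER;

    static int init_rings(void)
    {
        return -1;                         /* pretend engine init failed */
    }

    static int gem_init(void)
    {
        int ret;

        pthread_mutex_lock(&struct_mutex);
        ret = init_rings();
        if (ret) {
            /* the regression: this unlock was missing */
            pthread_mutex_unlock(&struct_mutex);
            return ret;
        }
        /* ... rest of init ... */
        pthread_mutex_unlock(&struct_mutex);
        return 0;
    }

    int main(void)
    {
        printf("gem_init: %d\n", gem_init());
        return 0;
    }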
This patch is in preparation for DSI dual link panels. For dual link
panels, a few packets need to be sent to port A, port C, or both. Based
on the port number from MIPI sequence block #53, these sequences need to
be sent to the appropriate port(s).
v2: Addressed review comments by Jani
- port variables named properly
Signed-off-by: Gaurav K Singh <gaurav.k.singh@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This patch is in preparation for the DSI dual link
port enable and disable related changes.
Signed-off-by: Gaurav K Singh <gaurav.k.singh@intel.com>
Signed-off-by: Shobhit Kumar <shobhit.kumar@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
When playing around with debugfs and a HSW machine I noticed that we
were displaying some garbled value in i915_ddb_info. This debugfs file
is only meaningful for gen9+, so don't display anything on earlier
platforms.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Multiple GGTT VMAs per object will be introduced in the near future, which
will make it impossible to guarantee that the normal GGTT view is at the
head of the list. The purpose of this patch is to break that assumption
straight away, so any potential hidden assumptions in the code base can be
bisected to this simple patch.
For: VIZ-4544
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Suggested-by: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The hardware team updated the recommended translation values for DP/eDP 1.3.
This should help with some stability and HBR2 issues.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Reviewed-by: Satheeshakrishna M <satheeshakrishna.m@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We need to do that every time we resume the rings, not just at load.
I overlooked this in my untangling of the ring init code.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
A previous commit introduced engine init changes:
commit 372ee59699d9 ("drm/i915: Only init engines once")
This broke execlists, as intel_lr_context_render_state_init was trying to
emit commands to the RCS for the default context before ring->init_hw was
called.
Made a new gen8_init_rcs_context function and assigned it to the render
ring's init_context. Moved the call to intel_logical_ring_workarounds_emit
into gen8_init_rcs_context to maintain previous functionality.
Moved call to render_state_init from lr_context_deferred_create into
gen8_init_rcs_context, and modified deferred_create to call ring->init_context
for non-default contexts.
Modified i915_gem_context_enable to call ring->init_context for the default
context.
So init_context will now always be called when the hw is ready - in
i915_gem_context_enable for the default context and in lr_context_deferred_create
for other contexts.
Signed-off-by: Thomas Daniel <thomas.daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
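A simplified sketch of the resulting hook arrangement (types and names
are stand-ins for the driver's structures): each ring carries an
init_context() callback that is only invoked once the hardware is up,
from context_enable() for the default context and from the deferred
context creation path for all others.

    #include <stdio.h>

    struct ring;
    struct context { int is_default; };

    struct ring {
        const char *name;
        int (*init_context)(struct ring *ring, struct context *ctx);
    };

    static int gen8_init_rcs_context(struct ring *ring, struct context *ctx)
    {
        /* emit workarounds and render state for this context */
        printf("%s: init context (default=%d)\n", ring->name,
               ctx->is_default);
        return 0;
    }

    static int context_enable(struct ring *ring, struct context *dfl)
    {
        /* hw is ready: safe to emit commands for the default ctx now */
        return ring->init_context ? ring->init_context(ring, dfl) : 0;
    }

    int main(void)
    {
        struct ring rcs = { "render", gen8_init_rcs_context };
        struct context dfl = { 1 };

        return context_enable(&rcs, &dfl);
    }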
The conversion table can be replaced with a simple enough function.
    text    data   bss  dec     hex    filename
    839688  10987  24   850699  cfb0b  drivers/gpu/drm/i915/i915.ko (before)
    839224  10987  24   850235  cf93b  drivers/gpu/drm/i915/i915.ko (after)
Result is 464 saved bytes (.05525%).
v2: - no run on sentences from subject (Chris, Jani)
- be verbose about the savings (Chris, Daniel)
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com> (v1)
Signed-off-by: Mika Kuoppala <mika.kuoppala@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
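A generic illustration of the technique (the mapping below is made up;
the driver's actual table and replacement function differ): when a
lookup table encodes a regular progression, a small function computes
the same values and the table can be dropped.

    #include <stdio.h>

    static unsigned int table_lookup(unsigned int idx)
    {
        static const unsigned int table[8] = {
            100, 125, 150, 175, 200, 225, 250, 275
        };
        return table[idx & 7];
    }

    /* same mapping, no table */
    static unsigned int computed(unsigned int idx)
    {
        return 100 + 25 * (idx & 7);
    }

    int main(void)
    {
        for (unsigned int i = 0; i < 8; i++)
            printf("%u: %u == %u\n", i, table_lookup(i), computed(i));
        return 0;
    }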
Now that sanity prevails and we have the clean split between software
init and starting the engines, we can drop all the "have we allocated
this struct already?" nonsense.
Execlist code could benefit quite a bit more still, but that's for
another patch.
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
The same logic can be implemented without it, and it even saves a line
of code.
Signed-off-by: Ander Conselvan de Oliveira <ander.conselvan.de.oliveira@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
We can do this.
And now there's finally the clean split between software setup and
hardware setup I kinda wanted since multi-ring support was merged
aeons ago. It only took almost 5 years.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With this all the ->init_hw hooks really only set up hw state needed
to start the ring, all the software state setup and memory/buffer
allocations happen beforehand.
v2: We need to call intel_init_pipe_control after the ring init since
otherwise engine->dev is NULL and it falls over. That now happens after
the hw ring is enabled, but a) we'll be fine as long as no one submits a
batch and b) this will change soon.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
This is (mostly; some exceptions still need fixing) the hw setup
function which starts the ring, not the function which allocates all
the resources.
Make this clear by giving it a better name.
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Reviewed-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There are numerous places in the code where the driver's idea of
how much space is left in a ring is updated using the driver's
latest notions of the positions of 'head' and 'tail' for the ring.
Among them are some that update one or both of these values before
(re)doing the calculation. In particular, there are four different
places in the code where 'last_retired_head' is copied to 'head'
and then set to -1; and two of these do not have a guard to check
that it has actually been updated since last time it was consumed,
leaving the possibility that the dummy -1 can be transferred from
'last_retired_head' to 'head', causing the space calculation to
produce 'impossible' results (previously seen on Android/VLV).
This code therefore consolidates all the calculation and updating of
these values, such that there is only one place where the ring space
is updated, and it ALWAYS uses (and consumes) 'last_retired_head' if
(and ONLY if) it has been updated since the last call.
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
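A sketch of the consolidated update (field names follow the commit text;
the struct is a simplified stand-in, and ring_space() here anticipates
the calculation discussed in the next patch): one helper owns both the
one-shot consumption of last_retired_head and the space recalculation,
so the -1 sentinel can never leak into 'head'.

    #include <stdio.h>

    struct ring {
        int head, tail, size, reserved;
        int space;
        int last_retired_head;  /* -1 == not updated since last consume */
    };

    static int ring_space(const struct ring *r)
    {
        int space = r->head - r->tail;

        if (space <= 0)
            space += r->size;
        return space - r->reserved;
    }

    static void intel_ring_update_space(struct ring *r)
    {
        if (r->last_retired_head != -1) {
            r->head = r->last_retired_head; /* consume exactly once */
            r->last_retired_head = -1;
        }
        r->space = ring_space(r);
    }

    int main(void)
    {
        struct ring r = { .head = 0, .tail = 256, .size = 4096,
                          .reserved = 128, .last_retired_head = 512 };

        intel_ring_update_space(&r);
        printf("head=%d space=%d\n", r.head, r.space); /* head=512 space=128 */
        return 0;
    }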
The used space in a ring is given by the cyclic distance from the
consumer (HEAD) to the producer (TAIL), i.e. ((tail-head) MOD size);
conversely, the available space in a ring is the cyclic distance
from the producer to the consumer, MINUS the amount reserved for a
"gap" that is supposed to guarantee that the producer never catches
up with or overruns the consumer. Note that some GEN h/w requires that
TAIL never approach within one cacheline of HEAD, so the gap is usually
set to twice the cacheline size to ensure this.
While the existing code gives the correct answer for correct inputs,
if the producer HAS overrun into the reserved space, the result can
be a value larger than the maximum valid value (size-reserved). We
can improve this by reorganising the calculation, so that in the
event of overrun the result will be negative rather than over-large.
This means that the commonly-used test (available >= required)
will then reject further writes into the ring after an overrun,
giving some chance that we can recover from or at least diagnose
the original problem; whereas allowing more writes would likely both
confuse the h/w and destroy the evidence of what went wrong.
Signed-off-by: Dave Gordon <david.s.gordon@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
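The reorganised calculation in isolation (a standalone sketch of the
logic described above, not the driver's exact function): compute the
cyclic distance from producer to consumer first, then subtract the
reserved gap, so an overrun yields a negative result and the usual
"available >= required" test rejects further writes.

    #include <stdio.h>

    static int ring_space(int head, int tail, int size, int reserved)
    {
        int space = head - tail;    /* cyclic distance tail -> head */

        if (space <= 0)
            space += size;
        return space - reserved;    /* negative after an overrun */
    }

    int main(void)
    {
        /* healthy: tail is well clear of the reserved gap */
        printf("normal:  %d\n", ring_space(0, 1024, 4096, 128));   /* 2944 */
        /* overrun: tail has crept to within 64 bytes of head */
        printf("overrun: %d\n", ring_space(1024, 960, 4096, 128)); /* -64 */
        return 0;
    }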
Later on this can include multiple ports (e.g. (1 << PORT_A) | (1 <<
PORT_C)) to describe dual link DSI.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Gaurav K Singh <gaurav.k.singh@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
MIPI DSI works on ports A and C, which map to pipes A and B,
respectively. Things are going to get more complicated with the
introduction of dual link DSI support, so clean up the register defines
and code to match reality.
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Gaurav K Singh <gaurav.k.singh@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Clear the video overlay state on GPU reset. Any pending overlay request
in the ring has been nuked, and the display itself gets reset. So we
pretty much lose all state here. Adjust the software state to match so
that the next "putimage" will restore things to working order.
v2: Add a locking check into intel_overlay_release_old_vid() (Daniel)
Cc: Daniel Vetter <daniel@ffwll.ch>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
[danvet: s/0/NULL/ to appease sparse, reported by 0-day tester.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the trace_irq code to use requests instead of seqnos. This includes
reference counting the request object to ensure it sticks around when required.
Note that getting access to the reference counting functions means moving the
inline i915_trace_irq_get() function from intel_ringbuffer.h to i915_drv.h.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Resolve conflict due to shuffled merge order.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Similar to the patch from John which removed obj->ring.
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
The ring member of the object structure was always updated with the
last_read_seqno member. Thus with the conversion to last_read_req, obj->ring is
now a direct copy of obj->last_read_req->ring. This makes it somewhat redundant
and potentially misleading (especially as there was no comment to explain its
purpose).
This checkin removes the redundant field. Many uses were simply testing for
non-null to see if the object is active on the GPU. Some of these have been
converted to check 'obj->active' instead. Others (where the last_read_req is
about to be used anyway) have been changed to check obj->last_read_req. The rest
simply pull the ring out from the request structure and proceed as before.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Almost everywhere that called i915_seqno_passed() was really asking 'has the
given seqno popped out of the hardware yet?'. Thus it had to query the current
hardware seqno and then do a signed delta comparison (which copes with wrapping
around zero but not with seqno values more than 2GB apart, although the latter
is unlikely!).
Now that the majority of seqno instances have been replaced with request
structures, it is possible to convert this test to be request based as well.
There is now a 'i915_gem_request_completed()' function which takes a request and
returns true or false as appropriate. Note that this currently just wraps up the
original _passed() test but a later patch in the series will reduce this to
simply returning a cached internal value, i.e.:
    _completed(req) { return req->completed; }
This checkin converts almost all _seqno_passed() calls. The only one left is in
the semaphore code which still requires seqnos not request structures.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Drop hunk touching the trace_irq code since I've dropped the
patch which converts that, and resolve resulting conflict.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
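A sketch of the request-based test (the struct layout and the hw seqno
read are simplified stand-ins; the signed-delta comparison is the one
the text above describes): today it wraps the seqno check, later it can
return a cached flag.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    struct ring { uint32_t hw_seqno; };
    struct request { struct ring *ring; uint32_t seqno; };

    /* signed delta copes with wraparound, not with >2GB separations */
    static bool i915_seqno_passed(uint32_t seq1, uint32_t seq2)
    {
        return (int32_t)(seq1 - seq2) >= 0;
    }

    static bool i915_gem_request_completed(const struct request *req)
    {
        return i915_seqno_passed(req->ring->hw_seqno, req->seqno);
    }

    int main(void)
    {
        struct ring ring = { .hw_seqno = 100 };
        struct request req = { .ring = &ring, .seqno = 99 };

        printf("completed: %d\n", i915_gem_request_completed(&req));
        return 0;
    }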
It makes a lot more sense (and makes future seqno -> request conversion patches
simpler) to fill in the 'ring' field of the request structure at the point of
creation rather than submission. Given that the request structure is assigned by
ring specific code and thus is locked to a ring from the start, there really is
no reason to defer this assignment.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
More seqno value to request structure conversions. Note, this change temporarily
moves the 'get_seqno()' call inside ring_idle() but this will disappear again in
a later patch when i915_seqno_passed() itself is converted.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
All the code above is now using requests not seqnos so it is possible to convert
the trace functions across. Note that rather than get into problematic reference
counting issues, the trace code only saves the seqno and ring values from the
request structure not the structure pointer itself.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Converted the flip_queued_seqno value to be a request structure as part of the
on going seqno to request changes. This includes reference counting the request
being saved away to ensure it can not be retired and freed while the flip code
is still waiting on it.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Again get rid of the _irq request unref by simply moving that
into the unpin worker. Doesn't matter when we hang onto the request
for a bit longer, and in the unpin worker we already grab the
dev->struct_mutex anyway.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
There is no longer any need to retrieve a seqno value from an i915_add_request()
call. The calling code already knows which request structure is being processed
(it can only be ring->OLR). And as the request itself is now used in preference
to the basic seqno value, the latter is now redundant in this situation.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Now that all code above is using request structures instead of seqno values, it
is possible to convert __wait_seqno() itself. Internally, it is still calling
i915_seqno_passed(), this will be updated later in the series. This step is just
changing the parameter list and function name.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Converted the mmio_flip 'seqno' value to be a request structure as part of the
on going seqno to request changes. This includes reference counting the request
being saved away to ensure it can not be retired and freed while the flip code
is still waiting on it.
v2: Used the IRQ friendly request dereference call in the notify handler as that
code is called asynchronously without holding any useful mutex locks.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Drop the _irq variant and use the normal request unref,
wrapped in dev->struct_mutex per the discussion on the m-l.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
With refcounting it looks like you can just drop that refcount, but
that's not really the case. So make sure no one forgets.
Motivated by the unlocked call in the mmio flip code.
Cc: John Harrison <John.C.Harrison@Intel.com>
Cc: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
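A userspace approximation of such a check (pthreads stand in for the
kernel mutex, trylock approximates mutex_is_locked(), and the names are
illustrative): the final unreference frees the request, so dropping a
reference without the lock held deserves a warning.

    #include <pthread.h>
    #include <stdio.h>

    struct request { int refcount; };

    static pthread_mutex_t struct_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void request_unreference(struct request *req)
    {
        /* approximate "is the lock held?": trylock succeeding means no */
        if (pthread_mutex_trylock(&struct_mutex) == 0) {
            pthread_mutex_unlock(&struct_mutex);
            fprintf(stderr, "WARN: struct_mutex not held\n");
        }
        if (--req->refcount == 0)
            ; /* free the request */
    }

    int main(void)
    {
        struct request req = { .refcount = 1 };

        pthread_mutex_lock(&struct_mutex);
        request_unreference(&req);      /* correct: lock held */
        pthread_mutex_unlock(&struct_mutex);
        return 0;
    }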
Updated i915_wait_seqno() to take a request structure instead of a seqno value
and renamed it accordingly. Internally, it just pulls the seqno out of the
request and calls on to __wait_seqno() as before. However, all the code further
up the stack is now simplified as it can just pass the request object straight
through without having to peek inside.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Squash in hunk from an earlier patch which was rebased
wrongly.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Converted 'last_flip_req' to be an actual request rather than a seqno value as
part of the on going seqno to request changes. This includes reference counting
the request being saved away to ensure it can not be retired and freed while the
overlay code is still waiting on it.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Updated the _check_olr() function to actually take a request object and compare
it to the OLR rather than extracting seqnos and comparing those.
Note that there is one use case where the request object being processed is no
longer available at that point in the call stack. Hence a temporary copy of the
original function is still present (but called _check_ols() instead). This will
be removed in a subsequent patch.
Also, downgraded a BUG_ON to a WARN_ON as apparently the former is frowned upon
for shipping code.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The OLS value is now obsolete. Exactly the same value is guaranteed to be always
available as PLR->seqno. Thus it is safe to remove the OLS completely. And also
to rename the PLR to OLR to keep the 'outstanding lazy ...' naming convention
valid.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Added reference counting of the request structure around __wait_seqno() calls.
This is a precursor to updating the wait code itself to take the request rather
than a seqno. At that point, it would be a Bad Idea for a request object to be
retired and freed while the wait code is still using it.
v3:
Note that even though the mutex lock is held during a call to i915_wait_seqno(),
it is still necessary to explicitly bump the reference count. It appears that
the shrinker can asynchronously retire items even though the mutex is locked.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
[danvet: Remove wrongly squashed hunk which breaks the build.]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Convert the throttle code to use the request structure rather than extracting a
ring/seqno pair from it and using those. This is in preparation for
__wait_seqno() becoming __wait_request().
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
The object structure contains the last read, write and fenced seqno values for
use in synchronisation operations. These have now been replaced with their
request structure counterparts.
Note that to ensure that objects do not end up with dangling pointers, the
assignments of last_*_req include reference count updates. Thus a request cannot
be freed if an object is still hanging on to it for any reason.
v2: Corrected 'last_rendering_' to 'last_read_' in a number of comments that did
not get updated when 'last_rendering_seqno' became 'last_read|write_seqno'
several millennia ago.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
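A minimal sketch of the assignment pattern described above (the
refcounting is a simplified single-threaded stand-in): storing a request
pointer into an object slot references the new request and unreferences
the old one, so a slot can never dangle.

    #include <stdio.h>

    struct request { int refcount; };

    static void request_reference(struct request *req)
    {
        req->refcount++;
    }

    static void request_unreference(struct request *req)
    {
        if (--req->refcount == 0)
            printf("request freed\n");
    }

    static void request_assign(struct request **slot, struct request *src)
    {
        if (src)
            request_reference(src);
        if (*slot)
            request_unreference(*slot);
        *slot = src;
    }

    int main(void)
    {
        struct request a = { .refcount = 1 }, b = { .refcount = 1 };
        struct request *last_read_req = NULL;

        request_assign(&last_read_req, &a);   /* a: 2 refs    */
        request_assign(&last_read_req, &b);   /* a: 1, b: 2   */
        request_assign(&last_read_req, NULL); /* b: back to 1 */
        return 0;
    }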
Added helper functions for retrieving the ring and seqno entries from a request
structure. This allows the internal workings of the request structure to be
hidden from code that is using these. It also allows for useful
workarounds/debug code to be added as or when necessary.
Note that it is intended that the majority (if not all) uses of the seqno
accessor will disappear eventually as code is updated to use the request
structure itself rather than working with seqno values.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
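The accessors are trivial today, but they give one place to hide the
request internals and hang debug hooks. A sketch with simplified
stand-in types:

    #include <stdint.h>
    #include <stdio.h>

    struct ring { const char *name; };
    struct request { struct ring *ring; uint32_t seqno; };

    static inline struct ring *request_get_ring(struct request *req)
    {
        return req->ring;
    }

    static inline uint32_t request_get_seqno(struct request *req)
    {
        return req->seqno;
    }

    int main(void)
    {
        struct ring rcs = { "render" };
        struct request req = { &rcs, 42 };

        printf("%s seqno %u\n", request_get_ring(&req)->name,
               (unsigned int)request_get_seqno(&req));
        return 0;
    }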
The plan is to use request structures everywhere that seqno values were
previously used. This means saving pointers to structures in places that used to
be simple integers. In turn, that means that the target structure now needs much
more stringent lifetime tracking. That is, it must not be freed while some other
random object still holds a pointer to it.
To achieve this tracking, a reference count needs to be added. Whenever a
pointer to the structure is saved away, the count must be incremented and the
free must only occur when all references have been released.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
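The lifetime rule in miniature (a plain counter stands in for the
kernel's kref; names are illustrative): every saved-away pointer holds a
reference, and the request is freed only when the last one is released.

    #include <stdio.h>
    #include <stdlib.h>

    struct request { int refcount; };

    static struct request *request_create(void)
    {
        struct request *req = malloc(sizeof(*req));

        if (req)
            req->refcount = 1;          /* creator's reference */
        return req;
    }

    static void request_reference(struct request *req)
    {
        req->refcount++;
    }

    static void request_unreference(struct request *req)
    {
        if (--req->refcount == 0) {
            printf("last reference dropped, freeing\n");
            free(req);
        }
    }

    int main(void)
    {
        struct request *req = request_create();
        struct request *saved;

        if (!req)
            return 1;
        saved = req;                    /* saving a pointer away...  */
        request_reference(saved);       /* ...takes a reference      */
        request_unreference(req);       /* creator done; still alive */
        request_unreference(saved);     /* now it is freed */
        return 0;
    }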
The aim is to replace seqno values with request structures. A step along the way
is to switch to using the PLR in preference to the OLS. That requires the PLR
to be valid when, and only when, the OLS is also valid, i.e. the two must be
kept in lock step. Then, code which was using the OLS can be safely switched
over to using the PLR instead.
For: VIZ-4377
Signed-off-by: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Thomas Daniel <Thomas.Daniel@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
That's the version actually taking the dev_priv->power_domains lock.
Signed-off-by: Damien Lespiau <damien.lespiau@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Before suspending, we wait upon the outstanding GPU requests and flush
our pending idle handlers. This should downclock the GPU to its lowest
power state. Add a WARN to check that the delayed tasks were run and did
their job properly.
Suggested-by: Akash Goel <akash.goel@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>