It used to be handy that we only had a couple of headers, but over time
intel_drv.h has become unwieldy. Extract declarations to a separate
header file corresponding to the implementation module, clarifying the
modularity of the driver.
Ensure the new header is self-contained, and do so with minimal further
includes, using forward declarations as needed. Include the new header
only where needed, and sort the modified include directives while at it
and as needed.
No functional changes.
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/64e46278dc8dccc9c548ef453cb2ceece5367bb2.1556540890.git.jani.nikula@intel.com
We now have two locks for sideband access. The general one covering
sideband access across all generations, sb_lock, and a specific one
covering sideband access via the punit on vlv/chv. After lifting the
sb_lock around the punit into the callers, the pcu_lock is now redundant
and can be separated from its other use to regulate RPS (essentially
giving RPS a lock all of its own).
v2: Extract a couple of minor bug fixes.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-4-chris@chris-wilson.co.uk
As we now employ a very heavy pm_qos around the punit access, we want to
minimise the number of synchronous requests by performing one for the
whole punit sequence rather than around individual accesses. The
sideband lock is used for this, so push the pm_qos into the sideband
lock acquisition and release, moving it from the lowlevel punit rw
routine to the callers. In the first step, we move the punit magic into
the common sideband lock so that we can acquire a bunch of ports
simultaneously, and if need be extend the workaround protection later.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-2-chris@chris-wilson.co.uk
While we talk to the punit over its sideband, we need to prevent the cpu
from sleeping in order to prevent a potential machine hang.
Note that by itself, it appears that pm_qos_update_request (via
intel_idle) doesn't provide a sufficient barrier to ensure that all cores
are indeed awake (out of Cstate) and that the package is awake. To do so,
we need to supplement the pm_qos with a manual ping on_each_cpu.
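A rough sketch of that combination (not the actual driver code; the
request setup and the function names are illustrative):

    #include <linux/pm_qos.h>
    #include <linux/smp.h>

    /* Added once at init with:
     *   pm_qos_add_request(&punit_qos, PM_QOS_CPU_DMA_LATENCY,
     *                      PM_QOS_DEFAULT_VALUE);
     */
    static struct pm_qos_request punit_qos;

    static void ping(void *info)
    {
            /* Empty IPI; merely running it pulls each cpu out of its C-state. */
    }

    static void punit_access_begin(void)
    {
            /* Disallow deep C-states for the duration of the punit sequence... */
            pm_qos_update_request(&punit_qos, 0);
            /* ...and ping every cpu, as the QoS update alone is not a
             * sufficient barrier to guarantee the package is already awake.
             */
            on_each_cpu(ping, NULL, 1);
    }

    static void punit_access_end(void)
    {
            pm_qos_update_request(&punit_qos, PM_QOS_DEFAULT_VALUE);
    }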
v2: Restrict the heavy-weight wakeup to just the ISOF_PORT_PUNIT, there
is insufficient evidence to implicate a wider problem atm. Similarly,
restrict the w/a to Valleyview, as Cherryview doesn't have an angry cadre
of users.
The working theory, courtesy of Ville and Hans, is that the issue lies within
the power delivery and so is likely to be unit and board specific and
occurs when both the unit/fw require extra power at the same time as the
cpu package is changing its own power state.
References: https://bugzilla.kernel.org/show_bug.cgi?id=109051
References: https://bugs.freedesktop.org/show_bug.cgi?id=102657
References: https://bugzilla.kernel.org/show_bug.cgi?id=195255
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Hans de Goede <hdegoede@redhat.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190426081725.31217-1-chris@chris-wilson.co.uk
In the current scheme, on submitting a request we take a single global
GEM wakeref, which trickles down to wake up all GT power domains. This
is undesirable as we would like to be able to localise our power
management to the available power domains and to remove the global GEM
operations from the heart of the driver. (The intent there is to push
global GEM decisions to the boundary as used by the GEM user interface.)
Now during request construction, each request is responsible via its
logical context to acquire a wakeref on each power domain it intends to
utilize. Currently, each request takes a wakeref on the engine(s) and
the engines themselves take a chipset wakeref. This gives us a
transition on each engine which we can extend if we want to insert more
power management control (such as soft rc6). The global GEM operations
that currently require a struct_mutex are reduced to listening to pm
events from the chipset GT wakeref. As we reduce the struct_mutex
requirement, these listeners should evaporate.
Perhaps the biggest immediate change is that this removes the
struct_mutex requirement around GT power management, allowing us greater
flexibility in request construction. Another important knock-on effect,
is that by tracking engine usage, we can insert a switch back to the
kernel context on that engine immediately, avoiding any extra delay or
inserting global synchronisation barriers. This makes tracking when an
engine and its associated contexts are idle much easier -- important for
when we forgo our assumed execution ordering and need idle barriers to
unpin used contexts. In the process, it means we remove a large chunk of
code whose only purpose was to switch back to the kernel context.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-5-chris@chris-wilson.co.uk
For controlling runtime pm of the GT and engines, we would like to have
a callback to do extra work the first time we wake up and the last time
we drop the wakeref. This first/last access needs serialisation and so
we encompass a mutex with the regular intel_wakeref_t tracker.
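A simplified model of the resulting structure (a sketch with made-up
names; the real intel_wakeref carries more state than this):

    #include <linux/atomic.h>
    #include <linux/mutex.h>

    struct wakeref {
            struct mutex mutex;     /* serialises the first/last transitions */
            atomic_t count;
            int (*wake)(struct wakeref *wf);   /* called on the 0 -> 1 edge */
            int (*sleep)(struct wakeref *wf);  /* called on the 1 -> 0 edge */
    };

    static int wakeref_get(struct wakeref *wf)
    {
            int err = 0;

            if (atomic_inc_not_zero(&wf->count))
                    return 0;       /* already awake, fast path */

            mutex_lock(&wf->mutex);
            if (!atomic_read(&wf->count)) {
                    err = wf->wake(wf);     /* first user does the extra work */
                    if (!err)
                            atomic_inc(&wf->count);
            } else {
                    atomic_inc(&wf->count);
            }
            mutex_unlock(&wf->mutex);

            return err;
    }

    static int wakeref_put(struct wakeref *wf)
    {
            int err = 0;

            if (atomic_dec_and_mutex_lock(&wf->count, &wf->mutex)) {
                    err = wf->sleep(wf);    /* last user does the teardown */
                    mutex_unlock(&wf->mutex);
            }

            return err;
    }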
v2: Drop the _once naming and report the errors.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190424200717.1686-1-chris@chris-wilson.co.uk
Start partitioning off the code that talks to the hardware (GT) from the
uapi layers and move the device facing code under gt/
One casualty is s/intel_ringbuffer.h/intel_engine.h/ with the plan to
subdivide that header and body further (and split out the submission
code from the ringbuffer and logical context handling). This patch aims
to be simple motion so git can fixup inflight patches with little mess.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190424174839.7141-1-chris@chris-wilson.co.uk
As we push for better compartmentalisation, it is more convenient to
copy the default sseu configuration from the engine into the derived
logical context, than it is to dig it out from i915->runtime_info.
v2: Use intel_sseu_from_device_info() to describe the converter
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190424095134.30249-1-chris@chris-wilson.co.uk
When we are called to relieve mempressure via the shrinker, the only way
we can make progress is either by discarding unwanted pages (those
objects that userspace has marked MADV_DONTNEED) or by reclaiming the
dirty objects via swap. As we know that is the only way to make further
progress, we can initiate the writeback as we invalidate the objects.
This means the objects we put onto the inactive anon lru list are
already marked for reclaim+writeback and so will trigger a wait upon the
writeback inside direct reclaim, greatly improving the success rate of
direct reclaim on i915 objects.
The corollary is that we may start a slow swap on opportunistic
mempressure from the likes of the compaction + migration kthreads. This
is limited by those threads only being allowed to shrink idle pages, but
also that if we reactivate the page before it is swapped out by gpu
activity, we only pay the cost of repinning the page. The cost is most
felt when an object is reused after mempressure, which hopefully
excludes the latency sensitive tasks (as we are just extending the
impact of swap thrashing to them).
Apparently this is not the first time we've had this idea. Back in
commit 5537252b6b ("drm/i915: Invalidate our pages under memory
pressure") we wanted to start writeback but settled on invalidate after
Hugh Dickins warned us about a possibility of a deadlock within shmemfs
if we started writeback from shrink_slab. Looking at the callchain,
using writeback from i915_gem_shrink should be equivalent to the pageout
also employed by shrink_slab, i.e. it should not be any riskier afaict.
v2: Leave mmappings intact. At this point, the only mmappings of our
objects will be via CPU mmaps on the shmemfs filp, which are
out-of-scope for our LRU tracking. Instead leave those pages to the
inactive anon LRU page list for aging and pageout as normal.
v3: Be selective on which paths trigger writeback, in particular
excluding paths shrinking just to reclaim vm space (e.g. mmap, vmap
reapers) and avoid starting writeback on the entire process space from
within the pm freezer.
References: https://bugs.freedesktop.org/show_bug.cgi?id=108686
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Michal Hocko <mhocko@suse.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com> #v1
Link: https://patchwork.freedesktop.org/patch/msgid/20190420115539.29081-1-chris@chris-wilson.co.uk
For consistency (and elegance!), add intel_device_info.has_rps.
The immediate boon is that RPS support is now emitted along with the other
capabilities in the debug log and after errors.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Sagar Arun Kamble <sagar.a.kamble@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190419134836.5626-1-chris@chris-wilson.co.uk
On resume, we know that the only pinned contexts in danger of seeing
corruption are the kernel context, and so we do not need to walk the
list of all GEM contexts as we tracked them on each engine.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190410190120.830-1-chris@chris-wilson.co.uk
As soon as a device is considered unplugged, not only prevent pending
users from accessing the device structures but also cancel all their
pending requests so all consumed resources can be cleaned up as soon
as possible.
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Janusz Krzysztofik <janusz.krzysztofik@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190406104034.31380-1-chris@chris-wilson.co.uk
If we have only a single active pipe and the cdclk change only requires
the cd2x divider to be updated, bxt+ can do the update without forcing a
full modeset on the pipe. Try to hook that up.
v2:
- Wait for vblank after an optimized CDCLK change.
- Avoid optimization if the pipe needs a modeset (or was disabled).
- Split CDCLK change to a pre/post plane update step.
v3:
- Use correct version of CDCLK state as old state. (Ville)
- Remove unused intel_cdclk_can_skip_modeset()
v4:
- For consistency call intel_set_cdclk_post_plane_update() only during
modesets (and not fastsets).
v5:
- Remove the logic to update the CD2X divider on-the-fly on ICL, since
only a divider of 1 is supported there. Clint also noticed that the
pipe select bits in CDCLK_CTL are oddly defined on ICL, it's not clear
yet whether that's only an error in the specification.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Abhay Kumar <abhay.kumar@intel.com>
Tested-by: Abhay Kumar <abhay.kumar@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190327101321.3095-1-imre.deak@intel.com
CDCLK has to be at least twice the BCLK regardless of audio. Audio
driver has to probe using this hook and increase the clock even in
absence of any display.
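Roughly what the get/put hook pair looks like (a sketch only;
request_min_cdclk() and the 96 MHz BCLK figure are placeholders, and the
refcounting matches the v2 note below):

    #include <linux/atomic.h>

    static atomic_t audio_power_refcount;

    static void audio_get_power(struct drm_i915_private *i915)
    {
            if (atomic_inc_return(&audio_power_refcount) == 1)
                    /* First user: force CDCLK >= 2 * BCLK, display or not. */
                    request_min_cdclk(i915, 2 * 96000 /* kHz */);
    }

    static void audio_put_power(struct drm_i915_private *i915)
    {
            if (atomic_dec_and_test(&audio_power_refcount))
                    request_min_cdclk(i915, 0);
    }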
v2: Use atomic refcount for get_power, put_power so that we can
call each once (Abhay).
v3: Reset power well 2 to avoid any transaction on iDisp link
during cdclk change (Abhay).
v4: Remove Power well 2 reset workaround (Ville).
v5: Remove unwanted Power well 2 register defined in v4 (Abhay).
v6:
- Use a dedicated flag instead of state->modeset for min CDCLK changes
- Make get/put audio power domain symmetric
- Rebased on top of intel_wakeref tracking changes.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Abhay Kumar <abhay.kumar@intel.com>
Tested-by: Abhay Kumar <abhay.kumar@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Clint Taylor <Clinton.A.Taylor@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190320135439.12201-1-imre.deak@intel.com
We want to use intel_engine_mask_t inside i915_request.h, which means
extracting it from the general header file mess and placing it inside a
types.h. A knock-on effect is that the compiler wants to warn about
type-contraction of ALL_ENGINES into intel_engine_mask_t, so prepare
for the worst.
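The extracted type, roughly (a sketch of what lands in the new types
header):

    #include <linux/types.h>

    /* intel_engine_types.h */
    typedef u8 intel_engine_mask_t;
    #define ALL_ENGINES ((intel_engine_mask_t)~0ul)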
v2: Use intel_engine_mask_t consistently
v3: Move I915_NUM_ENGINES to its natural home at the end of the enum
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Cc: John Harrison <John.C.Harrison@Intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190401162641.10963-1-chris@chris-wilson.co.uk
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
The concept of a sub-platform already exists in our code (like ULX and ULT
platform variants and similar), implemented via macros which check a
list of device ids to determine a match.
With this patch we consolidate device ids checking into a single function
called during early driver load.
A few low bits in the platform mask are reserved for sub-platform
identification and defined as a per-platform namespace.
At the same time it future-proofs the platform_mask handling by preparing
the code for easy extending, and tidies the very verbose WARN strings
generated when IS_PLATFORM macros are embedded into WARN type
statements.
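A hand-waved sketch of the scheme (the bit count and the helper are
illustrative; the real check also deals with the platform_mask array):

    #include <linux/bits.h>

    /* A few low bits of the per-platform word name the sub-platform. */
    #define INTEL_SUBPLATFORM_BITS  3
    #define INTEL_SUBPLATFORM_MASK  (BIT(INTEL_SUBPLATFORM_BITS) - 1)

    /* Per-platform namespace, e.g. for KBL/CFL: ULT, ULX, AML, ... */

    static __always_inline bool
    intel_subplatform_match(u32 platform_bits, unsigned int sub)
    {
            return (platform_bits & INTEL_SUBPLATFORM_MASK) == BIT(sub);
    }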
v2: Fixed IS_SUBPLATFORM. Updated commit msg.
v3: Chris was right, there is an ordering problem.
v4:
* Catch-up with new sub-platforms.
* Rebase for RUNTIME_INFO.
* Drop subplatform mask union tricks and convert platform_mask to an
array for extensibility.
v5:
* Fix subplatform check.
* Protect against forgetting to expand subplatform bits.
* Remove platform enum tallying.
* Add subplatform to error state. (Chris)
* Drop macros and just use static inlines.
* Remove redundant IRONLAKE_M. (Ville)
v6:
* Split out Ironlake change.
* Optimize subplatform check.
* Use __always_inline. (Lucas)
* Add platform_mask comment. (Paulo)
* Pass stored runtime info in error capture. (Chris)
v7:
* Rebased for new AML ULX device id.
* Bump platform mask array size for EHL.
* Stop mentioning device ids in intel_device_subplatform_init by using
the trick of splitting macros in i915_pciids.h. (Jani)
* AML seems to be either a subplatform of KBL or CFL so express it like
that.
v8:
* Use one device id table per subplatform. (Jani)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Jose Souza <jose.souza@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190327142328.31780-1-tvrtko.ursulin@linux.intel.com
IS_IRONLAKE_M can use the already defined intel_device_info.is_mobile for
this platform, so remove the instance of Ironlake's mobile device id from
the header file and replace it with an IS_MOBILE check.
v2:
* Improved commit text. (Chris)
v3:
* Rebased for EHL.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190326074057.27833-3-tvrtko.ursulin@linux.intel.com
This allows the IS_PINEVIEW_<G|M> macros to be removed and avoid
duplication of device ids already defined in i915_pciids.h.
A !IS_MOBILE check can be used in place of the existing IS_PINEVIEW_G call
sites.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190326074057.27833-2-tvrtko.ursulin@linux.intel.com
edram is not part of uncore and there is no requirement for the
detection to be done before we initialize the uncore functions. The
first check on HAS_EDRAM is in the ggtt_init path, so move it to
i915_driver_init_hw, where other dram-related detection happens.
While at it, save the size in MB instead of the capabilities because the
size is the only thing we look at outside of the init function.
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190328174533.31532-1-daniele.ceraolospurio@intel.com
The current intel_color_check() is a mess, and worse yet it is
in fact incorrect for several platforms. The hardware has
evolved quite a bit over the years, so let's just go for a clean
split between the platforms by turning this into a vfunc.
The actual work to split it up will follow.
v2: Assign the vfuncs in the order they appear in the
struct (Matt)
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190327155045.28446-3-ville.syrjala@linux.intel.com
Tvrtko spotted that I left off the trailing ';'. It went unnoticed by CI
because despite adding the macro, we didn't add a user, so include one as
well (a simple debug print).
Reported-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: 97ee6e9255 ("drm/i915: stop storing the media fuse")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190326180007.11722-1-chris@chris-wilson.co.uk
The full read/write ops can now work on the intel_uncore struct.
Introduce intel_uncore_read/write functions working on intel_uncore
and switch the I915_READ/WRITE macro to internally call those.
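A sketch of the new entry points and the macro redirection (simplified;
the mmio function-pointer names here are illustrative):

    static inline u32 intel_uncore_read(struct intel_uncore *uncore, i915_reg_t reg)
    {
            return uncore->funcs.mmio_readl(uncore, reg, true);
    }

    static inline void intel_uncore_write(struct intel_uncore *uncore,
                                          i915_reg_t reg, u32 val)
    {
            uncore->funcs.mmio_writel(uncore, reg, val, true);
    }

    #define I915_READ(reg)        intel_uncore_read(&dev_priv->uncore, (reg))
    #define I915_WRITE(reg, val)  intel_uncore_write(&dev_priv->uncore, (reg), (val))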
v2: no change
v3: add intel_uncore_read/write functions (Chris), update commit msg
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190325214940.23632-6-daniele.ceraolospurio@intel.com
They now work on uncore, so use raw_uncore_ prefix. Also move them to
uncore.h
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190325214940.23632-2-daniele.ceraolospurio@intel.com
We're already updating the engine_mask to reflect what's in the HW, so
we can just get the info from there. A couple of macros have been added
to facilitate this.
v2: Appease checkpatch
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Michal Wajdeczko <michal.wajdeczko@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190322002431.9585-1-daniele.ceraolospurio@intel.com
Iterate over child devices instead of ports in parse_ddi_ports() to
initialize ddi_port_info. We'll eventually need to decide some stuff
based on the child device order, which may be different from the port
order.
As a bonus, this allows better abstractions for e.g. dvo port mapping.
There's a subtle change in the DDC pin and AUX channel sanitization as
we change the order. Otherwise, this should not change behaviour.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190322121008.4456-1-jani.nikula@intel.com
The AGPBUSY thing doesn't work on i945gm anymore. This means
the gmch is incapable of waking the CPU from C3 when an interrupt
is generated. The interrupts just get postponed indefinitely until
something wakes up the CPU. This is rather annoying for vblank
interrupts as we are unable to maintain a steady framerate
unless the machine is sufficiently loaded to stay out of C3.
To combat this let's use pm_qos to prevent C3 whenever vblank
interrupts are enabled. To maintain a reasonable amount of powersaving
we will attempt to limit the block to C3 only, while leaving C1 and C2
enabled.
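A sketch of the mechanism (names and the latency derivation are
illustrative; the real code sizes the constraint from the measured C3
exit latency and runs from a worker):

    #include <linux/pm_qos.h>

    struct i945gm_vblank {
            struct pm_qos_request qos; /* added with PM_QOS_CPU_DMA_LATENCY */
            int c3_latency_us;         /* exit latency of C3 on this machine */
            int enabled;               /* vblank irq count, caller-serialised */
    };

    static void vblank_enable(struct i945gm_vblank *v)
    {
            if (v->enabled++ == 0)
                    /* Cap wakeup latency just below C3 so C1/C2 remain usable. */
                    pm_qos_update_request(&v->qos, v->c3_latency_us - 1);
    }

    static void vblank_disable(struct i945gm_vblank *v)
    {
            if (--v->enabled == 0)
                    pm_qos_update_request(&v->qos, PM_QOS_DEFAULT_VALUE);
    }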
v2: Use READ_ONCE() (Chris)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=30364
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190322180804.3300-1-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
If I'm reading the spec right AML 0x87CA is a Y SKU, so it
should be marked as ULX in our old style terminology.
Cc: stable@vger.kernel.org
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Fixes: c0c46ca461 ("drm/i915/aml: Add new Amber Lake PCI ID")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190322204944.23613-1-ville.syrjala@linux.intel.com
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Add ElkhartLake as a unique platform as there are some differences
between it and Icelake.
Signed-off-by: Bob Paauwe <bob.j.paauwe@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190322175847.25707-2-rodrigo.vivi@intel.com
In preparation to making the ppGTT binding for a context explicit (to
facilitate reusing the same ppGTT between different contexts), allow the
user to create and destroy named ppGTT.
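For reference, the uapi handle this adds is tiny; roughly (treat the
exact ioctl plumbing as illustrative):

    struct drm_i915_gem_vm_control {
            __u64 extensions;  /* chain of optional extension structs */
            __u32 flags;       /* must be zero for now */
            __u32 vm_id;       /* out on create / in on destroy: the ppGTT handle */
    };

    /* Userspace, roughly:
     *   ioctl(fd, DRM_IOCTL_I915_GEM_VM_CREATE, &ctl);
     *   ... attach ctl.vm_id to one or more contexts via I915_CONTEXT_PARAM_VM ...
     *   ioctl(fd, DRM_IOCTL_I915_GEM_VM_DESTROY, &ctl);
     */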
v2: Replace global barrier for swapping over the ppgtt and tlbs with a
local context barrier (Tvrtko)
v3: serialise with struct_mutex; it's lazy but required dammit
v4: Rewrite igt_ctx_shared_exec to be more different (aimed to be more
similar, turned out different!)
v5: Fix up test unwind for aliasing-ppgtt (snb)
v6: Tighten language for uapi struct drm_i915_gem_vm_control.
v7: Patch the context image for runtime ppgtt switching!
Testcase: igt/gem_vm_create
Testcase: igt/gem_ctx_param/vm
Testcase: igt/gem_ctx_clone/vm
Testcase: igt/gem_ctx_shared
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190322092325.5883-2-chris@chris-wilson.co.uk
When we return pages to the system, we ensure that they are marked as
being in the CPU domain since any external access is uncontrolled and we
must assume the worst. This means that we need to always flush the pages
on acquisition if we need to use them on the GPU, and from the beginning
have used set-domain. Set-domain is overkill for the purpose as it is a
general synchronisation barrier, but our intent is to only flush the
pages being swapped in. If we move that flush into the pages acquisition
phase, we know then that when we have obj->mm.pages, they are coherent
with the GPU and need only maintain that status without resorting to
heavy handed use of set-domain.
The principal knock-on effect for userspace is through mmap-gtt
pagefaulting. Our uAPI has always implied that the GTT mmap was async
(especially as when any pagefault occurs is unpredictable to userspace)
and so userspace had to apply explicit domain control itself
(set-domain). However, swapping is transparent to the kernel, and so on
first fault we need to acquire the pages and make them coherent for
access through the GTT. Our use of set-domain here leaks into the uABI
that the first pagefault was synchronous. This is unintentional and,
barring a few igt, should go unnoticed; nevertheless we bump the uABI
version for mmap-gtt to reflect the change in behaviour.
Another implication of the change is that gem_create() is presumed to
create an object that is coherent with the CPU and is in the CPU write
domain, so a set-domain(CPU) following a gem_create() would be a minor
operation that merely checked whether we could allocate all pages for
the object. On applying this change, a set-domain(CPU) causes a clflush
as we acquire the pages. This will have a small impact on mesa as we move
the clflush here on !llc from execbuf time to create, but that should
have minimal performance impact as the same clflush exists but is now
done early; and because of the clflush issue, userspace recycles bo and
so should resist allocating fresh objects.
Internally, the presumption that objects are created in the CPU
write-domain and remain so through writes to obj->mm.mapping is more
prevalent than I expected; but easy enough to catch and apply a manual
flush.
For the future, we should push the page flush from the central
set_pages() into the callers so that we can more finely control when it
is applied, but for now doing it in one location is easier to validate, at
the cost of sometimes flushing when there is no need.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Antonio Argenziano <antonio.argenziano@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.william.auld@gmail.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190321161908.8007-1-chris@chris-wilson.co.uk
Define a mutex for the exclusive use of interacting with the per-file
context-idr, that was previously guarded by struct_mutex. This allows us
to reduce the coverage of struct_mutex, with a view to removing the last
bits coordinating GEM context later. (In the short term, we avoid taking
struct_mutex while using the extended constructor functions, preventing
some nasty recursion.)
v2: s/context_lock/context_idr_lock/
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190321140711.11190-2-chris@chris-wilson.co.uk
This allows us to ditch i915 in some more places.
v2: use local var in check_vgpu (Paulo)
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190319183543.13679-9-daniele.ceraolospurio@intel.com
This will allow further simplifications in the uncore handling.
v2: move register access setup under uncore (Chris)
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190319183543.13679-8-daniele.ceraolospurio@intel.com
Get/put functions used outside of uncore.c are updated in the next
patch for a nicer split.
v2: use dev_priv where we still have it (Paulo)
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190319183543.13679-3-daniele.ceraolospurio@intel.com
With the introduction of the separate addressable bits into the device
info, we can remove the conflation of the ppgtt size with the ppgtt
type.
Based on a patch by Bob Paauwe.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Bob Paauwe <bob.j.paauwe@intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190314223839.28258-3-chris@chris-wilson.co.uk
As the maximum number of addressable bits is determined by the platform, record that
information in our static chipset tables. This has the advantage of
being clearly recorded in our capability dumps for dmesg, debugfs and
error states.
Based on a patch by Bob Paauwe.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Bob Paauwe <bob.j.paauwe@intel.com>
Cc: Matthew Auld <matthew.william.auld@gmail.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190314223839.28258-2-chris@chris-wilson.co.uk
A new field with the training pattern (TP) wakeup time for PSR2 was
added to the VBT, so let's use it when available; otherwise fall back
to the PSR1 wakeup time.
v2: replacing enum to numerical usec time (Jani)
BSpec: 20131
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190312195743.8829-1-jose.souza@intel.com
In order to make it easier to bring up new platforms, without having
to take care of all the corner cases already handled for previous
platforms, we already use comparative INTEL_GEN statements.
Let's start doing the same with PCH.
The only caveats are:
- less-than comparisons need to be avoided or done with
attention and check > PCH_NONE as well.
- It is not necessarily a chronological order, but a matter
of south display compatibility/inheritance.
v2: Rebased on top of Jani's clean-up which removed the
need for less-than comparison
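A sketch of what the ordering buys us (the enum below mirrors the
upstream order of the time; the helper at the end is made up):

    enum intel_pch {
            PCH_NONE = 0,   /* No PCH present */
            PCH_IBX,        /* Ibex Peak */
            PCH_CPT,        /* Cougar Point / Panther Point */
            PCH_LPT,        /* Lynx Point */
            PCH_SPT,        /* Sunrise Point */
            PCH_KBP,        /* Kaby Lake Point */
            PCH_CNP,        /* Cannon Point */
            PCH_ICP,        /* Ice Lake Point */
    };

    /* Instead of listing every PCH explicitly: */
    if (INTEL_PCH_TYPE(dev_priv) >= PCH_CNP)
            setup_cnp_and_newer_south_display(dev_priv); /* made-up helper */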
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190308214300.25057-3-rodrigo.vivi@intel.com
So we can later use PCH >= comparisons. The ultimate goal
is to make it easier for us to introduce a new platform
with south display engine on PCH just by reusing the previous
one.
Suggested-by: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190308214300.25057-2-rodrigo.vivi@intel.com
Currently we assume that we know the order in which requests run and so
can determine if we need to reissue a switch-to-kernel-context prior to
idling. That assumption does not hold for the future, so instead of
tracking which barriers have been used, simply determine if we have ever
switched away from the kernel context by using the engine and before
idling ensure that all engines that have been used since the last idle
are synchronously switched back to the kernel context for safety (and
for the sake of shrinking memory while idle).
v2: Use intel_engine_mask_t and ALL_ENGINES
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190308093657.8640-3-chris@chris-wilson.co.uk
When the system idles, we switch to the kernel context as a defensive
measure (no users are harmed if the kernel context is lost). Currently,
we issue a switch to kernel context and then come back later to see if
the kernel context is still current and the system is idle. However,
if we are no longer privy to the runqueue ordering, then we have to
relax our assumptions about the logical state of the GPU and the only
way to ensure that the kernel context is currently loaded is by issuing
a request to run after all others, and wait for it to complete all while
preventing anyone else from issuing their own requests.
v2: Pull wedging into switch_to_kernel_context_sync() but only after
waiting (though only for the same short delay) for the active context to
finish.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190308093657.8640-1-chris@chris-wilson.co.uk
We'll need to know the memory type in the system for some
bandwidth limitations and whatnot. Let's read that out on
gen9+.
v2: Rebase
v3: Fix the copy paste fail in the BXT bit definitions (Jani)
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190306203551.24592-13-ville.syrjala@linux.intel.com
Pass the dimm struct to skl_is_16gb_dimm() rather than passing each
value separately. And let's replace the hardcoded set of values with
some simple arithmetic.
Also fix the byte vs. bit inconsistency in the debug message,
and polish the wording otherwise as well.
v2: Deobfuscate the math (Chris)
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190306203551.24592-4-ville.syrjala@linux.intel.com
Instead of passing the gem_context and engine to find the instance of
the intel_context to use, pass around the intel_context instead. This is
useful for the next few patches, where the intel_context is no longer a
direct lookup.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190306084704.15755-1-chris@chris-wilson.co.uk
To find the active request, we need only search along the individual
engine for the right request. This does not require touching any global
GEM state, so move it into the engine compartment.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190305180332.30900-3-chris@chris-wilson.co.uk
In the next patch, we are introducing a broad virtual engine to encompass
multiple physical engines, losing the 1:1 nature of BIT(engine->id). To
reflect the broader set of engines implied by the virtual instance, lets
store the full bitmask.
v2: Use intel_engine_mask_t (s/ring_mask/engine_mask/)
v3: Tvrtko voted for moah churn so teach everyone to not mention ring
and use $class$instance throughout.
v4: Comment upon the disparity in bspec for using VCS1,VCS2 in gen8 and
VCS[0-4] in later gen. We opt to keep the code consistent and use
0-index naming throughout.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190305180332.30900-1-chris@chris-wilson.co.uk
Define a HAS_TRANSCODER_EDP() macro that checks if we have defined an
offset for this transcoder. This allows platforms to be defined without
an eDP transcoder.
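Roughly (a sketch of the check described above):

    #define HAS_TRANSCODER_EDP(dev_priv) \
            (INTEL_INFO(dev_priv)->trans_offsets[TRANSCODER_EDP] != 0)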
Cc: Mika Kahola <mika.kahola@intel.com>
Cc: Imre Deak <imre.deak@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Mika Kahola <mika.kahola@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190222230254.20351-2-lucas.demarchi@intel.com
We currently use a worker queued from an rcu callback to determine when
an RCU grace period has elapsed while we remained idle. We use this idle
delay to infer that we will be idle for a while and this is a suitable
point at which we can trim our global memory caches.
Since we wrote that, this mechanism now exists as rcu_work, and having
converted the idle shrinkers over to using that, we can remove our own
variant.
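The rcu_work form of the same idea, as a sketch (the function names are
placeholders):

    #include <linux/workqueue.h>

    static struct rcu_work idle_work;

    static void idle_work_fn(struct work_struct *wrk)
    {
            /* A grace period has passed since parking: trim global caches. */
    }

    static void i915_globals_init(void)
    {
            INIT_RCU_WORK(&idle_work, idle_work_fn);
    }

    static void i915_globals_park(void)
    {
            /* Queues idle_work_fn on system_wq once an RCU grace period elapses. */
            queue_rcu_work(system_wq, &idle_work);
    }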
v2: Say goodbye to gt.epoch as well.
v3: Remove the misplaced and redundant comment before parking globals
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190228102035.5857-3-chris@chris-wilson.co.uk
As kmem_caches share the same properties (size, allocation/free behaviour)
for all potential devices, we can use global caches. While this
potentially has worse fragmentation behaviour (one can argue that
different devices would have different activity lifetimes, but you can
also argue that activity is temporal across the system) it is the
default behaviour of the system at large to amalgamate matching caches.
The benefit for us is much reduced pointer dancing along the frequent
allocation paths.
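A sketch of the shape of it (names are illustrative; struct i915_request
comes from the driver):

    #include <linux/rcupdate.h>
    #include <linux/slab.h>

    /* One cache shared by every i915 device, created once at module load. */
    static struct kmem_cache *slab_requests;

    int i915_global_request_init(void)
    {
            slab_requests = KMEM_CACHE(i915_request, SLAB_HWCACHE_ALIGN);
            return slab_requests ? 0 : -ENOMEM;
    }

    void i915_global_request_exit(void)
    {
            rcu_barrier(); /* let any RCU-deferred frees drain first */
            kmem_cache_destroy(slab_requests);
    }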
v2: Defer shrinking until after a global grace period for futureproofing
multiple consumers of the slab caches, similar to the current strategy
for avoiding shrinking too early.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190228102035.5857-1-chris@chris-wilson.co.uk
Other than LPT, no other PCH needed to differentiate between
LP and HP. So let's remove this before we spread this mistake
to future platforms.
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190221211716.9433-1-rodrigo.vivi@intel.com
On skl the crc registers were extended to provide plane crcs
for up to 7 planes. Add the new crc sources.
The current code uses the ivb+ register definitions for skl+
which does happen to work as the plane1, plane2, and dmux/pf
bits happen to match what ivb+ had. So no bug in the current
code.
v2: Drop the unused set_wa parameter (DK)
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190214192219.3858-4-ville.syrjala@linux.intel.com
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Define the mei-i915 interface functions and the initialization of
the interface.
v2:
Adjust to the new interface changes. [Tomas]
Added further debug logs for the failures at MEI i/f.
port in hdcp_port_data is equipped to handle negative values.
v3:
mei comp is matched for global i915 comp master. [Daniel]
In hdcp_shim hdcp_protocol() is replaced with const variable. [Daniel]
mei wrappers are adjusted as per the i/f change [Daniel]
v4:
port initialization is done only at hdcp2_init only [Danvet]
v5:
I915 registers a subcomponent to be matched with mei_hdcp [Daniel]
v6:
HDCP_disable for all connectors in case of comp_unbind.
Tear down HDCP comp interface at i915_unload [Daniel]
v7:
Component init and fini are moved out of connector ops [Daniel]
hdcp_disable is not called from unbind. [Daniel]
v8:
subcomponent name is dropped as it is already merged.
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch> [v11]
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/1550338640-17470-5-git-send-email-ramalingam.c@intel.com
At a few points in our uABI, we check to see if the driver is wedged and
report -EIO back to the user in that case. However, as we perform the
check and reset asynchronously (where once before they were both
serialised by the struct_mutex), we may instead see the temporary wedging
used to cancel inflight rendering to avoid a deadlock during reset
(caused by either us timing out in our reset handler,
i915_wedge_on_timeout or with malice aforethought in intel_reset_prepare
for a stuck modeset). If we suspect this is the case, that is we see a
wedged driver *and* reset in progress, then wait until the reset is
resolved before reporting upon the wedged status.
v2: might_sleep() (Mika)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=109580
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190220145637.23503-1-chris@chris-wilson.co.uk
As time goes by, usage of generic ioctls such as drm_syncobj and
sync_file is on the increase, bypassing i915-specific ioctls like
GEM_WAIT. Currently, we only apply waitboosting to our driver ioctls as
we track the file/client and account the waitboosting to them. However,
since commit 7b92c1bd05 ("drm/i915: Avoid keeping waitboost active for
signaling threads"), we no longer have been applying the client
ratelimiting on waitboosts and so that information has only been used
for debug tracking.
Push the application of waitboosting down to the common
i915_request_wait, and apply it to all foreign fence waits as well.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Eero Tamminen <eero.t.tamminen@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190213092504.25709-1-chris@chris-wilson.co.uk
Previously, we were able to rely on the recursive properties of
struct_mutex to allow us to serialise revoking mmaps and reacquiring the
FENCE registers with them being clobbered over a global device reset.
I then proceeded to throw out the baby with the bath water in order to
pursue a struct_mutex-less reset.
Perusing LWN for alternative strategies, the dilemma on how to serialise
access to a global resource on one side was answered by
https://lwn.net/Articles/202847/ -- Sleepable RCU:
    int readside(void) {
            int idx;
            rcu_read_lock();
            if (nomoresrcu) {
                    rcu_read_unlock();
                    return -EINVAL;
            }
            idx = srcu_read_lock(&ss);
            rcu_read_unlock();
            /* SRCU read-side critical section. */
            srcu_read_unlock(&ss, idx);
            return 0;
    }

    void cleanup(void)
    {
            nomoresrcu = 1;
            synchronize_rcu();
            synchronize_srcu(&ss);
            cleanup_srcu_struct(&ss);
    }
No more worrying about stop_machine, just an uber-complex mutex,
optimised for reads, with the overhead pushed to the rare reset path.
However, we do run the risk of a deadlock as we allocate underneath the
SRCU read lock, and the allocation may require a GPU reset, causing a
dependency cycle via the in-flight requests. We resolve that by declaring
the driver wedged and cancelling all in-flight rendering.
v2: Use expedited rcu barriers to match our earlier timing
characteristics.
v3: Try to annotate locking contexts for sparse
v4: Reduce selftest lock duration to avoid a reset deadlock with fences
v5: s/srcu/reset_backoff_srcu/
v6: Remove more stale comments
Testcase: igt/gem_mmap_gtt/hang
Fixes: eb8d0f5af4 ("drm/i915: Remove GPU reset dependence on struct_mutex")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190208153708.20023-2-chris@chris-wilson.co.uk
Changing the i915_edp_psr_debug file was enabling, disabling or switching
the PSR version by directly calling intel_psr_disable_locked() and
intel_psr_enable_locked(), which is not the default PSR path that will
be executed by real users.
So lets force a fastset in the PSR CRTC to trigger a pipe update and
stress the default code path.
Recently a bug was found when switching from PSR2 to PSR1 while the
enable_psr kernel parameter was set to its default value; this change
fixes it, and also fixes the bug linked below where DRRS was left
enabled together with PSR when enabling PSR from debugfs.
v2: Handling missing case: disabled to PSR1
v3: Not duplicating the whole atomic state (Maarten)
v4: Adding back the missing call to intel_psr_irq_control (Dhinakaran)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108341
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190206211845.5322-1-jose.souza@intel.com
Split the color management hooks along the single vs. double
buffered registers line. Of the currently programmed registers
GAMMA_MODE and the ilk+ pipe CSC are double buffered, while the
LUTs and the CHV CGM block are single buffered.
The double buffered registers will be programmed during the
normal pipe update with evasion, and also during pipe enable
so that the settings will already be correct when the pipe
starts up before the planes are enabled.
The single buffered registers are currently programmed before
the vblank evade, which is totally wrong, but we'll correct
that later.
v2: Add some docs to explain the two vfuncs (Matt,Uma)
Rebase
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190205160848.24662-6-ville.syrjala@linux.intel.com
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Pass the crtc state etc. as const to the color management commit
functions. And while at it polish some of the local variables.
v2: Rebase
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Uma Shankar <uma.shankar@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190205160848.24662-4-ville.syrjala@linux.intel.com
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
First of all, GMCH can be considered a feature by itself
since it is a chip present in some platforms that connects
the IA processor to memory and other components in the PC.
Also with the introduction of display block at device info,
we got a redundant definition:
.display.has_gmch_display = 1,
So, let's clean up things a bit and use the standardized
way of has_feature on displays side.
No functional change and no manual interaction to generate
this patch.
It is only:
sed -si -e 's/has_gmch_display/has_gmch/g' \
-e 's/HAS_GMCH_DISPLAY/HAS_GMCH/g' drivers/gpu/drm/i915/*{c,h}
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190204222538.15842-1-rodrigo.vivi@intel.com
We want to expose the ability to reconfigure the slices, subslice and
eu per context and per engine. To facilitate that, store the current
configuration on the context for each engine, which is initially set
to the device default upon creation.
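The stored state ends up roughly as (per the struct of that time; the
context member name is illustrative):

    struct intel_sseu {
            u8 slice_mask;
            u8 subslice_mask;
            u8 min_eus_per_subslice;
            u8 max_eus_per_subslice;
    };

    /* Each per-engine intel_context then carries one of these, seeded
     * from the engine's device default at context creation:
     *      struct intel_sseu sseu;
     */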
v2: record sseu configuration per context & engine (Chris)
v3: introduce the i915_gem_context_sseu to store powergating
programming, sseu_dev_info has grown quite a bit (Lionel)
v4: rename i915_gem_sseu into intel_sseu (Chris)
use to_intel_context() (Chris)
v5: More to_intel_context() (Tvrtko)
Switch intel_sseu from union to struct (Tvrtko)
Move context default sseu in existing loop (Chris)
v6: s/intel_sseu_from_device_sseu/intel_device_default_sseu/ (Tvrtko)
Tvrtko Ursulin:
v7:
* Pass intel_sseu by pointer instead of value to make_rpcs.
* Rebase for make_rpcs changes.
v8:
* Rebase for RPCS edit on pin.
v9:
* Rebase for context image setup changes.
v10:
* Rename dev_priv to i915. (Chris Wilson)
v11:
* Rebase.
v12:
* Rebase for IS_GEN changes.
v13:
* Rebase for RUNTIME_INFO.
v14:
* Rebase for intel_context_init.
v15:
* Rebase for drm-tip changes.
v16:
* Moved struct intel_sseu definition to i915_gem_context.h.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190205095032.22673-1-tvrtko.ursulin@linux.intel.com
On icl+ bspec tells us to calculate a separate minimum ddb
allocation from the blocks watermark. Both have to be checked
against the actual ddb allocation, but since we do things the
other way around we'll just calculat the minimum acceptable
ddb allocation by taking the maximum of the two values.
We'll also replace the memcmp() with a full trawl over the
watermarks so that it'll ignore the min_ddb_alloc
because we can't directly read that out from the hw. I suppose
we could reconstruct it from the other values, but I was
too lazy to do that now.
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181221171436.8218-6-ville.syrjala@linux.intel.com
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
Now that we pin timelines around use, we have a clearly defined lifetime
and convenient points at which we can track only the active timelines.
This allows us to reduce the list iteration to only consider those
active timelines and not all.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190128181812.22804-6-chris@chris-wilson.co.uk
If we restrict ourselves to only using a cacheline for each timeline's
HWSP (we could go smaller, but want to avoid needlessly polluting
cachelines on different engines between different contexts), then we can
suballocate a single 4k page into 64 different timeline HWSPs. By
treating each fresh allocation as a slab of 64 entries, we can keep it
around for the next 64 allocation attempts until we need to refresh the
slab cache.
John Harrison noted the issue of fragmentation leading to the same worst
case performance of one page per timeline as before, which can be
mitigated by adopting a freelist.
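A sketch of the slab-of-64 idea (names are illustrative; the real
allocator also manages vma pinning and the freelist locking):

    #include <linux/bitops.h>
    #include <linux/list.h>

    #define CACHELINE_BYTES 64
    #define CACHELINES_PER_PAGE (PAGE_SIZE / CACHELINE_BYTES) /* 64 on 4k pages */

    struct hwsp_page {
            struct list_head link; /* on the free list while bits remain set */
            u64 free_bitmap;       /* one bit per still-available cacheline */
            void *vaddr;           /* kmap of the backing 4k page */
    };

    /* Carve one cacheline out of a page known to have a free slot. */
    static void *hwsp_alloc_cacheline(struct hwsp_page *p, int *cacheline)
    {
            int cl = __ffs64(p->free_bitmap);

            p->free_bitmap &= ~BIT_ULL(cl);
            *cacheline = cl;
            return p->vaddr + cl * CACHELINE_BYTES;
    }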
v2: Keep all partially allocated HWSP on a freelist
This is still without migration, so it is possible for the system to end
up with each timeline in its own page, but we ensure that no new
allocation would needlessly allocate a fresh page!
v3: Throw a selftest at the allocator to try and catch invalid cacheline
reuse.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: John Harrison <John.C.Harrison@Intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190128181812.22804-4-chris@chris-wilson.co.uk
Currently, the list of timelines is serialised by the struct_mutex, but
to alleviate difficulties with using that mutex in future, move the
list management under its own dedicated mutex.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190128102356.15037-5-chris@chris-wilson.co.uk
Now that the submission backends are controlled via their own spinlocks,
with a wave of a magic wand we can lift the struct_mutex requirement
around GPU reset. That is we allow the submission frontend (userspace)
to keep on submitting while we process the GPU reset as we can suspend
the backend independently.
The major change is around the backoff/handoff strategy for performing
the reset. With no mutex deadlock, we no longer have to coordinate with
any waiter, and just perform the reset immediately.
Testcase: igt/gem_mmap_gtt/hang # regresses
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190125132230.22221-3-chris@chris-wilson.co.uk
Mixed C99 and kernel types use is getting ugly. Prefer kernel types.
sed -i 's/\buint\(8\|16\|32\|64\)_t\b/u\1/g'
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Acked-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190118120125.15484-7-jani.nikula@intel.com
Since commit 93065ac753 ("mm, oom: distinguish blockable mode for mmu
notifiers") we have been able to report failure from
mmu_invalidate_range_start which allows us to use a trylock on the
struct_mutex to avoid potential recursion and report -EBUSY instead.
Furthermore, this allows us to pull the work into the main callback and
avoid the sleight-of-hand in using a workqueue to avoid lockdep.
However, not all paths to mmu_invalidate_range_start are prepared to
handle failure, so instead of reporting the recursion, deal with it by
propagating the failure upwards, who can decide themselves to handle it
or report it.
v2: Mark up the recursive lock behaviour and comment on the various weak
points.
v3: Follow commit 3824e41975 ("drm/i915: Use mutex_lock_killable() from
inside the shrinker") and also use mutex_lock_killable().
v3.1: No leak on EINTR.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108375
References: 93065ac753 ("mm, oom: distinguish blockable mode for mmu notifiers")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190115124442.3500-1-chris@chris-wilson.co.uk
As the GT_IRQ power domain implies a wakeref, we can use it in place of
our existing redundant rpm grab.
v2: Drop papering over forgetting to take the runtime wakeref in
selftests
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-20-chris@chris-wilson.co.uk
On module load and unload, we grab the POWER_DOMAIN_INIT powerwells and
transfer them to the runtime-pm code. We can use our wakeref tracking to
verify that the wakeref is indeed passed from init to enable, and
disable to fini; and across suspend.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-17-chris@chris-wilson.co.uk
The majority of runtime-pm operations are bounded and scoped within a
function; for these it is easy to verify that the wakerefs are handled
correctly. We can employ the compiler to help us, and reduce the number
of wakerefs tracked when debugging, by passing around cookies provided
by the various rpm_get functions to their rpm_put counterpart. This
makes the pairing explicit, and given the required wakeref cookie the
compiler can verify that we pass an initialised value to the rpm_put
(quite handy for double checking error paths).
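Roughly, the pattern becomes (a sketch only; the exact helper names
follow the runtime-pm API introduced over this series):

	intel_wakeref_t wakeref;

	wakeref = intel_runtime_pm_get(i915);
	/* ... touch the hardware ... */
	intel_runtime_pm_put(i915, wakeref);

An unpaired or uninitialised cookie then shows up as a compiler warning
rather than a silent wakeref leak.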
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-16-chris@chris-wilson.co.uk
Record the wakeref used for keeping the device awake as the GPU is
executing requests and be sure to cancel the tracking upon parking.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-3-chris@chris-wilson.co.uk
The majority of runtime-pm operations are bounded and scoped within a
function; for these it is easy to verify that the wakerefs are handled
correctly. We can employ the compiler to help us, and reduce the number
of wakerefs tracked when debugging, by passing around cookies provided
by the various rpm_get functions to their rpm_put counterpart. This
makes the pairing explicit, and given the required wakeref cookie the
compiler can verify that we pass an initialised value to the rpm_put
(quite handy for double checking error paths).
For regular builds, the compiler should be able to eliminate the unused
local variables and the program growth should be minimal. Fwiw, it came
out as a net improvement as gcc was able to refactor rpm_get and
rpm_get_if_in_use together.
v2: Just s/rpm_put/rpm_put_unchecked/ everywhere, leaving the manual
mark up for smaller more targeted patches.
v3: Mention the cookie in Returns
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-2-chris@chris-wilson.co.uk
Every time we take a wakeref, record the stack trace of where it was
taken; clearing the set if we ever drop back to no owners. For debugging
a rpm leak, we can look at all the current wakerefs and check if they
have a matching rpm_put.
v2: Use skip=0 for unwinding the stack as it appears our noinline
function doesn't appear on the stack (nor does save_stack_trace itself!)
v3: Allow rpm->debug_count to disappear between inspections and so
avoid calling krealloc(0) as that may return a ZERO_PTR not NULL! (Mika)
v4: Show who last acquire/released the runtime pm
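A heavily simplified sketch of the capture step (the helper name and the
exact stack depot usage are illustrative, not the final code):

	static depot_stack_handle_t __track_wakeref(void)
	{
		unsigned long entries[16];
		struct stack_trace trace = {
			.entries = entries,
			.max_entries = ARRAY_SIZE(entries),
			.skip = 0, /* the noinline caller may not be on the stack */
		};

		save_stack_trace(&trace);
		return depot_save_stack(&trace, GFP_NOWAIT | __GFP_NOWARN);
	}

Each returned handle is appended to the rpm debug array under a spinlock,
and the whole set is freed once the wakeref count drops back to zero.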
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Tested-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190114142129.24398-1-chris@chris-wilson.co.uk
Needs just a few additional includes here and there.
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Laurent Pinchart <laurent.pinchart@ideasonboard.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190108082709.3748-1-jani.nikula@intel.com
Ignore trying to shrink from i915 if we fail to acquire the struct_mutex
in the shrinker while performing direct-reclaim. The trade-off being
(much) lower latency for non-i915 clients at an increased risk of being
unable to obtain a page from direct-reclaim without hitting the
oom-notifier. The proviso being that we still keep trying hard to
obtain the lock for kswapd so that we can reap under heavy memory
pressure.
v2: Taint all mutexes taken within the shrinker with the struct_mutex
subclass as an early warning system, and drop I915_SHRINK_ACTIVE from
vmap to reduce the number of dangerous paths. We also have to drop
I915_SHRINK_ACTIVE from oom-notifier to be able to make the same claim
that ACTIVE is only used from outside context, which fits in with a
longer strategy of avoiding stalls due to scanning active during
shrinking.
The danger in using the subclass struct_mutex is that we declare
ourselves more knowledgeable than lockdep and deprive ourselves of
automatic coverage. Instead, we require ourselves to mark up any mutex
taken inside the shrinker in order to detect lock-inversion, and if we
miss any we are doomed to a deadlock at the worst possible moment.
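In rough outline the resulting locking policy is (a sketch with
illustrative names; the subclass constant stands in for the struct_mutex
subclass described above):

	if (!mutex_trylock(&i915->drm.struct_mutex)) {
		/* direct-reclaim: bail out quickly rather than stall clients */
		if (!current_is_kswapd())
			return SHRINK_STOP;

		/* kswapd: we really are under pressure, wait for the lock */
		mutex_lock_nested(&i915->drm.struct_mutex, I915_MM_SHRINKER);
	}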
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190107115509.12523-1-chris@chris-wilson.co.uk
Now that we have eliminated the CPU-side irq_seqno_barrier by moving the
delays on the GPU before emitting the MI_USER_INTERRUPT, we can remove
the engine->irq_seqno_barrier infrastructure. Though intentionally
slowing down the GPU is nasty, so is the code we can now remove!
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181228171641.16531-6-chris@chris-wilson.co.uk
The writing is on the wall for the existence of a single execution queue
along each engine, and as a consequence we will not be able to track
dependencies along the HW queue itself, i.e. we will not be able to use
HW semaphores on gen7 as they use a global set of registers (and unlike
gen8+ we can not effectively target memory to keep per-context seqno and
dependencies).
On the positive side, when we implement request reordering for gen7 we
also can not presume a simple execution queue and would also require
removing the current semaphore generation code. So this brings us another
step closer to request reordering for ringbuffer submission!
The negative side is that using interrupts to drive inter-engine
synchronisation is much slower (4us -> 15us to do a nop on each of the 3
engines on ivb). This is much better than it was at the time of introducing
the HW semaphores, and, equally important, userspace has weaned itself off
intermixing dependent BLT/RENDER operations (the prime culprit was glyph
rendering in UXA). So while we regress the microbenchmarks, it should not
impact the user.
References: https://bugs.freedesktop.org/show_bug.cgi?id=108888
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181228140736.32606-2-chris@chris-wilson.co.uk
After we found a workaround for a hang on context load, Ben Widawsky
found confirmation that it was for an issue with waking from rc6 and
loading a context image.
The workaround from on high suggests that we should
I915_WRITE(RING_WAIT_FOR_RC6_EXIT(engine->mmio_base),
_MASKED_FIELD(RING_RC6_SEL_WRITE_ADDR_MASK,
RING_RC6_SEL_WRITE_ADDR_UPPER_LEFT));
in our rc6 setup for Haswell GT1, but on applying that we find instead
that the machine encounters a GT forcewake error and locks up.
As we are removing HW semaphore usage in the next patch, and the
suggested workaround is no improvement, we need to
decouple the PSMI workaround from HAS_SEMAPHORES to IS_HSW_GT1.
References: 2c55018347 ("drm/i915: Disable PSMI sleep messages on all rings around context switches")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181228140736.32606-1-chris@chris-wilson.co.uk
This is needed by the next patch to determine if a DDI TypeC port is
physically wired to a legacy DP or legacy HDMI connector or if the port
is wired to a USB-C/Thunderbolt connector.
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181214182703.18865-3-imre.deak@intel.com
Define IS_GEN() similarly to our IS_GEN_RANGE(), but use gen instead of
gen_mask to do the comparison. Now callers can pass the gen as a parameter,
so we don't require one macro for each gen.
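Modulo compile-time checks, the new macro boils down to something like
(sketch):

	#define IS_GEN(dev_priv, n)	(INTEL_INFO(dev_priv)->gen == (n))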
The following spatch was used to convert the users of these macros:
@@
expression e;
@@
(
- IS_GEN2(e)
+ IS_GEN(e, 2)
|
- IS_GEN3(e)
+ IS_GEN(e, 3)
|
- IS_GEN4(e)
+ IS_GEN(e, 4)
|
- IS_GEN5(e)
+ IS_GEN(e, 5)
|
- IS_GEN6(e)
+ IS_GEN(e, 6)
|
- IS_GEN7(e)
+ IS_GEN(e, 7)
|
- IS_GEN8(e)
+ IS_GEN(e, 8)
|
- IS_GEN9(e)
+ IS_GEN(e, 9)
|
- IS_GEN10(e)
+ IS_GEN(e, 10)
|
- IS_GEN11(e)
+ IS_GEN(e, 11)
)
v2: use IS_GEN rather than GT_GEN and compare to info.gen rather than
using the bitmask
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181212181044.15886-2-lucas.demarchi@intel.com
RANGE makes it longer, but clearer. We are also going to add a macro to
check an individual gen, so add the _RANGE suffix here.
Diff generated with:
sed 's/IS_GEN(/IS_GEN_RANGE(/g' drivers/gpu/drm/i915/{*/,}*.{c,h} -i
v2: use IS_GEN rather than GT_GEN
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181212181044.15886-1-lucas.demarchi@intel.com
Try to be more consistent about intel_* types rather than drm_* types
for lower-level driver functions. While we're at it, let's also be more
consistent with state variable naming (half of the platforms use the
name 'state' whereas the other half used 'crtc_state').
While we're touching these variables, let's also be more consistent
about always naming the intel_crtc_state's "crtc_state" rather than
"state" so that different platform types aren't using different naming
conventions.
v2:
- s/state/crtc_state/ for consistency between platform types (Ville)
- Drop the crtc parameter to intel_color_check(); we can just pull that
out of the state object.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181210215415.19854-2-matthew.d.roper@intel.com
Try to be more consistent about intel_* types rather than drm_* types
for lower-level driver functions.
v2:
- Also drop the intel_crtc parameter from compute_intermediate_wm()
since we can just extract it from the crtc_state parameter. (Ville)
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181210215415.19854-1-matthew.d.roper@intel.com
According to the eDP spec, the sink can require a specific selective
update granularity that the source must comply with.
Here we cache the value if required and check whether the source
supports it.
v3:
- Returning the default granularity in case DPCD read fails(Dhinakaran)
- Changed DPCD error message level(Dhinakaran)
v4:
- Setting granularity to default when the granularity read is equal to
0 (Dhinakaran)
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181204003403.23361-9-jose.souza@intel.com
Currently we allocate a scratch page for each engine, but since we only
ever write into it for post-sync operations, it is not exposed to
userspace nor do we care for coherency. As we then do not care about its
contents, we can use one page for all, reducing our allocations and
avoid complications by not assuming per-engine isolation.
For later use, it simplifies engine initialisation (by removing the
allocation that required struct_mutex!) and means that we can always rely
on there being a scratch page.
v2: Check that we allocated a large enough scratch for I830 w/a
Fixes: 06e562e7f515 ("drm/i915/ringbuffer: Delay after EMIT_INVALIDATE for gen4/gen5") # v4.18.20
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108850
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Reviewed-by: Mika Kuoppala <mika.kuoppala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181204141522.13640-1-chris@chris-wilson.co.uk
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: <stable@vger.kernel.org> # v4.18.20+
Convert the per context workaround handling code to run against the newly
introduced common workaround framework and fuse the two to use the
existing smarter list add helper, the one which does the sorted insert and
merges registers where possible.
This completes migration of all four classes of workarounds onto the
common framework.
Existing macros are kept untouched for smaller code churn.
v2:
* Rename the list to ctx_wa_list and move from dev_priv to engine.
v3:
* API rename and parameters tweaking. (Chris Wilson)
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20181203133357.10341-1-tvrtko.ursulin@linux.intel.com
To enable later verification of GT workaround state at various stages of
driver lifetime, we record the list of applicable ones per platform to a
list, from which they are also applied.
The added data structure is a simple array of register, mask and value
items, which is allocated on demand as workarounds are added to the list.
This is a temporary implementation which later in the series gets fused
with the existing per context workaround list handling. It is separated at
this stage since the following patch fixes a bug which needs to be as easy
to backport as possible.
Also, since in the following patch we will be adding a new class of
workarounds (per engine) which can be applied from interrupt context, we
straight away make the provision for safe read-modify-write cycle.
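The list is roughly of this shape (a sketch; only the essential fields
are shown):

	struct i915_wa {
		i915_reg_t	reg;
		u32		mask;
		u32		val;
	};

	struct i915_wa_list {
		struct i915_wa	*list;
		unsigned int	count;
	};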
v2:
* Change dev_priv to i915 along the init path. (Chris Wilson)
* API rename. (Chris Wilson)
v3:
* Remove explicit list size tracking in favour of growing the allocation
in power of two chunks. (Chris Wilson)
v4:
Chris Wilson:
* Change wa_list_finish to early return.
* Copy workarounds using the compiler for static checking.
* Do not bother zeroing unused entries.
* Re-order struct i915_wa_list.
v5:
* kmalloc_array.
* Whitespace cleanup.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20181203133319.10174-1-tvrtko.ursulin@linux.intel.com
This helps separate which capabilities are display capabilities.
v3: Moving display struct right after flags (Lucas)
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Suggested-by: Jani Nikula <jani.nikula@linux.intel.com>
Suggested-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181130232048.14216-2-jose.souza@intel.com
Right now whether a GEN has display is decided by checking num_pipes,
so let's make it explicit and use a macro.
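Roughly (sketch), the macro just wraps the existing num_pipes check:

	#define HAS_DISPLAY(dev_priv)	(INTEL_INFO(dev_priv)->num_pipes > 0)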
Cc: Jani Nikula <jani.nikula@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181130232048.14216-1-jose.souza@intel.com
As stated in struct drm_encoder, the crtc field should only be used
by non-atomic drivers.
So here we cache the pipe id in intel_psr_enable(), which is much
simpler and more efficient than taking the
drm.mode_config.connection_mutex lock on every call to
intel_psr_flush()/invalidate() just to safely read the pipe id from
drm_connector_state.crtc.
This should fix the null pointer dereference crash below as the
previous way to get the pipe id was prone to race conditions.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105959
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181128072838.22773-1-jose.souza@intel.com
1. Disable Left/right VDSC branch in DSS Ctrl reg
depending on the number of VDSC engines being used
2. Disable joiner in DSS Ctrl reg
v4:
* Remove encoder, make crtc_state const (Ville)
v3 (From Manasi):
* Add Disable PG2 for VDSC on eDP
v2 (From Manasi):
* Use old_crtc_state to find dsc params
* Add a condition to disable only if
dsc state compression is enabled
* Use correct DSS CTL regs
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Signed-off-by: Gaurav K Singh <gaurav.k.singh@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181128202628.20238-12-manasi.d.navare@intel.com
After the encoder->pre_enable() hook, once the link training sequence has
completed, the PPS registers for the DSC encoder are configured using the
DSC state parameters in intel_crtc_state as part of the DSC enabling
routine in the source. The DSC enabling routine is called after
encoder->pre_enable(), before enabling the pipe and after
compression is enabled on the sink.
v7:
* Remove unnecessary comments, leftovers (Ville)
* No need for explicit val &= ~ (Ville)
v6:
intel_dsc_enable to be part of pre_enable hook (Ville)
v5:
* make crtc_state const (Ville)
v4:
* Use cpu_transcoder instead of encoder->type for using EDP transcoder
DSC registers(Ville)
* Keep all PSS regs together (Anusha)
v3:
* Configure Pic_width/2 for each VDSC engine when two VDSC engines per pipe
are used (Manasi)
* Add DSC slice_row_per_frame in PPS16 (Manasi)
v2:
* Enable PG2 power well for VDSC on eDP
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
[manasi: fixup the line longer than 100 chars while applying]
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181128202628.20238-8-manasi.d.navare@intel.com
Basic DSC parameters and DSC configuration data need to be computed
for each requested mode during atomic check. This is
required since for certain modes, valid DSC parameters and config
data might not be computed in which case compression cannot be
enabled for that mode.
For that reason we need to add these params and config structure
to the intel_crtc_state so that if valid this state information
can directly be used while enabling DSC in atomic commit.
v2:
* Rebase on drm-tip (Manasi)
Cc: Gaurav K Singh <gaurav.k.singh@intel.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: Anusha Srivatsa <anusha.srivatsa@intel.com>
Signed-off-by: Manasi Navare <manasi.d.navare@intel.com>
Reviewed-by: Anusha Srivatsa <anusha.srivatsa@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181128202628.20238-1-manasi.d.navare@intel.com
On SKL+ the plane WM/BUF_CFG registers are a proper part of each
plane's register set. That means accessing them will cancel any
pending plane update, and we would need a PLANE_SURF register write
to arm the wm/ddb change as well.
To avoid all the problems with that let's just move the wm/ddb
programming into the plane update/disable hooks. Now all plane
registers get written in one (hopefully atomic) operation.
To make that feasible we'll move the plane ddb tracking into
the crtc state. Watermarks were already tracked there.
v2: Rebase due to input CSC
v3: Split out a bunch of junk (Matt)
v4: Add skl_wm_add_affected_planes() to deal with
cursor special case and non-zero wm register reset value
v5: Drop the unrelated for_each_intel_plane_mask() fix (Matt)
Remove the redundant ddb memset() (Matt)
Cc: Matt Roper <matthew.d.roper@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> #v3
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181127165900.31298-1-ville.syrjala@linux.intel.com
While PSR is active the hardware will do aux transactions by itself to
wake up the sink to receive a new frame when necessary. If that
transaction is not acked by the sink, the hardware will trigger this
interrupt.
So let's disable PSR, as this is a hint that there is a problem with
this sink.
The removed FIXME was asking to manually train the link, but we don't
need to do that: per spec the sink should do a short pulse when it is
out of sync with the source. We just need to make sure it is awake, and
the SDP header with PSR inactive set will trigger the short pulse
with an error set in the link status.
v3: added workaround to fix scheduled work starvation caused by
too frequent PSR error interrupts
v4: only setting irq_aux_error as we don't care about clearing it, and
consequently not using dev_priv->irq_lock.
v5: rebased: using edp_psr_shift()
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181121225441.18785-4-jose.souza@intel.com
When we detect an error and disable PSR, it is kept disabled until the
next modeset, but as the sink has already shown signs that it does not
work properly with PSR, let's disable it for good to avoid any
additional flickering.
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181121225441.18785-3-jose.souza@intel.com
VBT appears to have two (or possibly three) ways to indicate the panel
rotation. The first is in the MIPI config block, but that apparently
usually (maybe always?) indicates 0 degrees despite the actual panel
orientation. The second way to indicate this is in the general features
block, which can just indicate whether 180 degree rotation is used.
The third might be a separate rotation data block, but that is not
at all documented so who knows what it may contain.
Let's try the first two. We first try the DSI specific VBT
information, and if it doesn't look trustworthy (ie. indicates
0 degrees) we fall back to the 180 degree thing. Just to avoid too
many changes in one go we shall also keep the hardware readout path
for now.
If this works for more than just my VLV FFRD the question becomes
how many of the panel orientation quirks are now redundant?
v2: Move the code into intel_dsi.c (Jani)
Cc: Hans de Goede <hdegoede@redhat.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181022142015.4026-1-ville.syrjala@linux.intel.com
Tested-by: Hans de Goede <hdegoede@redhat.com>
This reduces the size of struct skl_wm_level from 6 to 4, which
reduces the size of struct skl_plane_wm from 104 to 70, which reduces
the size of struct skl_pipe_wm from 524 to 356. A reduction of 168
padding bytes per pipe. The savings will increase even more the next time
we bump I915_MAX_PLANES.
v2: Paste the pahole output provided by Lucas:
$ pahole -s -C skl_wm_level drivers/gpu/drm/i915/i915.o
struct skl_wm_level {
bool plane_en; /* 0 1 */
/* XXX 1 byte hole, try to pack */
uint16_t plane_res_b; /* 2 2 */
uint8_t plane_res_l; /* 4 1 */
/* size: 6, cachelines: 1, members: 3 */
/* sum members: 4, holes: 1, sum holes: 1 */
/* padding: 1 */
/* last cacheline: 6 bytes */
};
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Signed-off-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181016220133.26991-3-paulo.r.zanoni@intel.com
Similarly to the GEN9_LP DPIO PHY code keep the CNL+ combo PHY code in a
separate file.
No functional change.
v2:
- Use SPDX license tag instead of boilerplate. (Rodrigo)
v3:
- Use MIT instead of GPL-2.0 license. (Ville)
Suggested-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181106160621.23057-3-imre.deak@intel.com
Unfortunately, it seems that the HPD IRQ storm problem from the early
days of Intel GPUs was never entirely solved, only mostly. Within the
last couple of days, I got a bug report from one of our customers who
had been having issues with their machine suddenly booting up very
slowly after having updated. The amount of time it took to boot went
from around 30 seconds, to over 6 minutes consistently.
After some investigation, I discovered that i915 was reporting massive
amounts of short HPD IRQ spam on this system from the DisplayPort port,
despite there not being anything actually connected. The symptoms would
start with one "long" HPD IRQ being detected at boot:
[ 1.891398] [drm:intel_get_hpd_pins [i915]] hotplug event received, stat 0x00440000, dig 0x00440000, pins 0x000000a0
[ 1.891436] [drm:intel_hpd_irq_handler [i915]] digital hpd port B - long
[ 1.891472] [drm:intel_hpd_irq_handler [i915]] Received HPD interrupt on PIN 5 - cnt: 0
[ 1.891508] [drm:intel_hpd_irq_handler [i915]] digital hpd port D - long
[ 1.891544] [drm:intel_hpd_irq_handler [i915]] Received HPD interrupt on PIN 7 - cnt: 0
[ 1.891592] [drm:intel_dp_hpd_pulse [i915]] got hpd irq on port B - long
[ 1.891628] [drm:intel_dp_hpd_pulse [i915]] got hpd irq on port D - long
…
followed by constant short IRQs afterwards:
[ 1.895091] [drm:intel_encoder_hotplug [i915]] [CONNECTOR:66:DP-1] status updated from unknown to disconnected
[ 1.895129] [drm:i915_hotplug_work_func [i915]] Connector DP-3 (pin 7) received hotplug event.
[ 1.895165] [drm:intel_dp_detect [i915]] [CONNECTOR:72:DP-3]
[ 1.895275] [drm:intel_get_hpd_pins [i915]] hotplug event received, stat 0x00200000, dig 0x00200000, pins 0x00000080
[ 1.895312] [drm:intel_hpd_irq_handler [i915]] digital hpd port D - short
[ 1.895762] [drm:intel_get_hpd_pins [i915]] hotplug event received, stat 0x00200000, dig 0x00200000, pins 0x00000080
[ 1.895799] [drm:intel_hpd_irq_handler [i915]] digital hpd port D - short
[ 1.896239] [drm:intel_dp_aux_xfer [i915]] dp_aux_ch timeout status 0x71450085
[ 1.896293] [drm:intel_get_hpd_pins [i915]] hotplug event received, stat 0x00200000, dig 0x00200000, pins 0x00000080
[ 1.896330] [drm:intel_hpd_irq_handler [i915]] digital hpd port D - short
[ 1.896781] [drm:intel_get_hpd_pins [i915]] hotplug event received, stat 0x00200000, dig 0x00200000, pins 0x00000080
[ 1.896817] [drm:intel_hpd_irq_handler [i915]] digital hpd port D - short
[ 1.897275] [drm:intel_get_hpd_pins [i915]] hotplug event received, stat 0x00200000, dig 0x00200000, pins 0x00000080
The customer's system in question has a GM45 GPU, which is apparently
well known for hotplugging storms.
So, work around this impressively broken hardware by changing the default
HPD storm threshold from 5 to 50. Then, make long IRQs count for 10, and
short IRQs count for 1. This makes it so that 5 long IRQs will trigger
an HPD storm, and on systems with short HPD storm detection 50 short
IRQs will trigger an HPD storm. 50 short IRQs amounts to 100ms of
constant pulsing, which seems like a good middle ground between being too
sensitive and not being sensitive enough (which would cause visible
stutters in userspace every time a storm occurs).
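The accounting then amounts to something like (sketch; the struct and
helper names are illustrative):

	static bool hpd_pin_storm_detect(struct hpd_pin_storm *storm,
					 bool long_hpd)
	{
		storm->count += long_hpd ? 10 : 1;	/* long IRQs weigh 10x */
		return storm->count >= 50;		/* default threshold */
	}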
And just to be extra safe: we don't enable this by default on systems
with MST support. There's too high of a chance of MST support triggering
storm detection, and systems that are new enough to support MST are a
lot less likely to have issues with IRQ storms anyway.
As a note: this patch was tested using a ThinkPad T450s and a Chamelium
to simulate the short IRQ storms.
Changes since v1:
- Don't use two separate thresholds, just make long IRQs count for 10
each and short IRQs count for 1. This simplifies the code a bit
- Ville Syrjälä
Changes since v2:
- Document @long_hpd in intel_hpd_irq_storm_detect, no functional
changes
Changes since v4:
- Remove !! in long_hpd assignment - Ville Syrjälä
- queue_hp = true - Ville Syrjälä
Signed-off-by: Lyude Paul <lyude@redhat.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181106213017.14563-6-lyude@redhat.com
Most of the AUX_CH_CTL flags are concerned with DP AUX transfer
parameters. As opposed to this the flag specifying the thunderbolt vs.
non-thunderbolt mode of the port is not related to AUX transfers at all
(rather it's repurposed to enable either TBT or non-TBT PHY HW blocks).
The programming has to be done before enabling the corresponding AUX
power well, so make it part of the power well code.
v3:
- Use existing enable/disable helpers instead of opencoding. (Jose)
- Fix type of is_tc_tbt to remain a bitfield. (Lucas)
- Add comment describing the is_tc_tbt power well flag. (Lucas)
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=108548
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Lucas De Marchi <lucas.demarchi@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181101140427.31026-8-imre.deak@intel.com
From ICL onwards all the DDI/TypeC ports - even working in HDMI mode -
need to know their corresponding AUX channel, so move the corresponding
helper to a common place.
No functional change.
v4:
- Fix 'no space is necessary after a cast' checkpatch warn.
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Cc: José Roberto de Souza <jose.souza@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181101140427.31026-2-imre.deak@intel.com
commit ac657f6461 ("drm/i915: Introduce IS_GEN macro") introduced
GEN_FOREVER that was never used.
My first attempt was to rename it to FOREVER since GEN is
already part of the macro. Then I used coccinelle to change all
-INTEL_GEN(e1) >= e2
+INTEL_GEN_RANGE(e1, e2, FOREVER)
-INTEL_GEN(e1) <= e2
+INTEL_GEN_RANGE(e1, 0, e2)
and I liked it.
However I didn't like very much the remaining
INTEL_GEN(dev_priv) < n
and:
INTEL_GEN(e1) < n
INTEL_GEN_RANGE(e1, 0, n - 1)
didn't make much sense either.
So INTEL_GEN use for > or < seems a better unified way for unlimited
bounds. So, no reason to keep GEN_FOREVER here.
Let's kill it before someone starts using it.
v2: Remove remaining GEN_FOREVER forgotten in a comment. (Daniel)
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20181026195143.20353-2-rodrigo.vivi@intel.com
The 16Gb DIMM w/a is not applicable to BXT or GLK. Limit it to
the appropriate platforms.
This was especially harsh on GLK since we don't even try to read
the DIMM information on that platform, hence valid_dimm was
always false and thus we always tried to apply the w/a.
Furthermore the w/a pushed the level 0 latency above the
level 1 latency, which doesn't really make sense.
v2: Do the check when populating is_16gb_dimm (Mahesh)
Cc: Mahesh Kumar <mahesh1.kumar@intel.com>
Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Fixes: 86b592876c ("drm/i915: Implement 16GB dimm wa for latency level-0")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181023182102.31549-1-ville.syrjala@linux.intel.com
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Reviewed-by: Mahesh Kumar <mahesh1.sh.kumar@gmail.com>
The way our hardware is designed doesn't seem to let us use the
MI_RECORD_PERF_COUNT command without setting up a circular buffer.
In the case where the user didn't request OA reports to be available
through the i915 perf stream, we can set the OA buffer to the minimum
size to avoid consuming memory which won't be used by the driver.
v2: Simplify oa buffer size exponent selection (Chris)
Reuse vma size field (Lionel)
v3: Restrict size opening parameter to values supported by HW (Chris)
v4: Drop out of date comment (Matt)
Add debug message when buffer size is rejected (Matt)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181023100707.31738-5-lionel.g.landwerlin@intel.com
We initialize the OA buffer every time we enable the OA unit (first call in
gen[78]_oa_enable), so we don't need to initialize it when preparing the
metric set.
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181023100707.31738-3-lionel.g.landwerlin@intel.com
Now that we have intel_connector.c, move the connector specific
functions from intel_display.c there. Fix a few checkpatch complaints
while at it. No functional changes.
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181010075205.7713-2-jani.nikula@intel.com
ALPM is a requirement, so we don't need to keep its value cached; this
was addressed in commit 97c9de66ca
("drm/i915/psr: Fix ALPM cap check for PSR2"), but alpm was not
removed from i915_psr.
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Signed-off-by: José Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181003205031.32474-7-jose.souza@intel.com
Following patch "drm/i915/aml: Introducing Amber Lake platform"
(e364672477), add a new macro for AML ULX GT2 devices.
Cc: Jani Nikula <jani.nikula@intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Jose Roberto de Souza <jose.souza@intel.com>
Signed-off-by: Lee, Shawn C <shawn.c.lee@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/1538034499-31256-1-git-send-email-shawn.c.lee@intel.com
Add plane alpha blending support with the different blend modes.
This has been tested on an icl to show the correct results;
on earlier platforms small rounding errors cause issues, but that
already happens with fully transparent or fully opaque RGB8888
fbs.
The recommended HW workaround is to disable alpha blending when the
plane alpha is 0 (transparent, hide plane) or 0xff (opaque, disable blending).
This is easy to implement on any platform, so just do that.
The tests for userspace are also available, and pass on gen11.
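The per-plane decision is then roughly (sketch; only
PLANE_KEYMSK_ALPHA_ENABLE comes from the changelog below, the helper name
is illustrative):

	static u32 skl_plane_alpha_keymsk(const struct drm_plane_state *state)
	{
		u16 alpha = state->alpha;	/* 16-bit plane alpha property */

		if (alpha == 0)
			return 0;	/* transparent: the plane is hidden instead */
		if (alpha < 0xff00)
			return PLANE_KEYMSK_ALPHA_ENABLE; /* blend with plane alpha */
		return 0;	/* effectively opaque: keep blending disabled */
	}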
Changes since v1:
- Change mistaken < 0xff0 to 0xff00.
- Only set PLANE_KEYMSK_ALPHA_ENABLE when plane alpha < 0xff00, ignore blend mode.
- Rework disabling FBC when per pixel alpha is used.
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
[mlankhorst: Change MISSING_CASE default to explicit alpha disable (mattrope)]
Link: https://patchwork.freedesktop.org/patch/msgid/20180815103405.22679-1-maarten.lankhorst@linux.intel.com
Reviewed-by: Matt Roper <matthew.d.roper@intel.com>
In the next few patches, we will want to give a small priority boost to
some requests/queues but not so much that we perturb the user controlled
order. As such we will shift the user priority bits higher leaving
ourselves a few low priority bits for our internal bumping.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20181001123204.23982-3-chris@chris-wilson.co.uk
Now that we are confident in providing full-ppgtt where supported,
remove the ability to override the context isolation.
v2: Remove faked aliasing-ppgtt for testing as it no longer is accepted.
v3: s/USES/HAS/ to match usage and reject attempts to load the module on
old GVT-g setups that do not provide support for full-ppgtt.
v4: Insulate ABI ppGTT values from our internal enum (later plans
involve moving ppGTT depth out of the enum, thus potentially breaking
ABI unless we document the current values).
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Matthew Auld <matthew.auld@intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Acked-by: Zhi Wang <zhi.a.wang@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180926201222.5643-1-chris@chris-wilson.co.uk
Partial views are small but there can be many of them, and since the sg
list space for them is allocated pessimistically, we can save some slab by
trimming the unused tail entries.
Signed-off-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180926080353.20867-1-tvrtko.ursulin@linux.intel.com
Move max firmware size to the same if ladder with firmware name and
required version. This allows us to detect the missing max size for a
platform without actually loading the firmware, and makes the whole
thing easier to maintain.
We need to move the power get earlier to allow for early return in the
missing platform case. While at it, extend the comment on why we return
with the reference held on errors.
We also need to move the module parameter override later to reuse the
max firmware size, which is independent of the override.
v2: Add comment on why we leak the wakeref on errors (Chris)
v3: Rebase
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180926133414.22073-2-jani.nikula@intel.com
It really wants dev_priv anyway, also now matches i915_gem_init_stolen.
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180920142707.19659-2-matthew.auld@intel.com
That we use a WB mapping for updating the RING_TAIL register inside the
context image even on !llc machines has been a source of consternation
for every reader. It appears to work on bsw+, but it may just have been
that we have been incredibly bad at detecting the errors.
v2: With extra enthusiasm.
v3: Drop force of map type for pinned default_state as by the time we
pin it, the map type is always WB and doesn't conflict with the earlier
use by ce->state.
v4: Transfer engine->default_state from MAP_WC to MAP_WB on creation so
we do not need the MAP_FORCE littered around the backends
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180914123504.2062-3-chris@chris-wilson.co.uk
Merge tag 'drm-misc-next-2018-09-13' of git://anongit.freedesktop.org/drm/drm-misc into drm-next
drm-misc-next for 4.20:
UAPI Changes:
- Add host endian variants for the most common formats (Gerd)
- Fail ADDFB2 for big-endian drivers that don't advertise BE quirk (Gerd)
- clear smem_start in fbdev for drm drivers to avoid leaking fb addr (Daniel)
Cross-subsystem Changes:
Core Changes:
- fix drm_mode_addfb() on big endian machines (Gerd)
- add timeline point to syncobj find+replace (Chunming)
- more drmP.h removal effort (Daniel)
- split uapi portions of drm_atomic.c into drm_atomic_uapi.c (Daniel)
Driver Changes:
- bochs: Convert open-coded portions to use helpers (Peter)
- vkms: Add cursor support (Haneen)
- udmabuf: Lots of fixups (mostly cosmetic afaict) (Gerd)
- qxl: Convert to use fbdev helper (Peter)
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Chunming Zhou <david1.zhou@amd.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Peter Wu <peter@lekensteyn.nl>
Cc: Haneen Mohammed <hamohammed.sa@gmail.com>
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Sean Paul <sean@poorly.run>
Link: https://patchwork.freedesktop.org/patch/msgid/20180913130254.GA156437@art_vandelay
IPC may cause underflows if not used with a dual channel symmetric
memory configuration. Disable IPC for non-symmetric configurations on
affected platforms.
Display WA #1141
Changes Since V1:
- Re-arrange the code.
- update wrapper to return if memory is symmetric (Rodrigo)
Signed-off-by: Mahesh Kumar <mahesh1.kumar@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180824093225.12598-6-mahesh1.kumar@intel.com
Memory with 16GB DIMMs requires an increase of 1us in the level-0 latency.
This patch implements that.
Bspec: 4381
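The adjustment itself is tiny, roughly (sketch; field names follow the
dram_info rename in the changelog below):

	/* 16GB DIMMs need an extra 1us of level-0 latency (Bspec: 4381) */
	if (dev_priv->dram_info.is_16gb_dimm)
		wm[0] += 1;	/* latencies are stored in usec */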
changes since V1:
- s/memdev_info/dram_info
- make skl_is_16gb_dimm pure function
Changes since V2:
- make is_16gb_dimm more generic
- rebase
Changes since V3:
- Simplify condition (Maarten)
Signed-off-by: Mahesh Kumar <mahesh1.kumar@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180831110942.9234-1-mahesh1.kumar@intel.com
This patch adds support to decode system memory bandwidth and other
parameters for skylake and Gen9+ platforms, which will be used for
arbitrated display memory bandwidth calculation in GEN9 based
platforms and WM latency level-0 Work-around calculation on GEN9+.
Changes Since V1:
- s/memdev_info/dram_info
- create a struct to hold channel info
Changes Since V2:
- rewrite code to adhere i915 coding style
- not valid for GLK
Signed-off-by: Mahesh Kumar <mahesh1.kumar@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180824093225.12598-3-mahesh1.kumar@intel.com
This patch adds support to decode system memory bandwidth and other
parameters for broxton platform, which will be used for arbitrated
display memory bandwidth calculation in GEN9 based platforms and
WM latency level-0 Work-around calculation on GEN9+ platforms.
Changes since V1:
- s/memdev_info/dram_info
Changes since V2:
- Adhere to i915 coding style (Rodrigo)
Signed-off-by: Mahesh Kumar <mahesh1.kumar@intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180824093225.12598-2-mahesh1.kumar@intel.com
Use I915_GTT_PAGE_SIZE when talking about GTT pages rather than
physical pages.
There are some PAGE_SHIFTs left though. Not sure if we want to
introduce I915_GTT_PAGE_SHIFT or what?
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk> # at least some of it :)
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180913150405.706-1-ville.syrjala@linux.intel.com
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
We have a bunch of neat little macros all over the place which should
move to kernel.h. But some of them died in bikesheds on lkml, and we
need a decent home for them.
Start out by moving the for_each_if macro there.
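The macro in question is just a one-line conditional wrapper, roughly:

	#define for_each_if(condition)	if (!(condition)) {} else

where the {} else form avoids surprising dangling-else pairing when the
macro is nested inside other if statements.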
v2: Rename to drm_util.h instead (Dave&Sean)
Cc: Sean Paul <seanpaul@chromium.org>
Acked-by: Sean Paul <seanpaul@chromium.org>
Cc: Dave Airlie <airlied@gmail.com>
Acked-by: Dave Airlie <airlied@gmail.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180905135711.28370-1-daniel.vetter@ffwll.ch
Future gen reduce the number of bits we will have available to
differentiate between contexts, so reduce the lifetime of the ID
assignment from that of the context to its current active cycle (i.e.
only while it is pinned for use by the HW, will it have a constant ID).
This means that instead of a max of 2k allocated contexts (worst case
before fun with bit twiddling), we instead have a limit of 2k in flight
contexts (minus a few that have been pinned by the kernel or by perf).
To reduce the number of context ids we require, we allocate a context id
on first use and mark it as pinned for as long as the GEM context itself is,
that is we keep it pinned while it is active on each engine. If we exhaust
our context id space, then we try to reclaim an id from an idle context.
In the extreme case where all context ids are pinned by active contexts,
we force the system to idle in order to recover ids.
We cannot reduce the scope of an HW-ID to an engine (allowing the same
gem_context to have different ids on each engine) as in the future we
will need to preassign an id before we know which engine the
context is being executed on.
v2: Improved commentary (Tvrtko) [I tried at least]
References: https://bugs.freedesktop.org/show_bug.cgi?id=107788
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Mika Kuoppala <mika.kuoppala@intel.com>
Cc: Michel Thierry <michel.thierry@intel.com>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Cc: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180904153117.3907-1-chris@chris-wilson.co.uk
Older gen use a physical address for the hardware status page, for which
we use cache-coherent writes. As the writes are into the cpu cache, we use
a normal WB mapped page to read the HWS, used for our seqno tracking.
Anecdotally, I observed lost breadcrumb writes into the HWS on i965gm,
which so far have not reoccurred with this patch. How reliable that
evidence is remains to be seen.
v2: Explicitly pass the expected physical address to the hw
v3: Also remember the wild writes we once had for HWS above 4G.
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Daniel Vetter <daniel@ffwll.ch>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Reviewed-by: Daniel Vetter <daniel@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180903152304.31589-2-chris@chris-wilson.co.uk
We need to clear the register in order to get a correct value after the
next potential hang.
v2: Centralize error register clearing in i915_irq.c (Chris)
v3: Don't read gen8 register on < gen6 (Chris)
v4: Don't swap gen8+ & gen6+ code... (Chris)
Signed-off-by: Lionel Landwerlin <lionel.g.landwerlin@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180830132424.21940-1-lionel.g.landwerlin@intel.com
Instead of defining all registers twice, define just a PCH_GPIO_BASE
that has the same address as PCH_GPIO_A and use that to calculate all
the others. This also brings VLV and !HAS_GMCH_DISPLAY in line, doing
the same thing.
v2: Fix GMBUS registers to be relative to gpio base; create GPIO()
macro to return a particular gpio address and move the enum out of
i915_reg.h (suggested by Jani)
v3: Move base offset inside the GPIO() macro so the GMBUS defines don't
actually need to be changed (suggested by Daniel/Ville)
v4: Move definition of i915_gpio to intel_display.h and remove
GMBUS/GPIO handling from gvt since now they have their own
defines.
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180727193647.8639-3-lucas.demarchi@intel.com
The device global init_power_on flag is somewhat arbitrary and makes
debugging power refcounting problems difficult. Instead arrange things
so that every display power domain get has a corresponding put call. After
this change we have the following sequences:
driver loading:
intel_power_domains_init_hw();
<other init steps>
intel_power_domains_enable();
driver unloading:
intel_power_domains_disable();
<other uninit steps>
intel_power_domains_fini_hw();
system suspend:
intel_power_domains_disable();
<other suspend steps>
intel_power_domains_suspend();
system resume:
intel_power_domains_resume();
<other resume steps>
intel_power_domains_enable();
at other times while the driver is loaded:
intel_display_power_get();
...
intel_display_power_put();
Suggested-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180816123757.3286-2-imre.deak@intel.com
This will make it easier to test PSR1 on PSR2 capable eDP machines.
Changes since v1:
- Remove I915_PSR_DEBUG_FORCE_PSR2, it did nothing, not sure forcing
PSR2 would even work.
- Handle NULL crtc in intel_psr_set_debugfs_mode. (dhnkrn)
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180808141911.7647-2-maarten.lankhorst@linux.intel.com
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Currently tests modify i915.enable_psr and then do a modeset cycle
to change PSR. We can write a value to i915_edp_psr_debug to force
a certain PSR mode without a modeset.
To retain compatibility with older userspace, we also still allow
the override through the module parameter, and add some tracking
to check whether a debugfs mode is specified.
Changes since v1:
- Rename dev_priv->psr.enabled to .dp, and .hw_configured to .enabled.
- Fix i915_psr_debugfs_mode to match the writes to debugfs.
- Rename __i915_edp_psr_write to intel_psr_set_debugfs_mode, simplify
it and move it to intel_psr.c. This keeps all internals in intel_psr.c
- Perform an interruptible wait for hw completion outside of the psr
lock, instead of being forced to trywait and return -EBUSY.
Changes since v2:
- Rebase on top of intel_psr changes.
Changes since v3:
- Assign psr.dp during init. (dhnkrn)
- Add prepared bool, which should be used instead of relying on psr.dp. (dhnkrn)
- Fix -EDEADLK handling in debugfs. (dhnkrn)
- Clean up waiting for idle in intel_psr_set_debugfs_mode.
- Print PSR mode when trying to enable PSR. (dhnkrn)
- Move changing psr debug setting to i915_edp_psr_debug_set. (dhnkrn)
Changes since v4:
- Return error in _set() function.
- Change flag values to make them easier to remember. (dhnkrn)
- Only assign psr.dp once. (dhnkrn)
- Only set crtc_state->has_psr on the crtc with psr.dp.
- Fix typo. (dhnkrn)
Changes since v5:
- Only wait for PSR idle on the PSR connector correctly. (dhnkrn)
- Reinstate WARN_ON(drrs.dp) in intel_psr_enable. (dhnkrn)
- Remove stray comment. (dhnkrn)
- Be silent in intel_psr_compute_config on wrong connector. (dhnkrn)
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Cc: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180809142101.26155-1-maarten.lankhorst@linux.intel.com
Reviewed-by: Dhinakaran Pandiyan <dhinakaran.pandiyan@intel.com>
Similarly to the previous patch use a separate request/status HW flag
index defined right after the corresponding control registers instead of
depending for this on the power well IDs. Since the set of
control/status registers varies among the different power wells (on a
single platform), also add a new i915_power_well_registers struct that
we populate and assign to each DDI power well as needed.
Also clarify a bit the code comment describing the function and layout
of the control registers.
This also fixes a problem on ICL, where we incorrectly read the KVMR
control register in hsw_power_well_requesters() even for DDI and AUX
power wells.
v2:
- Clarify platform range tags in code comments. (Paulo)
- Fix line over 80 chars checkpatch warning.
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180806095843.13294-7-imre.deak@intel.com
Atm, we determine the control/status flag offsets within the PUNIT
control/status registers based on the power well's ID. Since the power
well ID enum is global across all platforms, the associated macros to
get the flag offsets involves some magic. This makes checking the
register/bit definitions against the specification more difficult than
necessary. Also the values in the power well ID enum must stay fixed,
making code maintenance of the enum cumbersome.
To solve the above define the control/status flag indices right after
the corresponding registers and use these to derive the control/status
flag values by storing the indices in the i915_power_well_desc struct.
Initializing anonymous union fields requires the preceding field in the
struct to be explicitly initialized - even when using named
initializers - and the initialization to be done right before the union
initialization, hence the reordering of the .id fields.
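Deriving the flag values from the index is then a matter of a couple of
helper macros along these lines (sketch; exact names and bit layout follow
the register definitions):

	#define HSW_PWR_WELL_CTL_REQ(pw_idx)	BIT(((pw_idx) * 2) + 1)
	#define HSW_PWR_WELL_CTL_STATE(pw_idx)	BIT((pw_idx) * 2)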
v2:
- Clarify commit log message about anonymous union initializers. (Paulo)
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180806095843.13294-6-imre.deak@intel.com
It makes sense to keep unchanging data const. Extract such fields from
the i915_power_well struct into a new i915_power_well_desc struct that
we initialize at compile time. For the remaining dynamic fields,
allocate an array of i915_power_well objects in i915 dev_priv and link
each of these objects to its corresponding i915_power_well_desc object.
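A rough sketch of the const/runtime split described above (field names
are approximate):

	/* compile-time constant description */
	struct i915_power_well_desc {
		const char *name;
		u64 domains;
		const struct i915_power_well_ops *ops;
		enum i915_power_well_id id;
		/* platform-specific register/index data follows */
	};

	/* runtime state, allocated per device */
	struct i915_power_well {
		const struct i915_power_well_desc *desc;
		int count;		/* reference count */
		bool hw_enabled;	/* cached hardware state */
	};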
v2:
- Fix checkpatch warnings about missing param name in fn declaration and
lines over 80 chars. (Paulo)
- Move check for unique IDs to __set_power_wells().
Cc: Ville Syrjala <ville.syrjala@linux.intel.com>
Cc: Paulo Zanoni <paulo.r.zanoni@intel.com>
Cc: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
[Fixed checkpatch warn in __set_power_wells()]
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180806095843.13294-5-imre.deak@intel.com
RPS provides a feedback loop where we use the load during the previous
evaluation interval to decide whether to up or down clock the GPU
frequency. Our responsiveness is split into 3 regimes, a high and low
plateau with the intent to keep the gpu clocked high to cover occasional
stalls under high load, and low despite occasional glitches under steady
low load, and in between. However, we run into situations like Kodi where
we want to stay at low power (video decoding is done efficiently
inside the fixed function HW and doesn't need high clocks even for high
bitrate streams), but just occasionally the pipeline is more complex
than a video decode and we need a smidgen of extra GPU power to present
on time. In the high power regime, we sample at sub frame intervals with
a bias to upclocking, and conversely at low power we sample over a few
frames worth to provide what we consider to be the right levels of
responsiveness respectively. At low power, we more or less expect to be
kicked out to high power at the start of a busy sequence by waitboosting.
Prior to commit e9af4ea2b9 ("drm/i915: Avoid waitboosting on the active
request") whenever we missed the frame or stalled, we would immediately go
full throttle and upclock the GPU to max. But in commit e9af4ea2b9, we
relaxed the waitboosting to only apply if the pipeline was deep to avoid
over-committing resources for a near miss. Sadly though, a near miss is
still a miss, and perceptible as jitter in the frame delivery.
To try and prevent the near miss before having to resort to boosting
after the fact, we use the pageflip queue as an indication that we are
in an "interactive" regime and so should sample the load more frequently
to provide power before the frame misses its vblank. This makes us
more inclined to provide a small power increase (one or two bins) as
required rather than going all the way to maximum and then having to
work back down again. (We still keep the waitboosting mechanism around
just in case a dramatic change in system load requires urgent upclocking,
faster than we can provide in a few evaluation intervals.)
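A hedged sketch of the interactive-mode tracking this introduces; the
function and struct member names approximate the real code and should
not be taken as authoritative:

	void intel_rps_mark_interactive(struct intel_rps *rps, bool interactive)
	{
		mutex_lock(&rps->power.mutex);
		if (interactive) {
			/* First interactive client: bias sampling towards high power. */
			if (!rps->power.interactive++ && READ_ONCE(rps->gt_awake))
				rps_set_power(rps, HIGH_POWER);
		} else {
			GEM_BUG_ON(!rps->power.interactive);
			rps->power.interactive--;
		}
		mutex_unlock(&rps->power.mutex);
	}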
v2: Reduce rps_set_interactive to a boolean parameter to avoid the
confusion of what if they wanted a new power mode after pinning to a
different mode (which to choose?)
v3: Only reprogram RPS while the GT is awake, it will be set when we
wake the GT, and while off warns about being used outside of rpm.
v4: Fix deferred application of interactive mode
v5: s/state/interactive/
v6: Group the mutex with its principal in a substruct
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107111
Fixes: e9af4ea2b9 ("drm/i915: Avoid waitboosting on the active request")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Radoslaw Szwichtenberg <radoslaw.szwichtenberg@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180731132629.3381-1-chris@chris-wilson.co.uk
(cherry picked from commit 60548c554b)
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
After disabling the resource streamer on ICL (due to it actually not
existing there), I got feedback that there had been some experimental
patches for mesa to use RS years ago, but nothing ever landed or shipped
because there was no performance improvement.
This removes it from the kernel, keeping the uapi defines around for
compatibility.
v2: - re-add the inadvertently removed CTX_CTRL_INHIBIT_SYN_CTX_SWITCH
- don't bother trying to document removed params on uapi header:
applications should know that from the query.
(from Chris)
v3: - disable CTX_CTRL_RS_CTX_ENABLE instead of removing it
- reword commit message after Daniele confirmed no performance
regression on his machine
- reword commit message to make clear RS is being removed because it
was never used
v4: - move I915_EXEC_RESOURCE_STREAMER to __I915_EXEC_ILLEGAL_FLAGS so
the check on ioctl() is made much earlier by
i915_gem_check_execbuffer() (suggested by Tvrtko)
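A hedged sketch of the early rejection mentioned in v4; the mask name is
taken from the message above, the surrounding check is approximated:

	#define __I915_EXEC_ILLEGAL_FLAGS \
		(__I915_EXEC_UNKNOWN_FLAGS | \
		 I915_EXEC_CONSTANTS_MASK | \
		 I915_EXEC_RESOURCE_STREAMER)

	static bool i915_gem_check_execbuffer(struct drm_i915_gem_execbuffer2 *exec)
	{
		if (exec->flags & __I915_EXEC_ILLEGAL_FLAGS)
			return false;	/* rejected before any setup work */

		/* ... remaining validation elided ... */
		return true;
	}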
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
Acked-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Reviewed-by: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20180803232443.17193-1-lucas.demarchi@intel.com
RPS provides a feedback loop where we use the load during the previous
evaluation interval to decide whether to up or down clock the GPU
frequency. Our responsiveness is split into 3 regimes, a high and low
plateau with the intent to keep the gpu clocked high to cover occasional
stalls under high load, and low despite occasional glitches under steady
low load, and in between. However, we run into situations like Kodi where
we want to stay at low power (video decoding is done efficiently
inside the fixed function HW and doesn't need high clocks even for high
bitrate streams), but just occasionally the pipeline is more complex
than a video decode and we need a smidgen of extra GPU power to present
on time. In the high power regime, we sample at sub frame intervals with
a bias to upclocking, and conversely at low power we sample over a few
frames worth to provide what we consider to be the right levels of
responsiveness respectively. At low power, we more or less expect to be
kicked out to high power at the start of a busy sequence by waitboosting.
Prior to commit e9af4ea2b9 ("drm/i915: Avoid waitboosting on the active
request") whenever we missed the frame or stalled, we would immediately go
full throttle and upclock the GPU to max. But in commit e9af4ea2b9, we
relaxed the waitboosting to only apply if the pipeline was deep to avoid
over-committing resources for a near miss. Sadly though, a near miss is
still a miss, and perceptible as jitter in the frame delivery.
To try and prevent the near miss before having to resort to boosting
after the fact, we use the pageflip queue as an indication that we are
in an "interactive" regime and so should sample the load more frequently
to provide power before the frame misses its vblank. This makes us
more inclined to provide a small power increase (one or two bins) as
required rather than going all the way to maximum and then having to
work back down again. (We still keep the waitboosting mechanism around
just in case a dramatic change in system load requires urgent upclocking,
faster than we can provide in a few evaluation intervals.)
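To complement the intel_rps_mark_interactive() sketch shown after the
cherry-picked copy of this message above, here is a hedged illustration
of the caller side, bracketing the pageflip preparation path with the
interactive hint; the function names are placeholders, not the exact
hooks used by the driver:

	/* Hypothetical bracketing of framebuffer preparation for a flip. */
	static int example_prepare_plane_fb(struct intel_rps *rps)
	{
		intel_rps_mark_interactive(rps, true);	/* flips pending */
		/* ... pin the framebuffer, wait for fences ... */
		return 0;
	}

	static void example_cleanup_plane_fb(struct intel_rps *rps)
	{
		/* ... unpin ... */
		intel_rps_mark_interactive(rps, false);	/* frame presented */
	}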
v2: Reduce rps_set_interactive to a boolean parameter to avoid the
confusion of what if they wanted a new power mode after pinning to a
different mode (which to choose?)
v3: Only reprogram RPS while the GT is awake, it will be set when we
wake the GT, and while off warns about being used outside of rpm.
v4: Fix deferred application of interactive mode
v5: s/state/interactive/
v6: Group the mutex with its principal in a substruct
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=107111
Fixes: e9af4ea2b9 ("drm/i915: Avoid waitboosting on the active request")
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Cc: Tvrtko Ursulin <tvrtko.ursulin@intel.com>
Cc: Radoslaw Szwichtenberg <radoslaw.szwichtenberg@intel.com>
Cc: Ville Syrjälä <ville.syrjala@linux.intel.com>
Reviewed-by: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180731132629.3381-1-chris@chris-wilson.co.uk
We broke the LVDS notifier resume thing in (presumably) commit
e2c8b8701e ("drm/i915: Use atomic helpers for suspend, v2.") as
we no longer duplicate the current state in the LVDS notifier and
thus we never resume it properly either.
Instead of trying to fix it again let's just kill off the lid
notifier entirely. None of the machines tested thus far have
apparently needed it. Originally the lid notifier was added to
work around cases where the VBIOS was clobbering some of the
hardware state behind the driver's back, mostly on Thinkpads.
We now have a few reports of Thinkpads working just fine without
the notifier. So maybe it was misdiagnosed originally, or
something else has changed (ACPI video stuff perhaps?).
If we do end up finding a machine where the VBIOS is still causing
problems I would suggest that we first try setting various bits in
the VBIOS scratch registers. There are several to choose from that
may instruct the VBIOS to steer clear.
With the notifier gone we'll also stop looking at the panel status
in ->detect().
v2: Nuke enum modeset_restore (Rodrigo)
Cc: stable@vger.kernel.org
Cc: Wolfgang Draxinger <wdraxinger.maillist@draxit.de>
Cc: Vito Caputo <vcaputo@pengaru.com>
Cc: kitsunyan <kitsunyan@airmail.cc>
Cc: Joonas Saarinen <jza@saunalahti.fi>
Tested-by: Vito Caputo <vcaputo@pengaru.com> # Thinkpad X61s
Tested-by: kitsunyan <kitsunyan@airmail.cc> # ThinkPad X200
Tested-by: Joonas Saarinen <jza@saunalahti.fi> # Fujitsu Siemens U9210
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105902
References: https://lists.freedesktop.org/archives/intel-gfx/2018-June/169315.html
References: https://bugs.freedesktop.org/show_bug.cgi?id=21230
Fixes: e2c8b8701e ("drm/i915: Use atomic helpers for suspend, v2.")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180717174216.22252-1-ville.syrjala@linux.intel.com
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Merge tag 'drm-intel-next-2018-07-09' of git://anongit.freedesktop.org/drm/drm-intel into drm-next
Highlights here go to the many PSR fixes and improvements; to the Ice Lake work with
power well support and the beginning of DSI support. Also there were many improvements
on execlists and interrupts for minimal latency on command submission; and many fixes
on selftests, mostly caught by our CI.
General driver:
- Clean-up on aux irq (Lucas)
- Mark expected switch fall-through for dealing with static analysis tools (Gustavo)
Gem:
- Different fixes for GuC (Chris, Anusha, Michal)
- Avoid self-relocation BIAS if no relocation (Chris)
- Improve debugging in cases of EINVAL return and vma allocation (Chris)
- Fixes and improvements on context destroying and freeing (Chris)
- Wait for engines to idle before retiring (Chris)
- Many improvements on execlists and interrupts for minimal latency on command submission (Chris)
- Many fixes in selftests, especially on cases highlighted by CI (Chris)
- Other fixes and improvements around GGTT (Chris)
- Prevent background reaping of active objects (Chris)
Display:
- Parallel modeset cleanup to fix driver reset (Chris)
- Get AUX power domain for DP main link (Imre)
- Clean-up on PSR unused func pointers (Rodrigo)
- Many PSR/PSR2 fixes and improvements (DK, Jose, Tarun)
- Add a PSR1 live status (Vathsala)
- Replace old drm_*_{un/reference} with put/get functions (Thomas)
- FBC fixes (Maarten)
- Abstract and document the usage of picking macros (Jani)
- Remove unnecessary check for unsupported modifiers for NV12. (DK)
- Interrupt fixes for display (Ville)
- Clean up on sdvo code (Ville)
- Clean up on current DSI code (Jani)
- Remove support for legacy debugfs crc interface (Maarten)
- Simplify get_encoder_power_domains (Imre)
Icelake:
- MG PLL fixes (Imre)
- Add hw workaround for alpha blending (Vandita)
- Add power well support (Imre)
- Add Interrupt Support (Anusha)
- Start to add support for DSI on Ice Lake (Madhav)
Signed-off-by: Dave Airlie <airlied@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180710234349.GA16562@intel.com
We're doing a pointless translation from hpd_pin to port simply for
passing the thing to long_pulse_detect(). Let's pass the hpd_pin
directly instead.
This removes the assumption that the hpd_pin and port always
match. The only other place where we make that assumption anymore
is intel_hpd_pin_default() and that's fine as it's what determines
the relationship between the two. If we ever get hardware where
the hpd pins are wired in more interesting ways it should be
trivial to handle from now on.
This should also fix the IS_CNL_WITH_PORT_F() case as that mapped
pin E back to port F and passed that to
spt_port_hotplug2_long_detect() which would always return false
for port F. Now that we pass in pin E directly it'll actually
do the right thing.
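A hedged sketch of what the change looks like at the detection helper:
keying on the hpd pin rather than the port (the bit name is an
approximation):

	static bool spt_port_hotplug2_long_detect(enum hpd_pin pin, u32 val)
	{
		switch (pin) {
		case HPD_PORT_E:
			return val & PORTE_HOTPLUG_LONG_DETECT;
		default:
			return false;
		}
	}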
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Fixes: cf53902f48 ("drm/i915/cnl: Add HPD support for Port F.")
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180705164357.28512-7-ville.syrjala@linux.intel.com
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Instead of looping over ports and hpd_pins, let's loop over
the encoders when doing hotplug processing. And instead of
depending on dev_priv->irq_port[] to tell us whether the
encoder has the ->hpd_pulse() hook or not, we can just
check for that directly. So we can just nuke irq_port[] entirely.
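A hedged sketch of the encoder-based processing; the helper names and
the digital-port check are approximations of the real code:

	static void example_hpd_pulse_encoders(struct drm_i915_private *dev_priv,
					       bool long_hpd)
	{
		struct intel_encoder *encoder;

		for_each_intel_encoder(&dev_priv->drm, encoder) {
			struct intel_digital_port *dig_port;

			if (encoder->hpd_pin == HPD_NONE)
				continue;

			/* Assumes encoders with an hpd_pin here are digital ports. */
			dig_port = enc_to_dig_port(&encoder->base);
			if (dig_port->hpd_pulse)
				dig_port->hpd_pulse(dig_port, long_hpd);
		}
	}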
Signed-off-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180705164357.28512-5-ville.syrjala@linux.intel.com
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
On GLK NUC platforms the HDMI retiming buffer needs additional disabled
time to correctly sync to a faster incoming signal.
When measured on a scope, the high-speed lines of the HDMI clock turn off
for ~400 us during a normal resolution change. The HDMI retimer on the
GLK NUC appears to require at least a full frame of quiet time before a
new faster clock can be correctly sync'd. Wait 100ms due to msleep
inaccuracies while waiting for a completed frame. Add a quirk to the
driver for GLK boards that use ITE66317 HDMI retimers.
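A hedged sketch of how such a quirk-gated wait might sit in the DDI
disable path; the quirk flag and helper names are assumptions:

	if ((dev_priv->quirks & QUIRK_INCREASE_DDI_DISABLED_TIME) &&
	    intel_crtc_has_type(old_crtc_state, INTEL_OUTPUT_HDMI)) {
		DRM_DEBUG_KMS("Quirk: waiting for the HDMI retimer to settle\n");
		msleep(100);	/* at least a full frame of link quiet time */
	}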
V2: Add more devices to the quirk list
V3: Delay increased to 100ms, check to confirm crtc type is HDMI.
V4: crtc type check extended to include _DDI and whitespace fixes
v5: Fix whitespace, remove the delay macro. Revert the crtc type
check introduced in v4.
Cc: Imre Deak <imre.deak@intel.com>
Cc: <stable@vger.kernel.org> # v4.14+
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=105887
Signed-off-by: Clint Taylor <clinton.a.taylor@intel.com>
Tested-by: Daniel Scheller <d.scheller.oss@gmail.com>
Signed-off-by: Radhakrishna Sripada <radhakrishna.sripada@intel.com>
Signed-off-by: Imre Deak <imre.deak@intel.com>
Reviewed-by: Imre Deak <imre.deak@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20180710200205.1478-1-radhakrishna.sripada@intel.com
Support for burst read in HW is added for the HDCP2.2 compliance
requirement.
This patch enables burst read for all GMBUS reads of more than
511 bytes, on capable platforms.
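A hedged sketch of the length-based selection; the macro names are
assumptions, and the 511/767 byte limits come from this message:

	static bool gmbus_use_burst_read(struct drm_i915_private *dev_priv,
					 unsigned int len)
	{
		/* Burst read only pays off for long transfers (> 511 bytes). */
		return HAS_GMBUS_BURST_READ(dev_priv) && len > 511;
	}

	/* Each burst cycle is then capped at 767 bytes. */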
v2:
Extra line is removed.
v3:
Macro is added for detecting BURST_READ support [Jani]
Runtime detection of the need for burst read [Jani]
Calculation enhancement.
v4:
GMBUS0 reg val is passed from the caller [Ville]
Removed an extra var [Ville]
Extra brackets are removed [Ville]
Implemented the handling of a 512-byte burst read.
v5:
Burst read max length is fixed at 767 bytes [Ville]
v6:
Collect the received Reviewed-by tags.
Signed-off-by: Ramalingam C <ramalingam.c@intel.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/1530192889-5789-3-git-send-email-ramalingam.c@intel.com
Add a mutex into struct i915_address_space to be used while operating on
the vma and their lists for a particular vm. As this may be called from
the shrinker, we taint the mutex with fs_reclaim so that from the start
lockdep warns us if we are caught holding the mutex across an
allocation. (With such small steps we will eventually rid ourselves of
struct_mutex recursion!)
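A hedged sketch of the lockdep taint described above (the i915 wrapper
name may differ; the fs_reclaim pattern itself is standard kernel API):

	static void taint_mutex_with_fs_reclaim(struct mutex *mutex)
	{
		if (!IS_ENABLED(CONFIG_LOCKDEP))
			return;

		/*
		 * Teach lockdep that this mutex can be taken under fs_reclaim
		 * (i.e. from the shrinker), so it warns if we ever allocate
		 * memory while holding it.
		 */
		fs_reclaim_acquire(GFP_KERNEL);
		mutex_lock(mutex);
		mutex_unlock(mutex);
		fs_reclaim_release(GFP_KERNEL);
	}

	/* e.g. called right after mutex_init(&vm->mutex) */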
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180711073608.20286-2-chris@chris-wilson.co.uk