Although most of our multicast registers are replicated per-subslice, we
also have a small number of multicast registers that are replicated
per-L3 bank instead. For both types of multicast register we need to
make sure that reads are steered to a valid instance.
Ideally we'd like to find a specific instance ID that would steer reads
of either type of multicast register to a valid instance (i.e., not
fused off and not powered down), but sometimes the combination of
part-specific fusing and the additional restrictions imposed by Render
Power Gating make it impossible to find any overlap between the set of
valid subslices and valid L3 banks. This problem will become even more
noticeable on our upcoming platforms since they will be adding
additional types of multicast registers with new types of replication
and rules for finding valid instances for reads.
To handle this we'll continue to pick a suitable subslice instance at
driver startup and program this as the default (sliceid,subsliceid)
setting in the steering control register (0xFDC). In cases where we
need to read another type of multicast GT register, but the default
subslice steering would not correspond to a valid instance, we'll
explicitly re-steer the single read to a valid value, perform the read,
and then reset the steering to its "subslice" default.
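A minimal sketch of that re-steer/read/restore flow, assuming a helper
along these lines (the helper name and steering encoding are
illustrative, not the exact code being added):

    /* Illustrative: steer one read of a multicast register to a
     * known-valid instance, then restore the default subslice
     * steering in the 0xFDC steering control register. The caller is
     * assumed to hold forcewake, matching the _fw accessors. */
    static u32 read_with_explicit_steering(struct intel_uncore *uncore,
                                           i915_reg_t reg, u32 steering)
    {
            u32 old, val;

            old = intel_uncore_read_fw(uncore, GEN8_MCR_SELECTOR);
            intel_uncore_write_fw(uncore, GEN8_MCR_SELECTOR, steering);
            val = intel_uncore_read_fw(uncore, reg);
            intel_uncore_write_fw(uncore, GEN8_MCR_SELECTOR, old);

            return val;
    }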
This patch adds the general functionality to prepare for this explicit
steering of other multicast register types. We'll plug L3 bank steering
into this in the next patch, and then add additional types of multicast
registers when the support for our next upcoming platform arrives.
v2:
- Use entry->end==0 as table terminator. (Rodrigo)
- Grab forcewake in wa_list_verify() now that we're using accessors
that assume forcewake is already held.
v3:
- Fix loop condition when iterating over steering range tables.
(Rodrigo)
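For reference, the entry->end == 0 terminator means a steering range
table is walked roughly like this (struct and function names are
illustrative):

    /* Illustrative walk of a { start, end } range table terminated
     * by an entry with end == 0. */
    struct i915_range {
            u32 start;
            u32 end;
    };

    static bool offset_in_table(const struct i915_range *table, u32 offset)
    {
            const struct i915_range *entry;

            for (entry = table; entry->end != 0; entry++)
                    if (offset >= entry->start && offset <= entry->end)
                            return true;

            return false;
    }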
Cc: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Matt Roper <matthew.d.roper@intel.com>
Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210617211425.1943662-3-matthew.d.roper@intel.com
If we pipeline the PTE updates and then do the copy of those pages
within a single unpreemptible command packet, we can submit the copies
and leave them to be scheduled without having to synchronously wait
under a global lock. In order to manage migration, we need to
preallocate the page tables (and keep them pinned and available for use
at any time), causing a bottleneck for migrations as all clients must
contend on the limited resources. By inlining the ppGTT updates and
performing the blit atomically, each client only owns the PTE while in
use, and so we can reschedule individual operations however we see fit.
And most importantly, we do not need to take a global lock on the shared
vm and wait until the operation is complete before releasing the lock
for others to claim the PTE for themselves.
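Schematically, each copy then looks something like the following (a
condensed sketch; emit_pte() and emit_copy() stand in for the new
helpers, and error handling plus chunk looping are omitted):

    /* One chunk: write the PTEs and blit within a single request, so
     * the client owns those PTEs only while the copy is in flight and
     * no global vm lock is held across the wait. */
    rq = i915_request_create(ce);
    len = emit_pte(rq, &it_src, cache_level, is_lmem, offset, CHUNK_SZ);
    if (len > 0)
            err = emit_copy(rq, dst_offset, src_offset, len);
    i915_request_add(rq);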
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Co-developed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210617063018.92802-8-thomas.hellstrom@linux.intel.com
Since the ww transaction endpoint easily ends up far out of scope of
the objects on the ww object list, particularly for contending lock
objects, make sure we reference objects on the list so they don't
disappear under us.
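The shape of the fix, as a simplified sketch of the ww helpers (not
the literal diff):

    /* Hold a reference for as long as the object sits on the ww
     * object list... */
    i915_gem_object_get(obj);
    list_add_tail(&obj->obj_link, &ww->obj_list);

    /* ...and drop it again when the transaction unwinds. */
    list_for_each_entry_safe(obj, next, &ww->obj_list, obj_link) {
            list_del(&obj->obj_link);
            i915_gem_object_unlock(obj);
            i915_gem_object_put(obj);
    }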
This comes with a performance penalty so it's been debated whether this
is really needed. But I think this is motivated by the fact that locking
is typically difficult to get right, and whatever we can do to make it
simpler for developers moving forward should be done, unless the
performance impact is far too high.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210617063018.92802-2-thomas.hellstrom@linux.intel.com
- Use an rwlock instead of a spinlock for the global notifier lock
  to reduce the risk of contention in execbuf (see the sketch after
  this list).
- Protect object state with the object lock whenever possible rather
  than with the global notifier lock.
- Don't take an explicit page_ref in userptr_submit_init(); instead
  call get_pages() after obtaining the page list so that get_pages()
  holds the page_ref. This means we no longer need to call
  userptr_submit_fini(), which is needed to avoid awkward locking in
  our upcoming VM_BIND code.
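A minimal sketch of the rwlock split, assuming the lock stays at
i915->mm.notifier_lock (illustrative, not the literal diff):

    /* execbuf and other readers may now run concurrently... */
    read_lock(&i915->mm.notifier_lock);
    /* ...sample the userptr page state... */
    read_unlock(&i915->mm.notifier_lock);

    /* ...while the MMU notifier takes the exclusive write side. */
    write_lock(&i915->mm.notifier_lock);
    /* ...invalidate affected objects... */
    write_unlock(&i915->mm.notifier_lock);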
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Signed-off-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210610143525.624677-1-thomas.hellstrom@linux.intel.com
The most logical place to introduce TTM buffer objects is as an i915
gem object backend. We need to add some ops to account for the added
functionality, such as delayed delete and LRU list manipulation.
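Schematically, the backend slots in as a gem object ops table carrying
the new hooks; a sketch of the shape (hook names are assumptions based
on the functionality described):

    /* TTM-backed objects get their own ops so the gem core can call
     * into TTM for the added functionality (sketch only). */
    static const struct drm_i915_gem_object_ops i915_gem_ttm_obj_ops = {
            .name = "i915_gem_object_ttm",
            .get_pages = i915_ttm_get_pages,
            .put_pages = i915_ttm_put_pages,
            .adjust_lru = i915_ttm_adjust_lru,     /* LRU manipulation */
            .delayed_free = i915_ttm_delayed_free, /* delayed delete */
    };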
Initially we support only LMEM and SYSTEM memory, but SYSTEM
(which in this case means evicted LMEM objects) is not
visible to i915 GEM yet. The plan is to move the i915 gem system region
over to the TTM system memory type in upcoming patches.
We set up GPU bindings directly both from LMEM and from the system region,
as there is no need to use the legacy TTM_TT memory type. We reserve
that for future porting of GGTT bindings to TTM.
Remove the old lmem backend.
Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Signed-off-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210610070152.572423-2-thomas.hellstrom@linux.intel.com
Two cleanups:
- These patches make the Exynos DRM driver use the
  pm_runtime_resume_and_get() function instead of pm_runtime_get_sync()
  to deal with the usage counter. pm_runtime_get_sync() increases the
  usage counter even when it fails, which can lead callers to forget to
  decrease the usage counter. pm_runtime_resume_and_get() decreases the
  usage counter again when it fails, so the reference cannot be leaked.
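A minimal before/after sketch of the pattern (generic driver code, not
a specific Exynos call site):

    /* Before: on failure the usage counter stays elevated, so every
     * error path must remember to drop it. */
    ret = pm_runtime_get_sync(dev);
    if (ret < 0) {
            pm_runtime_put_noidle(dev); /* easy to forget */
            return ret;
    }

    /* After: the helper drops the counter itself on failure. */
    ret = pm_runtime_resume_and_get(dev);
    if (ret < 0)
            return ret;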
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Inki Dae <inki.dae@samsung.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20210611025939.393282-1-inki.dae@samsung.com
UAPI Changes:
- Disable mmap ioctl for gen12+ (excl. TGL-LP)
- Start enabling HuC loading by default for upcoming Gen12+
platforms (excludes TGL and RKL)
Core Changes:
- Backmerge of drm-next
Driver Changes:
- Revert "i915: use io_mapping_map_user" (Eero, Matt A)
- Initialize the TTM device and memory managers (Thomas)
- Major rework to the GuC submission backend to prepare
for enabling on new platforms (Michal Wa., Daniele,
Matt B, Rodrigo)
- Fix i915_sg_page_sizes to record dma segments rather
than physical pages (Thomas)
- Locking rework to prep for TTM conversion (Thomas)
- Replace IS_GEN and friends with GRAPHICS_VER (Lucas)
- Use DEVICE_ATTR_RO macro (Yue)
- Static code checker fixes (Zhihao)
Signed-off-by: Dave Airlie <airlied@redhat.com>
From: Joonas Lahtinen <joonas.lahtinen@linux.intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/YMHeDxg9VLiFtyn3@jlahtine-mobl.ger.corp.intel.com
Use pm_runtime_resume_and_get() instead of pm_runtime_get_sync()
to deal with the usage counter. pm_runtime_get_sync() increases the
usage counter even when it fails, which makes it easy for callers to
forget to decrease the usage counter, resulting in a reference leak.
pm_runtime_resume_and_get() decreases the usage counter itself when
it fails, avoiding the reference leak.
Changelog v1:
- Fix a build error reported by Intel's kernel test robot.
Signed-off-by: Inki Dae <inki.dae@samsung.com>
Reported-by: kernel test robot <lkp@intel.com>
Use pm_runtime_resume_and_get() to replace the pm_runtime_get_sync()
and pm_runtime_put_noidle() pair, so the refcount is no longer left
elevated when pm_runtime_get_sync() fails.
Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Reviewed-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Inki Dae <inki.dae@samsung.com>
This is the 3D GPU found on the i.MX8MP SoC. The feature bits are
taken from the NXP downstream kernel driver 6.4.3.p1.305572.
Signed-off-by: Sascha Hauer <s.hauer@pengutronix.de>
Signed-off-by: Lucas Stach <l.stach@pengutronix.de>
Reviewed-by: Christian Gmeiner <christian.gmeiner@gmail.com>
Drop disabling of gfxoff during VCN use. This allows gfxoff
to kick in and potentially save power if the user is not using
gfx for color space conversion or scaling.
VCN1.0 had a bug which prevented it from working properly with
gfxoff, so we disabled gfxoff while VCN was in use. The bug was fixed
on VCN2+, but most apps used gfx for scaling and color space
conversion rather than overlay planes, so gfx was generally in use
anyway, and rapidly powering gfx up and down can negate the
advantages of gfxoff; we therefore left gfxoff disabled. As more
applications use overlay planes for color space conversion and
scaling, leaving gfxoff enabled starts to be a win, so go ahead and
enable it.
Note that VCN1.0 uses vcn_v1_0_idle_work_handler() and
vcn_v1_0_ring_begin_use() so they are not affected by this
patch.
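As a condensed sketch of the resulting begin_use path in the shared
VCN helper (simplified; power-up and DPG handling unchanged):

    void amdgpu_vcn_ring_begin_use(struct amdgpu_ring *ring)
    {
            struct amdgpu_device *adev = ring->adev;

            atomic_inc(&adev->vcn.total_submission_cnt);
            cancel_delayed_work_sync(&adev->vcn.idle_work);

            /* The previous amdgpu_gfx_off_ctrl(adev, false) call is
             * gone, so gfxoff remains eligible while VCN2+ is in use;
             * power up VCN and set the DPG state as before. */
    }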
Reviewed-by: James Zhu <James.Zhu@amd.com>
Acked-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Boyuan Zhang <Boyuan.Zhang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>