Double check the end of the privilege buffer to make sure its size
remains unchanged after the copy.
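As a rough illustration (not the exact gvt code; the shadow-buffer
pointer and end-offset bookkeeping here are hypothetical), the
lightweight audit boils down to re-reading the dword at the recorded
end offset after the copy:

#define MI_BATCH_BUFFER_END	(0x0A << 23)	/* MI opcode 0x0A in bits 31:23 */

static int audit_bb_end(const u32 *shadow_va, unsigned long bb_end_cmd_offset)
{
	/* The guest may rewrite its buffer while we copy; make sure the
	 * instruction at the recorded end offset is still a batch buffer end.
	 */
	if (shadow_va[bb_end_cmd_offset / sizeof(u32)] != MI_BATCH_BUFFER_END)
		return -EINVAL;

	return 0;
}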
v4:
- Refine the commit message. (Zhenyu)
v3:
- To get the right offset of the batch buffer end cmd. (Yan)
v2:
- Use lightweight way to audit batch buffer end. (Yan)
Reviewed-by: Yan Zhao <yan.y.zhao@intel.com>
Signed-off-by: Tina Zhang <tina.zhang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add a valid length check for commands with variable length.
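A minimal sketch of such a check, assuming an illustrative command
table that records the expected valid_len per opcode (names are not
the exact gvt ones):

static int check_valid_cmd_length(int len, int valid_len)
{
	/* a variable-length command decoded to an unexpected size */
	if (valid_len && len != valid_len)
		return -EFAULT;

	return 0;
}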
v2: remove the macro definition. (Zhenyu)
v3: refine the LRI command. (Zhenyu)
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Gao, Fred <fred.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Add a utility for valid command length checks.
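A hedged sketch of what such a table utility can look like (flag and
field names are illustrative; F_VAL_CONST is per the v2 note below):

#define F_LEN_MASK	(1U << 0)
#define F_LEN_CONST	1U
#define F_LEN_VAR	0U
/* value is const even though LEN may be variable */
#define F_VAL_CONST	(1U << 1)

struct cmd_info {
	const char *name;
	u32 opcode;
	u16 flag;	/* F_LEN_*, F_VAL_CONST, ... */
	u16 valid_len;	/* expected length for F_LEN_VAR commands, in dwords */
};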
v2: Add F_VAL_CONST flag to identify that the value is const
although LEN may be variable. (Zhenyu)
v3: unused code removal, flag rename/conflict. (Zhenyu)
v4: redefine F_IP_ADVANCE_CUSTOM and move the check function to
next patch. (Zhenyu)
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Gao, Fred <fred.gao@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Factor out the TLB and MOCS register offset tables to fix the issues
reported by klocwork, #512 and #550. Klocwork mostly reports these
problems because a platform with more rings than the ring offset table
covers could take dirty data from the stack as the register offset,
which results in a write to a random HW register offset when doing a
context switch between vGPUs. After the refactoring, the ring offset
tables of TLB and MOCS are per platform.
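A sketch of the shape this takes (names and layout illustrative): each
platform provides its own table with an explicit entry count, so a ring
beyond the table is caught instead of reading stack garbage:

struct engine_offset_table {
	const u32 *offsets;	/* per-ring TLB/MOCS register offsets */
	int num;		/* entries this platform actually defines */
};

static u32 get_ring_reg_offset(const struct engine_offset_table *tbl,
			       int ring_id)
{
	if (ring_id < 0 || ring_id >= tbl->num)
		return 0;	/* invalid ring: caller skips the MMIO write */

	return tbl->offsets[ring_id];
}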
v2:
- Enable TLB register switch for GEN8. (Zhenyu)
Reviewed-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Zhi Wang <zhi.a.wang@intel.com>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
We rely on the tasklet to update the GT PM refcount, so we can't disable
it even if we've processed all the requests for the engine because we
might have detected the request completion before the interrupt arrived.
Since on all platforms on which we plan to support guc submission we
don't allow disabling the breadcrumb interrupts, we can further simplify
the park/unpark flow by removing the interrupt pin/unpin. A BUG_ON has
been added to catch changes to this flow that would require us to
restore some kind of pinning.
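A hedged sketch of the resulting park path (the field name is
illustrative; this is not the verbatim guc code):

static void guc_submission_park(struct intel_engine_cs *engine)
{
	/*
	 * No irq unpin here: the tasklet stays enabled because it owns
	 * the GT PM refcount update, and breadcrumb interrupts are never
	 * disabled with guc submission.
	 */
	BUG_ON(!engine->breadcrumbs_irq_armed);
}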
v2: split removal of engine_pin/unpin_breadcrumbs_irq to its own
patch (chris)
Signed-off-by: Daniele Ceraolo Spurio <daniele.ceraolospurio@intel.com>
Cc: Chris Wilson <chris@chris-wilson.co.uk>
Cc: Michal Wajdeczko <michal.wajdeczko@intel.com>
Reviewed-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Link: https://patchwork.freedesktop.org/patch/msgid/20190812233152.2172-1-daniele.ceraolospurio@intel.com
It's large and doesn't need contiguous memory. This fixes
allocation failures in some cases.
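For illustration (the wrappers are hypothetical), the pattern is simply
kvmalloc() paired with kvfree(), which tolerates the vmalloc fallback:

static void *table_alloc(size_t size)
{
	/* falls back to vmalloc when contiguous pages are scarce */
	return kvmalloc(size, GFP_KERNEL);
}

static void table_free(void *buf)
{
	kvfree(buf);	/* not kfree(): the buffer may be vmalloc-backed */
}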
v2: kvfree the memory.
Reviewed-by: Andrey Grodzovsky <andrey.grodzovsky@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The midgard/bifrost GPUs need GPU heap memory which is allocated on
GPU page faults and not pinned in memory. The vendor driver
calls this functionality GROW_ON_GPF.
This implementation assumes that BOs allocated with the
PANFROST_BO_NOEXEC flag are never mmapped or exported. Both of those may
actually work, but I'm unsure if there's some interaction there. It
would cause the whole object to be pinned in memory which would defeat
the point of this.
On faults, we map in 2MB at a time in order to utilize huge pages (if
enabled). Currently, once we've mapped pages in, they are only unmapped
if the BO is freed. Once we add shrinker support, we can unmap pages
with the shrinker.
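A rough sketch of the fault-path math under the assumptions above
(panfrost_heap_bo and map_range() are hypothetical names; SZ_2M is the
standard kernel constant):

#define HEAP_GRANULE	SZ_2M	/* map 2MB at a time to allow huge pages */

static int heap_grow_on_fault(struct panfrost_heap_bo *bo, u64 fault_addr)
{
	u64 start = fault_addr & ~(u64)(HEAP_GRANULE - 1);

	if (start < bo->gpu_va || start + HEAP_GRANULE > bo->gpu_va + bo->size)
		return -EFAULT;	/* fault outside the heap BO */

	/* Mapped pages stay until the BO is freed (later: the shrinker). */
	return map_range(bo, start, HEAP_GRANULE);
}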
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190808222200.13176-9-robh@kernel.org
Executable buffers have an alignment restriction that they can't cross
a 16MB boundary, as the GPU program counter is 24 bits. This restriction is
currently not handled and we just get lucky. As current userspace
assumes all BOs are executable, that has to remain the default. So add a
new PANFROST_BO_NOEXEC flag to allow userspace to indicate which BOs are
not executable.
There is also a restriction that executable buffers cannot start or end
on a 4GB boundary. This is mostly avoided as there is only 4GB of space
currently and the beginning is already blocked out for NULL ptr
detection. Add support to handle this restriction fully regardless of
the current constraints.
For existing userspace, all created BOs remain executable, but the GPU
VA alignment will be increased to the size of the BO. This shouldn't
matter as there is plenty of GPU VA space.
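One hedged way to sketch the alignment rule (illustrative, not the
driver's exact code): bumping the alignment to a power of two covering
the BO size keeps the whole mapping inside one aligned block, so it
cannot straddle a 16MB boundary for any executable BO up to 16MB:

static u64 exec_va_align(u64 size, bool executable)
{
	if (!executable)
		return PAGE_SIZE;

	/* power of two >= size: the mapping fits in one aligned block */
	return roundup_pow_of_two(size);
}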
Cc: Tomeu Vizoso <tomeu.vizoso@collabora.com>
Cc: Boris Brezillon <boris.brezillon@collabora.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Reviewed-by: Steven Price <steven.price@arm.com>
Acked-by: Alyssa Rosenzweig <alyssa.rosenzweig@collabora.com>
Signed-off-by: Rob Herring <robh@kernel.org>
Link: https://patchwork.freedesktop.org/patch/msgid/20190808222200.13176-6-robh@kernel.org
I made the condition of the wait_event_timeout call in
gm12u320_fb_update_work a helper which takes a mutex to make sure
that any writes to fb_update.run or fb_update.fb from other CPU cores
are seen before the check is done.
This is not necessary as the wait_event helpers contain the necessary
barriers for this themselves.
Moreover, it is harmful since by the time the check is done the task
is no longer in the TASK_RUNNING state, and calling mutex_lock while not
in TASK_RUNNING is not allowed, leading to this warning when the kernel
is built with some extra locking checks enabled:
[11947.450011] do not call blocking ops when !TASK_RUNNING; state=2 set at
[<00000000e4306de6>] prepare_to_wait_event+0x61/0x190
This commit fixes this by dropping the helper and simply directly
checking the condition (without unnecessary locking) in the
wait_event_timeout call.
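The fixed call then looks roughly like this (field and constant names
approximate the driver; wait_event_timeout() itself provides the needed
barriers):

ret = wait_event_timeout(gm12u320->fb_update.waitq,
			 !gm12u320->fb_update.run ||
			 gm12u320->fb_update.fb != NULL,
			 IDLE_TIMEOUT);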
Acked-by: Sam Ravnborg <sam@ravnborg.org>
Signed-off-by: Hans de Goede <hdegoede@redhat.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190811143725.5951-1-hdegoede@redhat.com
We were using the last_fence to track the last request that used this
vma that might be interpreted by a fence register and forced ourselves
to wait for this request before modifying any fence register that
overlapped our vma. Due to the requirement that we track any XY_BLT
command, linear or tiled, this in effect meant that we had to track the
vma for its active lifespan anyway, so we can forgo the explicit
last_fence tracking and just use the whole vma->active.
Another solution would be to pipeline the register updates, which would
help resolve some long-running stalls for gen3 (but only gen2 and gen3!)
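Conceptually (a sketch against the i915_active API of that era, not
the full patch), the fence-register update simply waits on the vma's
active tracker instead of a dedicated last_fence:

static int fence_wait_for_users(struct i915_vma *vma)
{
	/* covers every request that might sample the fence register */
	return i915_active_wait(&vma->active);
}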
Signed-off-by: Chris Wilson <chris@chris-wilson.co.uk>
Reviewed-by: Matthew Auld <matthew.auld@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20190812174804.26180-1-chris@chris-wilson.co.uk
The current code likely won't work on production hw when
it ships, so leave it as experimental until it's ready.
Acked-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
PSP has an issue on renoir that causes the VCN fw to fail to load, so
use direct loading for the moment until the issue is addressed.
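A minimal sketch of the workaround (the placement is illustrative;
CHIP_RENOIR and AMDGPU_FW_LOAD_DIRECT are existing amdgpu identifiers):

if (adev->asic_type == CHIP_RENOIR)
	adev->firmware.load_type = AMDGPU_FW_LOAD_DIRECT;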
Acked-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Huang Rui <ray.huang@amd.com>
Reviewed-by: Aaron Liu <aaron.liu@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
1. Add the psp ip block.
2. Use the direct loading type by default, while still allowing the psp
loading type to be configured (see the sketch below).
3. Bypass sos fw loading and the xgmi & ras interfaces.
v2: drop TA loading
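A hedged sketch of point 2 (not the exact upstream code; the mapping
assumes the documented amdgpu.fw_load_type encoding, where 2 selects
PSP and -1 means auto):

static enum amdgpu_firmware_load_type renoir_fw_load_type(void)
{
	if (amdgpu_fw_load_type == 2)
		return AMDGPU_FW_LOAD_PSP;	/* explicit override */

	return AMDGPU_FW_LOAD_DIRECT;		/* default for now */
}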
Acked-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Aaron Liu <aaron.liu@amd.com>
Reviewed-by: Huang Rui <ray.huang@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>