The remaining timeout returned by amdgpu_fence_wait_any() can be larger
than the maximum int value, so the truncated 32-bit value in r ends up
negative while its original long value is positive.
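For illustration, a minimal userspace sketch of the failure mode
(assuming a 64-bit long, as on LP64 kernels; names are hypothetical):

#include <limits.h>
#include <stdio.h>

int main(void)
{
        /* a remaining timeout in jiffies that is valid but > INT_MAX */
        long remaining = (long)INT_MAX + 1;
        /* truncation into a 32-bit int flips the sign */
        int r = (int)remaining;

        if (r < 0)
                printf("positive timeout %ld misread as error %d\n",
                       remaining, r);
        return 0;
}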
Signed-off-by: monk.liu <monk.liu@amd.com>
Reviewed-by: Michel Dänzer <michel.daenzer@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jammy Zhou <jammy.zhou@amd.com>
Fix the fence sometimes being released while it is passed back through
**fence. Add a reference for it.
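A rough sketch of the pattern (names here are hypothetical, not the
exact driver code): take the reference before publishing the fence
through the caller's **fence parameter, so the submission path dropping
its own reference cannot free the fence underneath the caller.

#include <linux/fence.h>

struct example_job {
        struct fence *fence;    /* reference owned by the submission path */
};

static int example_push_job(struct example_job *job, struct fence **f)
{
        /* grab an extra reference for the caller *before* handing the
         * job off; the job's own reference may be dropped as soon as
         * the hardware is done with it */
        if (f)
                *f = fence_get(job->fence);
        /* ... submit the job ... */
        return 0;
}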
Signed-off-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Tested-and-Reviewed-by: Leo Liu <leo.liu@amd.com>
Not used any more.
v2: remove amd_sched_emit as well.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Similar to radeon, except that amdgpu doesn't even use struct_mutex to
protect anything like the shared z buffer (sane gpu architecture,
yay!). And the code already grabs the global adev->ring_lock, so this
code can't race with itself. Which makes struct_mutex completely
redundant. Remove it.
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: "Christian König" <christian.koenig@amd.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
It really doesn't protect anything which doesn't have other locks
already. Also, this is run from driver unload code, so there's not much
need for locks anyway.
Same changes as for radeon really.
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: "Christian König" <christian.koenig@amd.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
We already grab 2 device-global locks (write-sema rdev->pm.mclk_lock
and rdev->ring_lock), adding another global mutex won't serialize this
code more. And since there's really nothing interesting that gets
protected in radeon by dev->struct_mutex (we only have the global z
buffer owners, and it's still serializing gem bo destruction in the drm
core - which is irrelevant since radeon uses ttm internally anyway)
this doesn't add protection. Remove it.
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: "Christian König" <christian.koenig@amd.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
It really doesn't protect anything which doesn't have other locks
already. Also, this is run from driver unload code, so there's not much
need for locks anyway.
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: "Christian König" <christian.koenig@amd.com>
Reviewed-by: Thierry Reding <treding@nvidia.com>
Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
The workaround simply doesn't work because VM mappings
are controlled by userspace, not the kernel.
Additionally, this is just a performance problem
which happens if you have holes in your VM mapping.
v2: adjust virtual addr alignment as well.
v3: fix trivial warning
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Monk Liu <monk.liu@amd.com> (v1)
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com> (v2)
Looks like that somehow got missed while porting the radeon changes.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
It was just a wrapper for fence_wait anyway.
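Illustrative only (assuming the driver fence embeds a struct fence as
"base", as amdgpu did at the time): callers can use the common helper
directly.

#include <linux/fence.h>

struct example_fence {
        struct fence base;      /* kernel fence embedded in the driver fence */
};

/* what the removed wrapper boiled down to */
static long example_wait(struct example_fence *fence, bool intr)
{
        return fence_wait(&fence->base, intr);  /* 0 or negative error */
}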
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
The common kernel function does the same thing.
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
The scheduler fence is based on the kernel fence framework.
v2: squash in Christian's build fix
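As a rough sketch of what building on the framework involves
(illustrative names, using the pre-dma_fence struct fence API of that
era): embed a struct fence, supply fence_ops, and initialize it with a
context and sequence number.

#include <linux/fence.h>
#include <linux/slab.h>
#include <linux/spinlock.h>

struct example_sched_fence {
        struct fence base;      /* the embedded kernel fence */
        spinlock_t lock;        /* protects fence state */
};

static const char *example_drv_name(struct fence *f)
{
        return "example_sched";
}

static const char *example_timeline_name(struct fence *f)
{
        return "example_timeline";
}

static bool example_enable_signaling(struct fence *f)
{
        return true;    /* the scheduler signals the fence itself */
}

static const struct fence_ops example_fence_ops = {
        .get_driver_name = example_drv_name,
        .get_timeline_name = example_timeline_name,
        .enable_signaling = example_enable_signaling,
        .wait = fence_default_wait,
};

static struct example_sched_fence *example_fence_create(unsigned context,
                                                        unsigned seq)
{
        struct example_sched_fence *fence;

        fence = kzalloc(sizeof(*fence), GFP_KERNEL);
        if (!fence)
                return NULL;

        spin_lock_init(&fence->lock);
        fence_init(&fence->base, &example_fence_ops, &fence->lock,
                   context, seq);
        return fence;
}

Waiters then operate on &fence->base like on any other kernel fence.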
Signed-off-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
v2: rebased
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com> (v1)
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Avoiding a couple of casts.
v2: rename c_entity to entity as well
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
Clean up the kernel context handling.
v2: rebased
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com> (v1)
IDs are for the IOCTL ABI only.
v2: remove tgid as well
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
We didn't initialize the mutex in the cloned bo list, resulting in nice
warnings from lockdep. Also fix the error handling in this function.
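A simplified sketch of both fixes (hypothetical names): the clone gets
its own mutex_init(), and the error path unwinds the allocation.

#include <linux/err.h>
#include <linux/mutex.h>
#include <linux/slab.h>

struct example_bo_list {
        struct mutex lock;
        unsigned num_entries;
        void **entries;
};

static struct example_bo_list *
example_bo_list_clone(struct example_bo_list *src)
{
        struct example_bo_list *clone;

        clone = kzalloc(sizeof(*clone), GFP_KERNEL);
        if (!clone)
                return ERR_PTR(-ENOMEM);

        /* the missing piece: the clone needs its own initialized
         * mutex, otherwise lockdep rightfully complains */
        mutex_init(&clone->lock);

        mutex_lock(&src->lock);
        clone->entries = kmemdup(src->entries,
                                 src->num_entries * sizeof(*src->entries),
                                 GFP_KERNEL);
        clone->num_entries = src->num_entries;
        mutex_unlock(&src->lock);

        if (!clone->entries) {
                /* error handling: free the clone instead of leaking it */
                kfree(clone);
                return ERR_PTR(-ENOMEM);
        }
        return clone;
}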
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Chunming Zhou <david1.zhou@amd.com>
This way we can avoid lost interrupts and process sched jobs exactly
when they complete.
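A hedged sketch of the mechanism (illustrative names): registering a
callback on the fence means the wake-up is driven by the fence
signaling itself, so it cannot be lost the way a separately delivered
interrupt can.

#include <linux/fence.h>
#include <linux/kernel.h>
#include <linux/wait.h>

struct example_sched {
        wait_queue_head_t job_done;
};

struct example_job {
        struct example_sched *sched;
        struct fence_cb cb;
};

static void example_job_done_cb(struct fence *f, struct fence_cb *cb)
{
        struct example_job *job = container_of(cb, struct example_job, cb);

        /* runs exactly once when the fence signals */
        wake_up_interruptible(&job->sched->job_done);
}

static void example_watch_job(struct example_job *job, struct fence *hw_fence)
{
        /* -ENOENT means the fence already signaled: process immediately */
        if (fence_add_callback(hw_fence, &job->cb, example_job_done_cb))
                wake_up_interruptible(&job->sched->job_done);
}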
Signed-off-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Jammy Zhou <Jammy.Zhou@amd.com>
This function is used to get the next queued sequence number.
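Roughly (a sketch, assuming the per-entity counter is a 64-bit atomic;
names are illustrative):

#include <linux/atomic.h>
#include <linux/types.h>

struct example_entity {
        atomic64_t last_queued_v_seq;   /* last sequence number handed out */
};

/* the sequence number the next queued job will receive */
static u64 example_next_queued_seq(struct example_entity *entity)
{
        return atomic64_read(&entity->last_queued_v_seq) + 1;
}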
Signed-off-by: Jammy Zhou <Jammy.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Jammy Zhou <Jammy.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
This function updates last_emitted_v_seq and wakes up the waiters.
It should be called by the driver in the run_job backend function.
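Sketched roughly (illustrative names, assuming waiters sleep on a
per-entity wait queue):

#include <linux/atomic.h>
#include <linux/types.h>
#include <linux/wait.h>

struct example_entity {
        atomic64_t last_emitted_v_seq;
        wait_queue_head_t wait_emit;    /* waiters for emitted seq numbers */
};

/* called from the driver's run_job backend once the job is on the ring */
static void example_set_emitted_seq(struct example_entity *entity, u64 seq)
{
        atomic64_set(&entity->last_emitted_v_seq, seq);
        wake_up_all(&entity->wait_emit);
}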
Signed-off-by: Jammy Zhou <Jammy.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
It is cleaner to update last_queued_v_seq in the scheduler module.
Signed-off-by: Jammy Zhou <Jammy.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Fix the code alignment, etc.
v2: rebase the code
Signed-off-by: Jammy Zhou <Jammy.Zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Every submission should be able to get a fence.
Signed-off-by: Chunming Zhou <david1.zhou@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Jammy Zhou <jammy.zhou@amd.com>
It is theoretically possible that a swapped out BO gets the
same GTT address, but different backing pages while being swapped in.
Instead just use another VA state to note updated areas.
Ported from the not-yet-upstream radeon commit with the same name.
v2: fix some bugs in the original implementation found in the radeon code.
v3: squash in VCE/UVD fix
Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Thus unnecessary wake-ups between rings can be avoided.
v2: move wait_queue_head from the ring into fence_drv
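Sketch of the v2 layout (field names are illustrative): the wait queue
lives in each ring's fence driver, so a fence signaling on one ring
wakes only that ring's waiters.

#include <linux/types.h>
#include <linux/wait.h>

/* simplified per-ring fence driver state */
struct example_fence_drv {
        u64 sync_seq;                   /* last emitted sequence number */
        wait_queue_head_t fence_queue;  /* waiters on this ring only */
};

static void example_fence_process(struct example_fence_drv *drv)
{
        /* other rings' waiters keep sleeping */
        wake_up_all(&drv->fence_queue);
}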
Signed-off-by: monk.liu <monk.liu@amd.com>
Reviewed-by: Christian König <christian.koenig@amd.com>