drm/amdgpu: rework dma_resv handling v3
Drop the workaround and instead implement a better solution.

Basically we are now chaining all submissions using a dma_fence_chain
container and adding them as exclusive fence to the dma_resv object.
This way other drivers can still sync to the single exclusive fence
while amdgpu only syncs to fences from different processes.

v3: add the shared fence first before the exclusive one

Signed-off-by: Christian König <christian.koenig@amd.com>
Reviewed-by: Alex Deucher <alexander.deucher@amd.com>
Acked-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20210614174536.5188-2-christian.koenig@amd.com
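The chaining scheme described above can be sketched roughly as follows. This is a non-compilable illustration, not the patch itself: it uses the dma_resv/dma_fence_chain kernel APIs of this era, but the `bo` and `fence` variables (the reserved amdgpu buffer object and the new submission fence) are stand-ins for the surrounding driver code, and error paths are omitted.

```c
/* Sketch only: chain a new submission fence into the BO's dma_resv.
 * Assumes "bo" is a reserved amdgpu BO and "fence" the new job fence;
 * a shared-fence slot must already have been reserved on the resv.
 */
struct dma_fence_chain *chain;

chain = kmalloc(sizeof(*chain), GFP_KERNEL);
if (!chain)
	return -ENOMEM;

/* v3 ordering: publish the fence as a shared fence first ... */
dma_resv_add_shared_fence(bo->tbo.base.resv, fence);

/* ... then link it behind the previous exclusive fence and install the
 * chain as the new exclusive fence, so other drivers that sync only to
 * the exclusive slot still wait for every chained submission.
 */
dma_fence_chain_init(chain, dma_resv_excl_fence(bo->tbo.base.resv),
		     dma_fence_get(fence), 1);
dma_resv_add_excl_fence(bo->tbo.base.resv, &chain->base);
```

Because the shared fence is added before the exclusive chain, a reader never observes a resv state where the submission is missing from both slots.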
@@ -829,7 +829,8 @@ int amdgpu_gem_op_ioctl(struct drm_device *dev, void *data,
 		break;
 	}
 	case AMDGPU_GEM_OP_SET_PLACEMENT:
-		if (robj->prime_shared_count && (args->value & AMDGPU_GEM_DOMAIN_VRAM)) {
+		if (robj->tbo.base.import_attach &&
+		    args->value & AMDGPU_GEM_DOMAIN_VRAM) {
 			r = -EINVAL;
 			amdgpu_bo_unreserve(robj);
 			break;