habanalabs: fix race condition in multi CS completion

A race condition occurs when a CS fence completes while its multi CS
handling has not completed yet, and the wait for multi CS ends and
returns an indication to the user that the CS completed. The next wait
for multi CS may then be triggered by the previous multi CS completion
without any of its current CSs having completed, causing an error.

Example scenario:
1. User does a multi CS wait for CSs 1 and 2 on master QID 0.

2. CS 1 and CS 2 reach the "cs release" code. The completion thread of
   CS 1 completes both the CS and the multi CS handling, while the
   completion thread of CS 2 completes the CS but has not yet executed
   complete_multi_cs (note that in CS completion the sequence is to
   first do a complete all for the CS and then another complete all to
   signal the multi_cs).

3. User receives an indication that CS 1 and 2 completed (since we
   check the CS fences and both are indicated as completed) and
   immediately waits on CS 3 and 4, also on master QID 0.

4. The completion thread of CS 2 executes complete_multi_cs before
   CS 3 and 4 complete, and so triggers the multi CS wait of CSs 3
   and 4, as they also wait on master QID 0.

This triggers a multi CS completion although none of its current CSs
has actually completed.
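
The window between the two "complete all" calls in the release flow is
exactly what this scenario hits. A minimal illustrative sketch of that
ordering follows; the struct and function names are made up for the
example and are not the driver's actual types:

#include <stdbool.h>

/* Illustrative stand-in for per-CS completion state (not the driver's
 * struct hl_fence).
 */
struct fence_sketch {
	bool cs_completed;   /* first complete all: the CS itself is done */
	bool mcs_signalled;  /* complete_multi_cs: multi CS waiters woken */
};

/* Release flow of one CS, as in step 2 of the scenario: the CS fence is
 * completed first and only afterwards is the multi CS completion
 * signalled. In the gap between the two steps, a waiter polling only
 * cs_completed can decide the CS is done and issue a new multi CS wait,
 * which the late complete_multi_cs of this CS then wakes by mistake.
 */
static void cs_release_sketch(struct fence_sketch *fence)
{
	fence->cs_completed = true;   /* complete all for the CS */
	/* <-- racy window: CS looks completed, mcs handling not done yet */
	fence->mcs_signalled = true;  /* complete all to signal the multi_cs */
}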

Fix this by adding a multi CS handling completion indication for each
CS. A CS is reported to the user as completed only if its fence has
completed and its multi CS handling is done.
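
As a rough sketch, the fixed poll logic amounts to the check below (the
real change is in the hl_cs_poll_fences() hunk further down; the struct
and helper names here are illustrative only):

#include <stdbool.h>

/* Illustrative fence state after the fix (not the driver's struct). */
struct fence_sketch_fixed {
	bool cs_completed;       /* the CS fence has signalled */
	bool mcs_handling_done;  /* complete_multi_cs finished for this CS */
};

/* A CS counts toward the multi CS completion bitmap only when both its
 * fence has completed and its multi CS handling is done, so a CS whose
 * complete_multi_cs is still pending can no longer be reported to the
 * user and later wake an unrelated multi CS wait.
 */
static bool cs_counts_as_completed(const struct fence_sketch_fixed *fence)
{
	return fence->cs_completed && fence->mcs_handling_done;
}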

Signed-off-by: Dani Liberman <dliberman@habana.ai>
Reviewed-by: Oded Gabbay <ogabbay@kernel.org>
Signed-off-by: Oded Gabbay <ogabbay@kernel.org>

@@ -143,6 +143,7 @@ static void hl_fence_init(struct hl_fence *fence, u64 sequence)
 	fence->cs_sequence = sequence;
 	fence->error = 0;
 	fence->timestamp = ktime_set(0, 0);
+	fence->mcs_handling_done = false;
 	init_completion(&fence->completion);
 }
@@ -535,10 +536,21 @@ static void complete_multi_cs(struct hl_device *hdev, struct hl_cs *cs)
 			mcs_compl->timestamp =
 				ktime_to_ns(fence->timestamp);
 			complete_all(&mcs_compl->completion);
+			/*
+			 * Setting mcs_handling_done inside the lock ensures
+			 * at least one fence has mcs_handling_done set to
+			 * true before the wait for mcs finishes. This ensures
+			 * at least one CS will be set as completed when
+			 * polling the mcs fences.
+			 */
+			fence->mcs_handling_done = true;
 		}
 		spin_unlock(&mcs_compl->lock);
 	}
+	/* In case CS completed without mcs completion initialized */
+	fence->mcs_handling_done = true;
 }

 static inline void cs_release_sob_reset_handler(struct hl_device *hdev,
@@ -2372,7 +2384,13 @@ static int hl_cs_poll_fences(struct multi_cs_data *mcs_data)
 		mcs_data->stream_master_qid_map |= fence->stream_master_qid_map;
-		if (status == CS_WAIT_STATUS_BUSY)
+		/*
+		 * Using mcs_handling_done to avoid the possibility of mcs_data
+		 * returning to the user an indication that a CS completed
+		 * before it finished all of its mcs handling, which would
+		 * cause a race the next time the user waits for mcs.
+		 */
+		if (status == CS_WAIT_STATUS_BUSY || !fence->mcs_handling_done)
 			continue;
 		mcs_data->completion_bitmap |= BIT(i);


@@ -610,6 +610,9 @@ struct asic_fixed_properties {
  * @error: mark this fence with error
  * @timestamp: timestamp upon completion
  * @take_timestamp: timestamp shall be taken upon completion
+ * @mcs_handling_done: indicates that the corresponding command submission has
+ *                     finished mcs handling; this does not mean it was part
+ *                     of the mcs
  */
 struct hl_fence {
 	struct completion completion;
@@ -619,6 +622,7 @@ struct hl_fence {
 	int error;
 	ktime_t timestamp;
 	u8 take_timestamp;
+	u8 mcs_handling_done;
 };

 /**