RDMA/bnxt_re: use shadow qd while posting non blocking rcfw command

Whenever fast path I/O runs in parallel with resource create/destroy
commands on the slow path, slow path command completion can show
high latency.

Introduce a shadow queue depth to cap the number of requests outstanding
to the FW. The driver will not allow more than RCFW_CMD_NON_BLOCKING_SHADOW_QD
non-blocking commands to be outstanding with the firmware at once.

Shadow queue depth is a soft limit only for non-blocking
commands. Blocking commands will be posted to the firmware
as long as there is a free slot.
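
The gate is a counting semaphore initialized to the shadow queue depth.
Below is a minimal sketch of the pattern for illustration only: struct
rcfw_channel, rcfw_send() and post_to_fw() are hypothetical stand-ins,
while sema_init()/down()/up() and RCFW_CMD_NON_BLOCKING_SHADOW_QD match
the actual patch that follows.

#include <linux/types.h>
#include <linux/semaphore.h>

#define RCFW_CMD_NON_BLOCKING_SHADOW_QD	64

/* Hypothetical stand-in for the real rcfw channel state */
struct rcfw_channel {
	struct semaphore inflight;	/* counts free shadow-queue slots */
};

/* Hypothetical: post one command and wait for its completion */
static int post_to_fw(struct rcfw_channel *ch, void *msg);

static void rcfw_channel_init(struct rcfw_channel *ch)
{
	/* Start with the full shadow queue depth worth of free slots */
	sema_init(&ch->inflight, RCFW_CMD_NON_BLOCKING_SHADOW_QD);
}

static int rcfw_send(struct rcfw_channel *ch, void *msg, bool block)
{
	int ret;

	if (!block) {
		/* Non-blocking command: sleep until a shadow slot frees
		 * up, hold it for the lifetime of the command, release
		 * it on return.
		 */
		down(&ch->inflight);
		ret = post_to_fw(ch, msg);
		up(&ch->inflight);
	} else {
		/* Blocking command: the soft limit does not apply; only
		 * a genuinely full hardware queue can stall it.
		 */
		ret = post_to_fw(ch, msg);
	}

	return ret;
}

A counting semaphore fits this kind of soft limit well: each non-blocking
sender consumes one slot for the lifetime of its command, so at most 64
such commands are in flight and a 65th sender simply sleeps until a slot
is released, while blocking senders bypass the gate entirely.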

Signed-off-by: Kashyap Desai <kashyap.desai@broadcom.com>
Signed-off-by: Selvin Xavier <selvin.xavier@broadcom.com>
Link: https://lore.kernel.org/r/1686308514-11996-8-git-send-email-selvin.xavier@broadcom.com
Signed-off-by: Leon Romanovsky <leon@kernel.org>

diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.c

@@ -281,8 +281,21 @@ done:
 	return 0;
 }
 
-int bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
-				 struct bnxt_qplib_cmdqmsg *msg)
+/**
+ * __bnxt_qplib_rcfw_send_message - qplib interface to send
+ * and complete rcfw command.
+ * @rcfw: rcfw channel instance of rdev
+ * @msg: qplib message internal
+ *
+ * This function does not account for the shadow queue depth. It will send
+ * every command unconditionally as long as the send queue is not full.
+ *
+ * Returns:
+ * 0 if the command is completed by the firmware.
+ * Non-zero if the command is not completed by the firmware.
+ */
+static int __bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
+					  struct bnxt_qplib_cmdqmsg *msg)
 {
 	struct creq_qp_event *evnt = (struct creq_qp_event *)msg->resp;
 	u16 cookie;
@@ -331,6 +344,48 @@ int bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
 	return rc;
 }
 
+/**
+ * bnxt_qplib_rcfw_send_message - qplib interface to send
+ * and complete rcfw command.
+ * @rcfw: rcfw channel instance of rdev
+ * @msg: qplib message internal
+ *
+ * The driver interacts with the firmware through the rcfw channel/slow path
+ * in two ways:
+ * a. Blocking rcfw command send. On this path, the driver cannot hold
+ *    the context for a long period, since it holds the cpu until the
+ *    command completes.
+ * b. Non-blocking rcfw command send. On this path, the driver can hold the
+ *    context for a long period. There may be many commands pending
+ *    completion because of the non-blocking nature.
+ *
+ * The driver uses a shadow queue depth. The current queue depth of 8K
+ * (due to the size of the rcfw message, only ~4K rcfw commands can actually
+ * be outstanding) is not optimal for rcfw command processing in the firmware.
+ *
+ * Restrict to at most RCFW_CMD_NON_BLOCKING_SHADOW_QD non-blocking rcfw
+ * commands. Allow all blocking commands as long as the queue is not full.
+ *
+ * Returns:
+ * 0 if the command is completed by the firmware.
+ * Non-zero if the command is not completed by the firmware.
+ */
+int bnxt_qplib_rcfw_send_message(struct bnxt_qplib_rcfw *rcfw,
+				 struct bnxt_qplib_cmdqmsg *msg)
+{
+	int ret;
+
+	/* Gate non-blocking commands on the shadow queue depth */
+	if (!msg->block) {
+		down(&rcfw->rcfw_inflight);
+		ret = __bnxt_qplib_rcfw_send_message(rcfw, msg);
+		up(&rcfw->rcfw_inflight);
+	} else {
+		ret = __bnxt_qplib_rcfw_send_message(rcfw, msg);
+	}
+
+	return ret;
+}
+
 /* Completions */
 static int bnxt_qplib_process_func_event(struct bnxt_qplib_rcfw *rcfw,
 					 struct creq_func_event *func_event)
@@ -932,6 +987,7 @@ int bnxt_qplib_enable_rcfw_channel(struct bnxt_qplib_rcfw *rcfw,
 		return rc;
 	}
 
+	sema_init(&rcfw->rcfw_inflight, RCFW_CMD_NON_BLOCKING_SHADOW_QD);
 	bnxt_qplib_start_rcfw(rcfw);
 
 	return 0;

diff --git a/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h b/drivers/infiniband/hw/bnxt_re/qplib_rcfw.h

@@ -66,6 +66,8 @@ static inline void bnxt_qplib_rcfw_cmd_prep(struct cmdq_base *req,
 	req->cmd_size = cmd_size;
 }
 
+/* Shadow queue depth for non-blocking commands */
+#define RCFW_CMD_NON_BLOCKING_SHADOW_QD	64
 #define RCFW_CMD_WAIT_TIME_MS		20000 /* 20 Seconds timeout */
 
 /* CMDQ elements */
@@ -197,6 +199,7 @@ struct bnxt_qplib_rcfw {
 	u64 oos_prev;
 	u32 init_oos_stats;
 	u32 cmdq_depth;
+	struct semaphore rcfw_inflight;
 };
 
 struct bnxt_qplib_cmdqmsg {