4f9264d156
The s_ack_queue is managed by two pointers into the ring: r_head_ack_queue and s_tail_ack_queue. r_head_ack_queue is the index at which the next received request is going to be placed, and s_tail_ack_queue is the entry of the request currently being processed. This works perfectly well for normal Verbs, because requests are processed one at a time and s_tail_ack_queue is not moved until the request it points to is fully completed. In this fashion, s_tail_ack_queue constantly chases r_head_ack_queue, and the two pointers can easily be used to determine the "queue full" and "queue empty" conditions. Detecting these two conditions is important in determining when an old entry can safely be overwritten with a newly received request and the resources associated with the old request can safely be released.

When pipelined TID RDMA WRITE is introduced into this mix, things look very different. r_head_ack_queue is still the point at which a newly received request will be inserted, and s_tail_ack_queue is still the currently processed request. However, with pipelined TID RDMA WRITE requests, s_tail_ack_queue moves to the next request once all TID RDMA WRITE responses for that request have been sent. The rest of the protocol for a particular request is managed by other pointers specific to TID RDMA - r_tid_tail and r_tid_ack - which point to the entry for which the next TID RDMA DATA packets are going to arrive and the request for which the next TID RDMA ACK packets are to be generated, respectively.

What this means is that entries in the ring which are "behind" s_tail_ack_queue (entries that s_tail_ack_queue has gone past) can no longer be assumed to be complete. This is where the problem is - a newly received request could potentially overwrite a still active TID RDMA WRITE request.

The reason why the TID RDMA pointers trail s_tail_ack_queue is that the normal Verbs send engine uses s_tail_ack_queue as the pointer for the next response. Since TID RDMA WRITE responses are processed by the normal Verbs send engine, s_tail_ack_queue had to be moved to the next entry once all TID RDMA WRITE response packets were sent, in order to get the desired pipelining between requests. Doing otherwise would mean that the normal Verbs send engine could not send the TID RDMA WRITE responses for the next TID RDMA request until the current one was fully completed.

This patch introduces the s_acked_ack_queue index to point to the next request to complete on the responder side. For requests other than TID RDMA WRITE, s_acked_ack_queue should always be kept in sync with s_tail_ack_queue. For TID RDMA WRITE requests, it may fall behind s_tail_ack_queue.

Reviewed-by: Mike Marciniszyn <mike.marciniszyn@intel.com>
Signed-off-by: Mitko Haralanov <mitko.haralanov@intel.com>
Signed-off-by: Kaike Wan <kaike.wan@intel.com>
Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com>
Signed-off-by: Doug Ledford <dledford@redhat.com>
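The following is a minimal, self-contained sketch of the ring indexing described above, not the hfi1 driver code. Only the index names r_head_ack_queue, s_tail_ack_queue, and s_acked_ack_queue come from the commit message; the struct layout, ring size, and helper names are illustrative assumptions.

```c
#include <stdbool.h>

#define ACK_QUEUE_SIZE 8	/* assumed power-of-two ring size */

struct ack_queue_sketch {
	unsigned int r_head_ack_queue;	/* where the next received request is placed */
	unsigned int s_tail_ack_queue;	/* request whose response is sent next */
	unsigned int s_acked_ack_queue;	/* oldest request not yet fully completed */
};

static unsigned int ring_next(unsigned int i)
{
	return (i + 1) % ACK_QUEUE_SIZE;
}

/*
 * Before this patch, "queue full"/"queue empty" were judged against
 * s_tail_ack_queue.  With pipelined TID RDMA WRITE, s_tail_ack_queue can
 * advance past requests that are still active, so a newly received
 * request could overwrite a live entry.  Judging occupancy against
 * s_acked_ack_queue keeps entries alive until they are truly complete.
 */
static bool ack_queue_full(const struct ack_queue_sketch *q)
{
	return ring_next(q->r_head_ack_queue) == q->s_acked_ack_queue;
}

static bool ack_queue_empty(const struct ack_queue_sketch *q)
{
	return q->r_head_ack_queue == q->s_acked_ack_queue;
}

/*
 * For requests other than TID RDMA WRITE the two tail indices move in
 * lock-step; a TID RDMA WRITE lets s_tail_ack_queue run ahead while
 * s_acked_ack_queue waits for the request to be fully acknowledged.
 */
static void ack_queue_advance_tail(struct ack_queue_sketch *q, bool is_tid_rdma_write)
{
	q->s_tail_ack_queue = ring_next(q->s_tail_ack_queue);
	if (!is_tid_rdma_write)
		q->s_acked_ack_queue = q->s_tail_ack_queue;
}
```

The key design point illustrated here is the decoupling of "next response to send" (s_tail_ack_queue) from "oldest request still in flight" (s_acked_ack_queue), so the send engine can pipeline TID RDMA WRITE responses without risking reuse of a still-active ring entry.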