commit 5ab8142839
Currently, all three chunk list encoders each use a portion of the one rl_segments array in rpcrdma_req. This is because the MWs for each chunk list were preserved in rl_segments so that ro_unmap could find and invalidate them after the RPC was complete.

However, now that MWs are placed on a per-req linked list as they are registered, there is no longer any information in rpcrdma_mr_seg that is shared between ro_map and ro_unmap_{sync,safe}, and thus nothing in rl_segments needs to be preserved after rpcrdma_marshal_req is complete.

Thus the rl_segments array can now be used just for the needs of each rpcrdma_convert_iovs call. Once each chunk list is encoded, the next chunk list encoder is free to re-use all of rl_segments.

This means all three chunk lists in one RPC request can now each encode a full-size data payload with no increase in the size of rl_segments. This is a key requirement for Kerberos support, since both the Call and Reply for a single RPC transaction are conveyed via Long messages (RDMA Read/Write). Both can be large.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Tested-by: Steve Wise <swise@opengridcomputing.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
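Below is a minimal, standalone C sketch (not the actual xprtrdma code) of the idea described above: because registered MWs are kept on a per-request list, each chunk list encoder can fill the segment array starting from index 0, and the next encoder is free to overwrite it. The structure names, the encode_chunk() helper, the MAX_SEGS bound, and the payload sizes are illustrative assumptions, not kernel identifiers.

```c
#include <stdio.h>
#include <stddef.h>

#define MAX_SEGS 4                        /* illustrative, not RPCRDMA_MAX_SEGS */

struct mr_seg {                           /* scratch segment descriptor */
	size_t offset;
	size_t len;
};

struct mw {                               /* stand-in for a registered MW */
	struct mw *next;                  /* linked on the per-req list */
	size_t len;
};

struct req {
	struct mr_seg segments[MAX_SEGS]; /* reused by every chunk list encoder */
	struct mw *registered;            /* per-req list consulted at unmap time */
};

/*
 * Convert a payload into segments[0..n), "register" each segment, and push
 * the resulting MW onto req->registered.  Because invalidation later walks
 * req->registered rather than segments[], the next caller may overwrite the
 * whole segments[] array.
 */
static void encode_chunk(struct req *req, const char *which, size_t payload)
{
	static struct mw pool[3 * MAX_SEGS]; /* crude MW allocator for the sketch */
	static int used;
	size_t off = 0;
	int n = 0;

	while (payload && n < MAX_SEGS) {
		size_t chunk = payload > 1024 ? 1024 : payload;

		req->segments[n].offset = off;  /* always starts at index 0 */
		req->segments[n].len = chunk;

		pool[used].len = chunk;         /* "register" the segment */
		pool[used].next = req->registered;
		req->registered = &pool[used++];

		off += chunk;
		payload -= chunk;
		n++;
	}
	printf("%s: encoded %d segment(s)\n", which, n);
}

int main(void)
{
	struct req req = { .registered = NULL };
	struct mw *m;
	int count = 0;

	/* All three chunk lists can each carry a full-size payload. */
	encode_chunk(&req, "Read list", 4096);
	encode_chunk(&req, "Write list", 4096);
	encode_chunk(&req, "Reply chunk", 4096);

	/* Later, "unmapping" walks only the per-req list, never segments[]. */
	for (m = req.registered; m; m = m->next)
		count++;
	printf("MWs to invalidate: %d\n", count);
	return 0;
}
```

In this sketch, all three encoder calls reuse the same fixed-size array while the full set of "registered" handles survives on the per-request list, mirroring why rl_segments no longer needs to grow to carry three full-size chunk lists.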
Files at this commit:

- backchannel.c
- fmr_ops.c
- frwr_ops.c
- Makefile
- module.c
- rpc_rdma.c
- svc_rdma_backchannel.c
- svc_rdma_marshal.c
- svc_rdma_recvfrom.c
- svc_rdma_sendto.c
- svc_rdma_transport.c
- svc_rdma.c
- transport.c
- verbs.c
- xprt_rdma.h