commit 655fec6987
An RPC Call message that is sent inline but that has a data payload
(ie, one or more items in rq_snd_buf's page list) must be "pulled up:"

- call_allocate has to reserve enough RPC Call buffer space to
  accommodate the data payload

- call_transmit has to memcopy the rq_snd_buf's page list and tail
  into its head iovec before it is sent

As the inline threshold is increased beyond its current 1KB default,
however, this means data payloads of more than a few KB are copied by
the host CPU. For example, if the inline threshold is increased just
to 4KB, then NFS WRITE requests up to 4KB would involve a memcpy of
the NFS WRITE's payload data into the RPC Call buffer. This is an
undesirable amount of participation by the host CPU.

The inline threshold may be much larger than 4KB in the future, after
negotiation with a peer server.

Instead of copying the components of rq_snd_buf into its head iovec,
construct a gather list of these components, and send them all in
place. The same approach is already used in the Linux server's
RPC-over-RDMA reply path.

This mechanism also eliminates the need for rpcrdma_tail_pullup,
which is used to manage the XDR pad and trailing inline content when
a Read list is present.

This requires that the pages in rq_snd_buf's page list be DMA-mapped
during marshaling, and unmapped when a data-bearing RPC is completed.
This is slightly less efficient for very small I/O payloads, but
significantly more efficient as data payload size and inline
threshold increase past a kilobyte.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
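As a rough illustration only (this is not the code added by the
commit), the sketch below shows the general shape of the gather-list
approach the message describes: instead of memcpying rq_snd_buf's
page list into the head iovec, each component is DMA-mapped in place
and described by its own SGE in the Send work request. The function
name build_send_gather_list and the device, lkey, and sges parameters
are assumptions for the example; the real transport also covers the
tail iovec and enforces a cap on the number of SGEs per Send.

    #include <rdma/ib_verbs.h>
    #include <linux/sunrpc/xdr.h>

    /*
     * Sketch: build a gather list covering the head iovec and the
     * page list of an outgoing xdr_buf. Each payload page is mapped
     * for DMA where it sits, rather than being copied into the head.
     * Assumes "sges" has room for one head SGE plus one SGE per page,
     * and "lkey" is a local DMA lkey valid for this device.
     */
    static int build_send_gather_list(struct ib_device *device, u32 lkey,
    				      struct xdr_buf *xdr,
    				      struct ib_sge *sges)
    {
    	unsigned int remaining = xdr->page_len;
    	unsigned int page_base = offset_in_page(xdr->page_base);
    	struct page **ppages = xdr->pages + (xdr->page_base >> PAGE_SHIFT);
    	int count = 0;

    	/* SGE 0: the head iovec (RPC header plus any inline data) */
    	sges[count].addr = ib_dma_map_single(device, xdr->head[0].iov_base,
    					     xdr->head[0].iov_len,
    					     DMA_TO_DEVICE);
    	if (ib_dma_mapping_error(device, sges[count].addr))
    		return -EIO;
    	sges[count].length = xdr->head[0].iov_len;
    	sges[count].lkey = lkey;
    	count++;

    	/* One SGE per page-list element: map each page in place */
    	while (remaining) {
    		unsigned int len = min_t(unsigned int, remaining,
    					 PAGE_SIZE - page_base);

    		sges[count].addr = ib_dma_map_page(device, *ppages,
    						   page_base, len,
    						   DMA_TO_DEVICE);
    		if (ib_dma_mapping_error(device, sges[count].addr))
    			return -EIO;	/* caller unmaps what was mapped */
    		sges[count].length = len;
    		sges[count].lkey = lkey;
    		count++;

    		remaining -= len;
    		page_base = 0;
    		ppages++;
    	}

    	return count;	/* becomes ib_send_wr.num_sge for the Send */
    }

The resulting array would be attached to the Send work request
(ib_send_wr.sg_list and .num_sge, opcode IB_WR_SEND), and each
mapping torn down with ib_dma_unmap_single()/ib_dma_unmap_page()
once the data-bearing RPC completes, matching the commit's note that
pages are mapped during marshaling and unmapped at completion.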
net/sunrpc/xprtrdma/:
  backchannel.c
  fmr_ops.c
  frwr_ops.c
  Makefile
  module.c
  rpc_rdma.c
  svc_rdma_backchannel.c
  svc_rdma_marshal.c
  svc_rdma_recvfrom.c
  svc_rdma_sendto.c
  svc_rdma_transport.c
  svc_rdma.c
  transport.c
  verbs.c
  xprt_rdma.h