nvmf/tcp: execute buffer allocation only if the request is the first of the pending requests
The RDMA transport executes spdk_nvmf_rdma_request_parse_sgl() only when the request is the first of the pending requests, in the RDMA_REQUEST_STATE_NEED_BUFFER case of the state machine spdk_nvmf_rdma_request_process(). This allowed the RDMA transport to use an STAILQ for the pending requests, because STAILQ_REMOVE traverses from the head and is slow when the target element is in the middle of the STAILQ.

The TCP transport, on the other hand, executes spdk_nvmf_tcp_req_parse_sgl() even when the request is in the middle of the pending requests, in the TCP_REQUEST_STATE_NEED_BUFFER case of the state machine spdk_nvmf_tcp_req_process(), if the request has in-capsule data. Hence the TCP transport has used a TAILQ for its pending requests.

This patch removes the in-capsule-data condition from the TCP_REQUEST_STATE_NEED_BUFFER case. The purpose of this patch is to unify I/O buffer management further. No performance degradation was observed after this patch.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Idc97fe20f7013ca66fd58587773edb81ef7cbbfc
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466636
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
parent 0f73c253b5
commit 8a80461ac6
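To illustrate the queueing pattern this change unifies on, the following is a minimal, self-contained C sketch, not the SPDK source; all names in it are illustrative. A request in the "need buffer" state is allowed to allocate a buffer only when it is the first element of the pending queue. Because requests then always leave from the head, a singly-linked STAILQ with O(1) STAILQ_REMOVE_HEAD is sufficient, whereas letting requests leave from the middle of the queue (as the old in-capsule-data exception did) would require a TAILQ or an O(n) STAILQ_REMOVE.

/*
 * Sketch only: requests are served strictly from the head of the
 * pending queue, so a singly-linked STAILQ is enough.
 */
#include <stdbool.h>
#include <stdio.h>
#include <sys/queue.h>

struct req {
	int id;
	STAILQ_ENTRY(req) link;
};

STAILQ_HEAD(req_queue, req);

/* Pretend buffer pool: only this many requests can get a buffer. */
static int free_buffers = 2;

static bool
try_alloc_buffer(struct req *r)
{
	if (free_buffers == 0) {
		return false;
	}
	free_buffers--;
	printf("req %d got a buffer\n", r->id);
	return true;
}

/* Called once per request event; 'r' may or may not be at the head. */
static void
handle_need_buffer(struct req_queue *q, struct req *r)
{
	/* The unified rule: anything behind the head simply waits. */
	if (r != STAILQ_FIRST(q)) {
		printf("req %d is not first in the pending queue, waiting\n", r->id);
		return;
	}
	if (try_alloc_buffer(r)) {
		/* Removal is always from the head, which is O(1) for an STAILQ. */
		STAILQ_REMOVE_HEAD(q, link);
	} else {
		printf("req %d stays at the head until a buffer frees up\n", r->id);
	}
}

int
main(void)
{
	struct req_queue q = STAILQ_HEAD_INITIALIZER(q);
	struct req reqs[3] = { { .id = 0 }, { .id = 1 }, { .id = 2 } };
	int i;

	for (i = 0; i < 3; i++) {
		STAILQ_INSERT_TAIL(&q, &reqs[i], link);
	}
	for (i = 0; i < 3; i++) {
		handle_need_buffer(&q, &reqs[i]);
	}
	return 0;
}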
@@ -2568,8 +2568,7 @@ spdk_nvmf_tcp_req_process(struct spdk_nvmf_tcp_transport *ttransport,
 
 			assert(tcp_req->req.xfer != SPDK_NVME_DATA_NONE);
 
-			if (!tcp_req->has_incapsule_data &&
-			    (tcp_req != TAILQ_FIRST(&tqpair->group->pending_data_buf_queue))) {
+			if (tcp_req != TAILQ_FIRST(&tqpair->group->pending_data_buf_queue)) {
 				SPDK_DEBUGLOG(SPDK_LOG_NVMF_TCP,
 					      "Not the first element to wait for the buf for tcp_req(%p) on tqpair=%p\n",
 					      tcp_req, tqpair);