nvmf/tcp: fix the state machine issue when data has already been read

Since we use a big buffer to read data from the socket, the incoming
data may already have been read by the time the request is waiting
for a buffer. With the original state machine, no further read event
will be generated for such a request, so it stalls.

The quick solution is to restore the original behavior: a request
that carries in-capsule data does not need to wait for a buffer from
the shared buffer pool, so it must not be held behind other requests
in the pending buffer queue.
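
As a minimal sketch of the intended control flow (the field and queue
names below match the diff; the helper function itself is illustrative
and not part of this patch), a request carrying in-capsule data is
allowed to proceed even when it is not at the head of the shared-buffer
wait queue:

    /* Illustrative helper, not part of the patch: decides whether a request
     * in TCP_REQUEST_STATE_NEED_BUFFER may try to get a buffer now. */
    static bool
    nvmf_tcp_req_may_get_buffer(struct spdk_nvmf_tcp_req *tcp_req,
                                struct spdk_nvmf_transport_poll_group *group)
    {
        if (tcp_req->has_incapsule_data) {
            /* The data arrived together with the command capsule and has
             * already been read from the socket, so no further read event
             * will be generated for this request. It must not wait behind
             * other requests in pending_buf_queue. */
            return true;
        }

        /* Requests that need buffers from the shared pool are served in
         * FIFO order: only the head of pending_buf_queue may proceed. */
        return &tcp_req->req == STAILQ_FIRST(&group->pending_buf_queue);
    }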

Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Ib195d57cc2969235203c34664115c3322d1c9eae
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/472047
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Ziye Yang 2019-10-24 11:28:59 +08:00 committed by Jim Harris
parent faedb24952
commit 2ec99adad9

@@ -2546,7 +2546,7 @@ spdk_nvmf_tcp_req_process(struct spdk_nvmf_tcp_transport *ttransport,
 	assert(tcp_req->req.xfer != SPDK_NVME_DATA_NONE);
-	if (&tcp_req->req != STAILQ_FIRST(&group->pending_buf_queue)) {
+	if (!tcp_req->has_incapsule_data && (&tcp_req->req != STAILQ_FIRST(&group->pending_buf_queue))) {
 		SPDK_DEBUGLOG(SPDK_LOG_NVMF_TCP,
 			      "Not the first element to wait for the buf for tcp_req(%p) on tqpair=%p\n",
 			      tcp_req, tqpair);
@@ -2572,7 +2572,7 @@ spdk_nvmf_tcp_req_process(struct spdk_nvmf_tcp_transport *ttransport,
 		break;
 	}
-	STAILQ_REMOVE_HEAD(&group->pending_buf_queue, buf_link);
+	STAILQ_REMOVE(&group->pending_buf_queue, &tcp_req->req, spdk_nvmf_request, buf_link);
 	/* If data is transferring from host to controller, we need to do a transfer from the host. */
 	if (tcp_req->req.xfer == SPDK_NVME_DATA_HOST_TO_CONTROLLER) {
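
The second hunk follows from the first: once an in-capsule request may be
processed while it is not at the head of pending_buf_queue, blindly removing
the head could unlink the wrong request, so the specific element has to be
removed. A minimal standalone sketch of the difference between the two
queue macros (standard <sys/queue.h> STAILQ API; the struct below is
hypothetical, only the buf_link field name is taken from the diff):

    #include <sys/queue.h>

    struct req {
        STAILQ_ENTRY(req) buf_link;   /* same link field name as in the diff */
    };

    STAILQ_HEAD(req_queue, req);

    static void
    remove_waiting_req(struct req_queue *q, struct req *r)
    {
        /* STAILQ_REMOVE_HEAD(q, buf_link) is only correct if r is guaranteed
         * to be the first element. STAILQ_REMOVE() walks the list and unlinks
         * r wherever it sits, which is what the new code needs. */
        STAILQ_REMOVE(q, r, req, buf_link);
    }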