tcp: Fix the no-tcp_req issue when using async writev

Purpose: When asynchronous writev is used for PDU sending,
the writev callback may fire after new data has already
arrived, which means a free tcp request may not be
available at that moment. To handle this, we check the
requests in the TCP_REQUEST_STATE_TRANSFERRING_CONTROLLER_TO_HOST
state.

The strategy is to check the state_cntr of all the
requests in the TCP_REQUEST_STATE_TRANSFERRING_CONTROLLER_TO_HOST
state.
1 If the state_cntr > 0, we should queue the new request
and let the allocation be retried later.
2 If the state_cntr == 0, there is no available slot for
the new tcp request, i.e., the new NVMe command coming
from the initiator. In this case the initiator has sent
more requests than the maximum queue depth, and we should
reject it.

Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Ifbeb510e669082cb7b80faf2e7987075af31d176
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/472912
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Author: Ziye Yang, 2019-10-31 22:52:49 +08:00 (committed by Jim Harris)
parent e19fd311fc
commit 08273e77de


@@ -1365,6 +1365,12 @@ spdk_nvmf_tcp_capsule_cmd_hdr_handle(struct spdk_nvmf_tcp_transport *ttransport,
	tcp_req = spdk_nvmf_tcp_req_get(tqpair);
	if (!tcp_req) {
		/* Directly return and make the allocation retry again */
		if (tqpair->state_cntr[TCP_REQUEST_STATE_TRANSFERRING_CONTROLLER_TO_HOST] > 0) {
			return;
		}

		/* The host sent more commands than the maximum queue depth. */
		SPDK_ERRLOG("Cannot allocate tcp_req\n");
		tqpair->state = NVME_TCP_QPAIR_STATE_EXITING;
		spdk_nvmf_tcp_qpair_set_recv_state(tqpair, NVME_TCP_PDU_RECV_STATE_ERROR);
@@ -2039,6 +2045,13 @@ spdk_nvmf_tcp_sock_process(struct spdk_nvmf_tcp_qpair *tqpair)
			break;
		/* Wait for the pdu specific header */
		case NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_PSH:
			/* Handle the case where the PSH has already been read but the
			 * capsule command is not yet tied to an nvmf tcp request. */
			if (spdk_unlikely((pdu->psh_valid_bytes == pdu->psh_len) &&
					  (pdu->hdr->common.pdu_type == SPDK_NVME_TCP_PDU_TYPE_CAPSULE_CMD))) {
				spdk_nvmf_tcp_capsule_cmd_hdr_handle(ttransport, tqpair, pdu);
				break;
			}

			if (!tqpair->pdu_recv_buf.remain_size) {
				rc = nvme_tcp_recv_buf_read(tqpair->sock, &tqpair->pdu_recv_buf);
				if (rc <= 0) {