nvmf/rdma: use LIFO practice for incoming queue

To maximize cache locality, use LIFO rather than FIFO when managing objects
that are used per IO, such as the RDMA receive elements queue.

Reported-by: Jim Harris <james.r.harris@intel.com>
Signed-off-by: Or Gerlitz <ogerlitz@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12272 (master)

(cherry picked from commit 5edb8edca7)
Change-Id: Id8917558acc1bec29943fcbae6afe6b072bde6ac
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12484
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This commit is contained in:
Or Gerlitz 2022-03-09 17:43:37 +02:00 committed by Keith Lucas
parent 033d916339
commit ca83335c0e

@@ -3886,7 +3886,7 @@ nvmf_rdma_poller_poll(struct spdk_nvmf_rdma_transport *rtransport,
rqpair->current_recv_depth++;
rdma_recv->receive_tsc = poll_tsc;
rpoller->stat.requests++;
-		STAILQ_INSERT_TAIL(&rqpair->resources->incoming_queue, rdma_recv, link);
+		STAILQ_INSERT_HEAD(&rqpair->resources->incoming_queue, rdma_recv, link);
break;
case RDMA_WR_TYPE_DATA:
rdma_req = SPDK_CONTAINEROF(rdma_wr, struct spdk_nvmf_rdma_request, data.rdma_wr);