In the current implementation the RDMA qpair is destroyed immediately after disconnect. That is not a graceful qpair shutdown: requests may still be submitted to HW, so we may receive completions for an already destroyed/freed qpair. To avoid this, only disconnect the qpair in the ctrlr_disconnect_qpair transport callback; all other resources are released in the ctrlr_delete_io_qpair callback.

This patch is useful when nvme poll groups are used, since in that case we use a shared CQ: if the disconnected qpair still has WRs submitted to HW, the qpair's destruction is deferred to the poll group. When nvme poll groups are not used, this patch changes nothing and the destruction flow is still ungraceful. However, since the CQ is destroyed immediately after the qpair, we should not receive any completions that point to released resources. A correct solution for the non-poll-group case requires an async disconnect API, which may lead to significant rework.

There is a bug when Soft-RoCE is used: we may receive a completion with "normal" status when the qpair is already disconnected and all nvme requests have been aborted. Added a workaround for it.

Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I0680d9ef9aaa8737d7a6d1454cd70a384bb8efac
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10327
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Shuhei Matsumoto <shuheimatsumoto@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
---
 ..
 Makefile
 nvme_rdma_ut.c