nvme: add spdk_nvme_poll_group_all_connected

Performance tools such as nvme-perf may want to
create a large number of qpairs to measure scaling,
and set async_mode = true to amortize the
connection cost across the group of connections.

But we don't want qpairs to still be connecting
in the background while we are doing I/O.  So add
a new API, spdk_nvme_poll_group_all_connected, to
check whether all of the qpairs in a poll group
are connected.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I109f9ee96b6d6d3263e20dc2d3b3e11a475d246d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17637
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Author:    Jim Harris <james.r.harris@intel.com>
Date:      2023-04-19 20:32:14 +00:00
Committer: David Ko
commit 31f126b46c (parent 6335d88c8a)
3 changed files, 49 insertions(+), 0 deletions(-)
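
The intended startup flow looks like the following minimal sketch. It is
not part of this commit; it assumes a poll group was already created and
populated with async qpairs via spdk_nvme_poll_group_add(), and the helper
names are hypothetical:

#include "spdk/nvme.h"
#include <errno.h>

/* spdk_nvme_poll_group_process_completions() requires a disconnect
 * callback; nothing is expected to disconnect during startup, so this
 * is a no-op.
 */
static void
startup_disconnect_cb(struct spdk_nvme_qpair *qpair, void *poll_group_ctx)
{
}

static int
wait_for_all_connected(struct spdk_nvme_poll_group *group)
{
	int rc;

	do {
		/* Drive the async connection state machines forward. */
		spdk_nvme_poll_group_process_completions(group, 0, startup_disconnect_cb);
		rc = spdk_nvme_poll_group_all_connected(group);
	} while (rc == -EAGAIN);

	/* 0: all qpairs connected; -EIO: at least one failed to connect. */
	return rc;
}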

include/spdk/nvme.h

@@ -2693,6 +2693,24 @@ int spdk_nvme_poll_group_destroy(struct spdk_nvme_poll_group *group);
int64_t spdk_nvme_poll_group_process_completions(struct spdk_nvme_poll_group *group,
		uint32_t completions_per_qpair, spdk_nvme_disconnected_qpair_cb disconnected_qpair_cb);
/**
 * Check if all qpairs in the poll group are connected.
 *
 * This function allows the caller to check if all qpairs in a poll group are
 * connected. This API is generally only suitable during application startup,
 * to check when a large number of async connections have completed.
 *
 * It is useful for applications such as benchmarking tools that create a
 * large number of qpairs but need to ensure they are all fully connected
 * before proceeding with I/O.
 *
 * \param group The group on which to poll connecting qpairs.
 *
 * \return 0 if all qpairs are in the CONNECTED state, -EIO if any qpairs
 * failed to connect, -EAGAIN if any qpairs are still trying to connect.
 */
int spdk_nvme_poll_group_all_connected(struct spdk_nvme_poll_group *group);
/**
* Retrieve the user context for this specific poll group.
*

lib/nvme/nvme_poll_group.c

@@ -140,6 +140,36 @@ spdk_nvme_poll_group_process_completions(struct spdk_nvme_poll_group *group,
	return error_reason ? error_reason : num_completions;
}

int
spdk_nvme_poll_group_all_connected(struct spdk_nvme_poll_group *group)
{
	struct spdk_nvme_transport_poll_group *tgroup;
	struct spdk_nvme_qpair *qpair;
	int rc = 0;

	STAILQ_FOREACH(tgroup, &group->tgroups, link) {
		if (!STAILQ_EMPTY(&tgroup->disconnected_qpairs)) {
			/* Treat disconnected qpairs as highest priority for notification.
			 * This means we can just return immediately here.
			 */
			return -EIO;
		}

		STAILQ_FOREACH(qpair, &tgroup->connected_qpairs, poll_group_stailq) {
			if (nvme_qpair_get_state(qpair) < NVME_QPAIR_CONNECTING) {
				return -EIO;
			} else if (nvme_qpair_get_state(qpair) == NVME_QPAIR_CONNECTING) {
				rc = -EAGAIN;
				/* Break so that we can check the remaining transport groups,
				 * in case any of them have a disconnected qpair.
				 */
				break;
			}
		}
	}

	return rc;
}
void *
spdk_nvme_poll_group_get_ctx(struct spdk_nvme_poll_group *group)
{

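The "< NVME_QPAIR_CONNECTING" comparison works because the qpair state
enum in lib/nvme/nvme_internal.h orders the disconnect states before the
connect states. Reproduced approximately for context (not part of this
diff):

enum nvme_qpair_state {
	NVME_QPAIR_DISCONNECTED,
	NVME_QPAIR_DISCONNECTING,
	NVME_QPAIR_CONNECTING,
	NVME_QPAIR_CONNECTED,
	NVME_QPAIR_ENABLING,
	NVME_QPAIR_ENABLED,
	NVME_QPAIR_DESTROYING,
};

So any state below NVME_QPAIR_CONNECTING means the qpair has disconnected
or is disconnecting, which the new API reports as -EIO.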
lib/nvme/spdk_nvme.map

@@ -124,6 +124,7 @@
	spdk_nvme_poll_group_remove;
	spdk_nvme_poll_group_destroy;
	spdk_nvme_poll_group_process_completions;
	spdk_nvme_poll_group_all_connected;
	spdk_nvme_poll_group_get_ctx;
	spdk_nvme_ns_get_data;