This significantly speeds up testing with high connection workloads (e.g. -P 64), especially with TCP. We already set async_mode=true unconditionally in the bdev/nvme module, so there's no reason we shouldn't do it in perf too.

After allocating all of the I/O qpairs, busy poll the poll group, using the new spdk_nvme_poll_group_all_connected() API, to ensure the qpairs are all connected before proceeding with I/O.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: If0c3c944cd5f3d87170a5bbf7d766ac1a4dcef7c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17578
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
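For illustration, here is a minimal sketch of the pattern the commit message describes: allocate the qpairs with async_mode (and create_only, so each qpair can join a poll group before connecting), then busy poll the group until spdk_nvme_poll_group_all_connected() stops returning -EAGAIN. This is not the patch's actual code: the function name, qpair count, and error handling are invented for the sketch, and the 0/-EAGAIN/-EIO return convention is taken from the API's header documentation.

```c
#include <errno.h>
#include <stdbool.h>

#include "spdk/nvme.h"

#define NUM_QPAIRS 64 /* mirrors a high connection count like perf -P 64 */

/* Invoked by spdk_nvme_poll_group_process_completions() if a qpair drops
 * while we poll; a real tool would record the failure here. */
static void
disconnect_cb(struct spdk_nvme_qpair *qpair, void *poll_group_ctx)
{
	(void)qpair;
	(void)poll_group_ctx;
}

/* Allocate NUM_QPAIRS I/O qpairs on `ctrlr` (assumed already attached),
 * start their connections asynchronously, and busy poll until every
 * connection resolves. Returns 0 on success, negative errno on failure. */
static int
connect_all_qpairs(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;
	struct spdk_nvme_poll_group *group;
	struct spdk_nvme_qpair *qpair;
	int i, rc = -EAGAIN;

	group = spdk_nvme_poll_group_create(NULL, NULL);
	if (group == NULL) {
		return -ENOMEM;
	}

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.create_only = true; /* allocate without connecting, so the qpair
				  * can be added to the poll group first */
	opts.async_mode = true;  /* connect_io_qpair() returns immediately;
				  * polling the group drives the connection */

	for (i = 0; i < NUM_QPAIRS; i++) {
		qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
		if (qpair == NULL ||
		    spdk_nvme_poll_group_add(group, qpair) != 0 ||
		    spdk_nvme_ctrlr_connect_io_qpair(ctrlr, qpair) != 0) {
			return -1; /* cleanup elided in this sketch */
		}
	}

	/* Busy poll: each pass advances the in-flight connections, then asks
	 * whether any qpair is still connecting (-EAGAIN), all are connected
	 * (0), or one has failed (-EIO). Only then should I/O start. */
	while (rc == -EAGAIN) {
		spdk_nvme_poll_group_process_completions(group, 0, disconnect_cb);
		rc = spdk_nvme_poll_group_all_connected(group);
	}

	return rc;
}
```

A production tool would likely bound this loop with a timeout rather than spinning indefinitely on a connection that never completes.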
Files changed:

- .gitignore
- Makefile
- perf.c
- README.md
Compiling perf on FreeBSD
To use the perf test on FreeBSD over NVMe-oF, explicitly link the HBA's userspace library. For example, on a setup with a Mellanox HBA, add the following line to the Makefile:
LIBS += -lmlx5