Currently we have stats per bdev I/O channel, but for NVMe bdev
multipath we don't have stats per I/O path. Especially for
active-active mode, we may want to observe each path's statistics.
This patch supports I/O stats for nvme_io_path, recording each
nvme_io_path's stats in a struct spdk_bdev_io_stat.
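A minimal sketch of the idea, assuming the stat is embedded in
struct nvme_io_path and bumped by a helper on I/O completion; the
member and helper names below are illustrative only, while
struct spdk_bdev_io_stat and its counters come from the public SPDK
headers:

    #include "spdk/bdev.h"

    struct nvme_io_path {
            /* ... existing members ... */
            struct spdk_bdev_io_stat stat; /* per-path counters (assumed member name) */
    };

    /* Hypothetical helper called from the bdev_nvme completion path. */
    static void
    nvme_io_path_update_stat(struct nvme_io_path *io_path,
                             enum spdk_bdev_io_type type,
                             uint64_t num_blocks, uint32_t blocklen,
                             uint64_t tsc_diff)
    {
            uint64_t bytes = num_blocks * blocklen;

            switch (type) {
            case SPDK_BDEV_IO_TYPE_READ:
                    io_path->stat.bytes_read += bytes;
                    io_path->stat.num_read_ops++;
                    io_path->stat.read_latency_ticks += tsc_diff;
                    break;
            case SPDK_BDEV_IO_TYPE_WRITE:
                    io_path->stat.bytes_written += bytes;
                    io_path->stat.num_write_ops++;
                    io_path->stat.write_latency_ticks += tsc_diff;
                    break;
            default:
                    break;
            }
    }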
The following is a bdevperf comparison, run on an Arm server with
this basic configuration:
1 Null bdev: block size 4K, num_blocks 16k
bdevperf run with io size=4k, qdepth=1/32/128, rw type=randwrite /
mixed with 70% read / randread
Each run lasts 30 seconds; each item is run 16 times and the average
is taken. The results are as follows.
qdepth  type         IOPS(default)  IOPS(this patch)   diff
1       randwrite      7795157.27      7859909.78      0.83%
1       mix(70% r)     7418607.08      7404026.54     -0.20%
1       randread       8053560.83      8046315.44     -0.09%
32      randwrite     15409191.3      15327642.11     -0.53%
32      mix(70% r)    13760145.97     13714666.28     -0.33%
32      randread      16136922.98     16038855.39     -0.61%
128     randwrite     14815647.56     14944902.74      0.87%
128     mix(70% r)    13414858.59     13412317.46     -0.02%
128     randread      15508642.43     15521752.41      0.08%
Change-Id: I4eb5673f49d65d3ff9b930361d2f31ab0ccfa021
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14743
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>