If we specify a preferred path manually for each NVMe bdev, we can realize simple static load balancing and make failover more controllable in multipath mode.

The idea is to move the I/O path to the specified NVMe-oF controller to the head of the list and then clear the cached I/O path of each NVMe bdev channel. We could set the I/O path in the cache directly, but that would have to be conditional and would make the code very complex. Hence, let find_io_path() do the selection.

However, an NVMe bdev channel may be acquired after the preferred path is set. To cover that case, sort the nvme_ns list of the NVMe bdev as well.

This feature supports only multipath mode. The NVMe bdev module also supports failover mode, but supporting the latter would require the new RPC to take a trid as a parameter, and both the code and its usage would become very complex. Add a note about this limitation.

Add unit tests to verify each case exactly.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ia51c74f530d6d7dc1f73d5b65f854967363e76b0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12262
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: <tanl12@chinatelecom.cn>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>