Commit Graph

8 Commits

Author SHA1 Message Date
Shuhei Matsumoto
8f9b977504 bdev/nvme: Add active/active policy for multipath mode
The NVMe bdev module initially supported only the active-passive policy
for multipath mode. This patch adds support for the active-active policy
as well. Following the Linux kernel's native NVMe multipath, the NVMe
bdev module uses a round-robin algorithm for the active-active policy.

The multipath policy, active-passive or active-active, is managed per
nvme_bdev. The multipath policy is copied to all corresponding
nvme_bdev_channels.

Unlike active-passive, active-active caches even non-optimized paths in
order to balance load across multiple paths.
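
As a rough, hypothetical sketch of the round-robin idea (the structure
and function names are illustrative, not the actual SPDK code): each
channel keeps its cached list of usable paths, including non-optimized
ones, and the next I/O continues from the path after the one used last.

    /* Illustrative sketch only; not the real nvme_bdev_channel layout. */
    #include <stddef.h>

    struct io_path {
        struct io_path *next;      /* singly linked list of cached paths */
        int             usable;    /* path is connected and not failed   */
    };

    struct bdev_channel {
        struct io_path *paths;     /* head of the cached path list       */
        struct io_path *last_used; /* path the previous I/O was sent to  */
    };

    /* Pick the next usable path after the last used one, wrapping around. */
    static struct io_path *
    round_robin_next(struct bdev_channel *ch)
    {
        struct io_path *start = ch->last_used ? ch->last_used->next : ch->paths;
        struct io_path *p = start;

        do {
            if (p == NULL) {
                p = ch->paths;          /* wrap to the head of the list */
            }
            if (p != NULL && p->usable) {
                ch->last_used = p;
                return p;
            }
            if (p != NULL) {
                p = p->next;
            }
        } while (p != start && ch->paths != NULL);

        return NULL;                    /* no usable path on this channel */
    }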

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ie18b24db60d3da1ce2f83725b6cd3079f628f95b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12001
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2022-05-05 07:11:24 +00:00
Shuhei Matsumoto
22b77a3c80 bdev/nvme: Set preferred I/O path in multipath mode
Specifying a preferred path manually for each NVMe bdev enables simple
static load balancing and makes failover more controllable in multipath
mode.

The idea is to move the I/O path to the specified NVMe-oF controller to
the head of the list and then clear the I/O path cache of each NVMe bdev
channel. We could set the I/O path in the cache directly, but that would
have to be conditional and would make the code very complex. Hence, let
find_io_path() do it.

However, an NVMe bdev channel may be acquired after the preferred path
is set. To cover that case, sort the nvme_ns list of the NVMe bdev as
well.
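
A hypothetical sketch of the approach (names are illustrative, not the
actual SPDK structures): the matching path is moved to the head of the
per-bdev list and every channel's cached path is dropped, so the next
lookup naturally re-selects the preferred path.

    /* Illustrative sketch only; names do not match the real code. */
    #include <stddef.h>
    #include <string.h>

    #define MAX_PATHS    8
    #define MAX_CHANNELS 4

    struct bdev {
        const char *path_names[MAX_PATHS];     /* ordered; head is preferred */
        size_t      num_paths;
        const char *cached_path[MAX_CHANNELS]; /* per-channel I/O path cache */
        size_t      num_channels;
    };

    /* Move the path matching 'preferred' to the head of the list and
     * clear every channel's cache so the lookup routine re-evaluates it. */
    static void
    set_preferred_path(struct bdev *b, const char *preferred)
    {
        for (size_t i = 0; i < b->num_paths; i++) {
            if (strcmp(b->path_names[i], preferred) == 0) {
                const char *p = b->path_names[i];

                /* shift earlier entries down and put the match first */
                memmove(&b->path_names[1], &b->path_names[0], i * sizeof(p));
                b->path_names[0] = p;
                break;
            }
        }

        for (size_t i = 0; i < b->num_channels; i++) {
            b->cached_path[i] = NULL;  /* next I/O falls back to the lookup */
        }
    }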

This feature supports only multipath mode. The NVMe bdev module also
supports failover mode, but supporting it here would require the new RPC
to take a trid as a parameter, and both the code and its usage would
become very complex. Add a note about this limitation.

Add unit tests to verify each case exactly.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ia51c74f530d6d7dc1f73d5b65f854967363e76b0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12262
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: <tanl12@chinatelecom.cn>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2022-05-05 07:11:24 +00:00
Boris Glimcher
88653b4fe0 sock: fix typo
`enable_dynamic_zerocopy` is not even defined anywhere

Signed-off-by: Boris Glimcher <Boris.Glimcher@emc.com>
Change-Id: I5c7a97bf661db233a5bf62f8eb00170685790cee
Signed-off-by: Boris Glimcher <Boris.Glimcher@emc.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12436
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2022-05-04 08:03:48 +00:00
Richael Zhuang
9bff828f99 sock: introduce dynamic zerocopy according to data size
MSG_ZEROCOPY is not always effective as mentioned in
https://www.kernel.org/doc/html/v4.15/networking/msg_zerocopy.html.

Currently in SPDK, once sendmsg zerocopy is enabled, all data
transferred through _sock_flush is sent with zerocopy, and vice versa.
This patch introduces dynamic zerocopy, which decides per send whether
to use MSG_ZEROCOPY based on the data size; it is enabled by setting
"enable_dynamic_zerocopy" to true.

Tested with 16 P4610 NVMe SSDs and 2 initiators; the target and
initiator configurations are the same as in the SPDK performance report:
https://ci.spdk.io/download/performance-reports/SPDK_tcp_perf_report_2104.pdf

For posix sockets with rw_percent=0 (randwrite), the patch gives a
1.9%~8.3% performance boost, tested with 1~40 target CPU cores and
qdepth=128, 256 and 512, and has no obvious influence when the read
percentage is greater than 50%.

For uring sockets with rw_percent=0 (randwrite), the patch gives a
1.8%~7.9% performance boost, tested with 1~40 target CPU cores and
qdepth=128, 256 and 512, and still gives a 1%~7% improvement when the
read percentage is greater than 50%.

The following is part of the detailed data.

posix:
qdepth=128
             rw_percent=0           |          rw_percent=30
cpu    origin  thisPatch      opt   |     origin  thisPatch      opt
  1     286.5      298.5    4.19%   |        307     304.15   -0.93%
  4    1042.5       1107    6.19%   |     1135.5       1136    0.04%
  8    1952.5       2058    5.40%   |     2170.5     2170.5    0.00%
 12    2658.5       2879    8.29%   |       3042       3046    0.13%
 16    3247.5     3460.5    6.56%   |     3793.5       3775   -0.49%
 24    4232.5     4459.5    5.36%   |     4614.5     4756.5    3.08%
 32      4810       5095    5.93%   |       4488       4845    7.95%
 40    5306.5       5435    2.42%   |     4427.5       4902   10.72%

qdepth=512
             rw_percent=0           |          rw_percent=30
cpu    origin  thisPatch      opt   |     origin  thisPatch      opt
  1       275        287    4.36%   |      294.4     295.45    0.36%
  4       979       1041    6.33%   |       1073     1083.5    0.98%
  8    1822.5     1914.5    5.05%   |     2030.5     2018.5   -0.59%
 12      2441     2598.5    6.45%   |     2808.5     2779.5   -1.03%
 16    2920.5     3109.5    6.47%   |       3455     3411.5   -1.26%
 24      3709     3972.5    7.10%   |     4483.5     4502.5    0.42%
 32    4225.5     4532.5    7.27%   |     4463.5       4733    6.04%
 40    4790.5     4884.5    1.96%   |       4427     4904.5   10.79%

uring:
qdepth=128
             rw_percent=0           |          rw_percent=30
cpu    origin  thisPatch      opt   |     origin  thisPatch      opt
  1     270.5      287.5    6.28%   |     295.75     304.75    3.04%
  4    1018.5     1089.5    6.97%   |     1119.5     1156.5    3.31%
  8      1907       2055    7.76%   |       2127     2211.5    3.97%
 12      2614       2801    7.15%   |     2982.5     3061.5    2.65%
 16    3169.5       3420    7.90%   |     3654.5     3781.5    3.48%
 24    4109.5       4414    7.41%   |     4691.5     4750.5    1.26%
 32    4752.5       4908    3.27%   |       4494     4825.5    7.38%
 40    5233.5       5327    1.79%   |     4374.5       4891   11.81%

qdepth=512
             rw_percent=0           |          rw_percent=30
cpu    origin  thisPatch      opt   |     origin  thisPatch      opt
  1    259.95        276    6.17%   |     286.65      294.8    2.84%
  4       955       1021    6.91%   |     1070.5       1100    2.76%
  8      1772     1903.5    7.42%   |     1992.5     2077.5    4.27%
 12    2380.5     2543.5    6.85%   |     2752.5       2860    3.91%
 16    2920.5       3099    6.11%   |     3391.5       3540    4.38%
 24      3697       3912    5.82%   |       4401       4637    5.36%
 32    4256.5     4454.5    4.65%   |       4516       4777    5.78%
 40      4707     4968.5    5.56%   |     4400.5       4933   12.10%

Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Change-Id: I730dcf89ed2bf3efe91586421a89045fc11c81f0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12210
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2022-04-28 07:29:28 +00:00
Shuhei Matsumoto
2a6a64485c bdev/nvme: Add bdev_nvme_get_io_paths RPC to monitor I/O path states
Add a new RPC, bdev_nvme_get_io_paths, to query all active I/O paths.

One io_path belongs to one nvme_bdev_channel, and each
nvme_bdev_channel is associated with one nvme_bdev.

If the RPC bdev_nvme_get_io_paths took a bdev name as a parameter, it
could simply use spdk_for_each_channel() on the corresponding nvme_bdev.

However, users will want to know the I/O paths of all nvme_bdevs, just
as the RPC bdev_get_bdevs reports all bdevs.

One io_path has one nvme_qpair, and one nvme_qpair belongs to one
nvme_poll_group. Relying on these relationships, the RPC
bdev_nvme_get_io_paths traverses all nvme_poll_groups by calling
spdk_for_each_channel() on g_bdev_nvme_ctrlrs.

The RPC bdev_nvme_get_io_paths has two modes: display all active I/O
paths, or only those of the specified NVMe bdev.

The specified bdev name is used only for comparison, and an empty array
is returned if no matching io_path is found.
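
A simplified, hypothetical sketch of the traversal described above (the
data structures are illustrative, not SPDK's): walk every poll group,
then every qpair in it, then every io_path on the qpair, and report only
the paths whose bdev name matches the optional filter.

    /* Illustrative sketch of the poll group -> qpair -> io_path walk. */
    #include <stdio.h>
    #include <string.h>

    struct io_path    { const char *bdev_name; const char *trid; struct io_path *next; };
    struct qpair      { struct io_path *paths; struct qpair *next; };
    struct poll_group { struct qpair *qpairs; struct poll_group *next; };

    /* Print every active I/O path, or only those of 'bdev_name' if given.
     * An unknown name matches nothing and yields an empty result. */
    static void
    dump_io_paths(struct poll_group *groups, const char *bdev_name)
    {
        for (struct poll_group *pg = groups; pg != NULL; pg = pg->next) {
            for (struct qpair *qp = pg->qpairs; qp != NULL; qp = qp->next) {
                for (struct io_path *ip = qp->paths; ip != NULL; ip = ip->next) {
                    if (bdev_name == NULL || strcmp(ip->bdev_name, bdev_name) == 0) {
                        printf("%s: %s\n", ip->bdev_name, ip->trid);
                    }
                }
            }
        }
    }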

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I4a0dbf3ef7aaa9a7b7345fc03dc493cc6d37bc99
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12146
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2022-04-22 09:44:57 +00:00
Andreas Economides
3b047a6162 nvmf/vfio-user: support shadow doorbells
As per the NVMe specification, a host can identify two areas of guest
memory: one used for the host-written doorbells, and one containing
event indexes. The host writes to the shadow doorbell area, but also
writes to the controller's BAR0 doorbell area if the corresponding event
index is crossed by the update. This avoids many MMIO exits in interrupt
mode, where BAR0 doorbells are not directly mapped into the guest VM,
and greatly improves performance.
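
As a rough sketch of the host-side rule (this mirrors the familiar
event-index test also used by the Linux NVMe driver and virtio; the
function names are illustrative, not the vfio-user code): after updating
the shadow doorbell, the host rings the BAR0 doorbell only if the update
stepped over the event index published by the controller.

    /* Hypothetical illustration of the event-index check. */
    #include <stdbool.h>
    #include <stdint.h>

    /* True when event_idx lies in (old_val, new_val], using unsigned
     * wrap-around arithmetic so queue wrap is handled. */
    static bool
    crossed_event_index(uint32_t new_val, uint32_t old_val, uint32_t event_idx)
    {
        return (uint32_t)(new_val - event_idx - 1) < (uint32_t)(new_val - old_val);
    }

    /* Host-side update: always write the shadow doorbell, but only do the
     * expensive BAR0 MMIO write when the event index was crossed. */
    static void
    host_ring_doorbell(volatile uint32_t *shadow_db, volatile uint32_t *bar0_db,
                       uint32_t event_idx, uint32_t new_val)
    {
        uint32_t old_val = *shadow_db;

        *shadow_db = new_val;
        if (crossed_event_index(new_val, old_val, event_idx)) {
            *bar0_db = new_val;   /* triggers the MMIO exit */
        }
    }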

This isn't a useful feature when BAR0 doorbells are mapped into the VM,
so we explicitly disable support in that case.

NB: the Windows NVMe driver doesn't yet support this feature.

Although the specification says that the admin queues should also
engage in this behaviour, in practice no VM does, so we have to include
some hacks to account for this.

Co-authored-by: John Levon <john.levon@nutanix.com>
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I0646b234d31fbbf9a6b85572042c6cdaf8366659
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11492
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
2022-04-21 08:12:29 +00:00
zhangduan
31db7b139b nvme_tcp: set transport_ack_timeout to ack_timeout
The value of ack_timeout is calculated according to
the formula 2^(transport_ack_timeout) msec.
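
For illustration only (a minimal sketch of the stated formula, not the
patch itself), a transport_ack_timeout of 5 corresponds to an
ack_timeout of 2^5 = 32 msec:

    /* Illustrative only: ack_timeout derived from transport_ack_timeout. */
    #include <stdint.h>

    static uint32_t
    ack_timeout_ms(uint8_t transport_ack_timeout)
    {
        return 1U << transport_ack_timeout;   /* e.g. 5 -> 32 msec */
    }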

Signed-off-by: zhangduan <zhangd28@chinatelecom.cn>
Change-Id: I5a938635d70693ddd405fa5907555bb745b4df0f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12215
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2022-04-20 08:21:42 +00:00
Konrad Sztyber
7610bc38dc scripts: move python modules to python directory
Up until now, importing an SPDK RPC python module was just a matter of
`import rpc`.  That is fine until another module called `rpc` is
installed on the system, in which case it's impossible to import both of
them.  Therefore, to avoid this problem, all of the modules were moved
to a separate directory under the "spdk" namespace.

The decision to move to a location under a separate directory was
motivated by the fact that a directory called scripts/spdk would look
pretty confusing.  Moreover, it should also make it easier to package
these scripts as a python package.

Other than moving the packages, all of the imports were updated to
reflect these changes.  Files under python/ now use relative imports,
while those under scripts/ use the "spdk" namespace and have their
PYTHONPATH extended with the python directory.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ib43dee73921d590a551dd83885e22870e72451cf
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9692
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2022-04-05 14:40:47 +00:00