Commit Graph

6798 Commits

Parameswaran Krishnamurthy
6d4d5c62e7 test/nvmf: Add unittest script for Nvme mDNS discovery service
Adding script mdns_discovery.sh to test the mDNS discovery service at the host.
The script tests the RPC commands bdev_nvme_start_mdns_discovery, bdev_nvme_stop_mdns_discovery
and bdev_nvme_get_mdns_discovery_info.

The avahi-publish tool is used to simulate the advertisement of the discovery service of the target subsystem.

Signed-off-by: Parameswaran Krishnamurthy <parameswaran.krishna@dell.com>
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ieace458f87b5d9bd51aeb651d9419d07b6fee6d0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16140
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Boris Glimcher <Boris.Glimcher@emc.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
2023-01-19 10:51:09 +00:00
Krzysztof Karas
e64728f092 bdev/crypto: make sure that vbdev_crypto_destruct() returns 1
Make vbdev_crypto_destruct() return 1 to signal that program
execution should wait for spdk_bdev_destruct_done() function,
which is added inside _device_unregister_cb().

This change is related to _vdev_dev_get() not being able
to find the devices when called from _cryptodev_sym_session_free(),
as it uses the device driver name, which might already be freed.
This occurs only during bdev module finish, when crypto bdevs
are being unregistered and vbdev_crypto_finish() proceeds to
call bdev name deletion without waiting for the unregister
callbacks to complete, which ultimately results in reading
freed pointers.

This only happens when code execution takes the DPDK 22.11+ path.
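
As a rough illustration of the asynchronous destruct contract involved here, a minimal sketch (the context struct is reduced to the one field used; the teardown details are elided):

#include "spdk/bdev_module.h"

struct vbdev_crypto {
	struct spdk_bdev crypto_bdev; /* illustrative subset of the real struct */
};

/* Invoked once the device resources have been fully released. */
static void
_device_unregister_cb(void *ctx)
{
	struct vbdev_crypto *crypto_bdev = ctx;

	/* Signals the bdev layer that the asynchronous destruct finished. */
	spdk_bdev_destruct_done(&crypto_bdev->crypto_bdev, 0);
}

static int
vbdev_crypto_destruct(void *ctx)
{
	/* ... start asynchronous teardown ending in _device_unregister_cb() ... */

	/* Returning 1 tells the bdev layer to wait for spdk_bdev_destruct_done(). */
	return 1;
}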

Change-Id: Id9a43d07c90aef7a82867383fd77354ac521a3e7
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16290
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-01-19 10:28:52 +00:00
Alexey Marchuk
10dcf2dbd2 accel/dpdk_cryptodev: Update rte_cryptodev usage for DPDK 22.11
This patch is a combination of commits which update vbdev_crypto:

110d8411e bdev/crypto: do not create mempool for session private data
495055b05 bdev/crypto: update rte_cryptodev usage for DPDK 22.11
02caed6b5 bdev/crypto: remove mempool usage matching < DPDK 19.02
5887eb321 bdev/crypto: do not track type of crypto session

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I30c4f76e4e7b4865a7daa638d357888bb5e02071
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16039
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
2023-01-19 08:25:48 +00:00
Richael Zhuang
4d7b2b36aa bdev_nvme: record io paths' stat before being destroyed
The I/O paths' stat gets lost when the paths are destroyed. Record
the stat in the nvme_ns structure.

Change-Id: I12fc0b04fac0d59e7465fe543ee733f2822a9cdb
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14744
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-01-19 01:57:11 +00:00
Richael Zhuang
f61b004197 bdev_nvme: update nvme_io_path stat when IO completes
Currently we have stat per bdev I/O channel, but for NVMe bdev
multipath, we don't have stat per I/O path. Especially for
active-active mode, we may want to observe each path's statistics.

This patch supports I/O stat for nvme_io_path. Record each nvme_io_path's
stat using the structure spdk_bdev_io_stat.
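
A minimal sketch of the per-path accumulation this enables (the helper and its parameters are illustrative; the spdk_bdev_io_stat fields come from include/spdk/bdev.h):

#include "spdk/bdev.h"

static void
io_path_stat_update(struct spdk_bdev_io_stat *stat, bool read,
		    uint64_t num_bytes, uint64_t tsc_diff)
{
	if (read) {
		stat->num_read_ops++;
		stat->bytes_read += num_bytes;
		stat->read_latency_ticks += tsc_diff;
	} else {
		stat->num_write_ops++;
		stat->bytes_written += num_bytes;
		stat->write_latency_ticks += tsc_diff;
	}
}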

The following is a comparison from a bdevperf test.

Tested on an Arm server with the following basic configuration:
1 null bdev: block size 4K, num_blocks 16K.
bdevperf was run with io size=4k, qdepth=1/32/128, and rw type=randwrite / mixed with 70% read / randread.

Each run lasted 30 seconds; each item was run 16 times and the results averaged.

The results are as follows.

qdepth  type        IOPS(default)  IOPS(this patch)  diff
1       randwrite    7795157.27     7859909.78        0.83%
1       mix(70% r)   7418607.08     7404026.54       -0.20%
1       randread     8053560.83     8046315.44       -0.09%

32      randwrite   15409191.3     15327642.11       -0.53%
32      mix(70% r)  13760145.97    13714666.28       -0.33%
32      randread    16136922.98    16038855.39       -0.61%

128     randwrite   14815647.56    14944902.74        0.87%
128     mix(70% r)  13414858.59    13412317.46       -0.02%
128     randread    15508642.43    15521752.41        0.08%

Change-Id: I4eb5673f49d65d3ff9b930361d2f31ab0ccfa021
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14743
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-01-19 01:57:11 +00:00
Richael Zhuang
2f500a23fb bdev/nvme: support switch to another io path after a number of IOs
Support specifying rr_min_io for the multipath round-robin policy,
which makes I/O switch to another I/O path after rr_min_io I/Os have
been routed to the current I/O path.
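
A self-contained sketch of the rr_min_io rule (all type and field names are illustrative, not the bdev_nvme internals):

#include <stdint.h>

struct io_path {
	struct io_path *next; /* circular list of paths in the channel */
};

struct rr_state {
	struct io_path *current;
	uint32_t rr_min_io;  /* switch threshold given via the RPC */
	uint32_t rr_counter; /* I/Os already routed to the current path */
};

static struct io_path *
rr_select_path(struct rr_state *st)
{
	struct io_path *path = st->current;

	/* Move on to the next path once rr_min_io I/Os have used this one. */
	if (++st->rr_counter >= st->rr_min_io) {
		st->rr_counter = 0;
		st->current = path->next;
	}
	return path;
}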

Change-Id: I09f0d8d24271c0178ff816fa63ce8576b6e8ae47
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15445
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-01-19 01:57:11 +00:00
Richael Zhuang
6aa4edc27d bdev/nvme: select io path according to outstanding io number
Support selecting an I/O path according to the number of outstanding
I/Os on each path in a channel. It's optional and can be set by calling
the RPC "bdev_nvme_set_multipath_policy -s queue_depth".

Change-Id: I82cdfbd69b3e105c973844c4f34dc98f0dca2faf
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14734
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-01-19 01:57:11 +00:00
Alexey Marchuk
8f36853a84 dpdk_cryptodev: Check queue capacity before submitting a task
When we submit more tasks than the qp supports,
the extra tasks are queued on the io_channel. Later, the completion
poller tries to resubmit these tasks one by one. That
is not efficient, since every enqueue_burst may cause
doorbell updates in HW.

Instead, add a check for qpair capacity and submit an
appropriate number of requests. If the qpair is full,
tasks are queued in a dedicated list. This approach
should remove or minimize the need to resubmit
individual crypto operations.

This also handles the case where there are no entries
in the global pools (crypto_ops or rte_mbuf).
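
The capacity check itself reduces to clamping a burst to the qpair's free slots; a hedged sketch (names are illustrative):

#include <stdint.h>

/* Submit at most the qpair's free slots in one burst; the caller queues
 * whatever does not fit on a dedicated list instead of resubmitting
 * operations one by one from the completion poller. */
static uint32_t
crypto_ops_to_submit(uint32_t num_ready, uint32_t qp_capacity, uint32_t qp_in_flight)
{
	uint32_t free_slots = qp_capacity - qp_in_flight;

	return num_ready < free_slots ? num_ready : free_slots;
}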

Fixes issue #2756

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: Iab50e623e7a82a4f5bef7a1e4434e593240ab633
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15769
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
2023-01-18 18:19:50 +00:00
Alexey Marchuk
bf8e0656e8 dpdk_cryptodev: Remove limit on max IO size
Previously vbdev_crypto used DPDK directly, and
the restriction on max IO size was propagated to the
generic bdev layer, which split big IO requests.

Now that the DPDK code is a standalone accel module,
this restriction on max IO size is not visible to
the user and we should get rid of it.

To remove this limitation, allow submitting crypto
operations for part of the logical blocks in a big IO;
the remaining blocks are processed when all submitted
crypto ops have completed.

To verify this patch, add a functional test which
submits big IOs in verify mode.

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I0ee89e98195a5c744f3fb2bfc752b578965c3bc5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15768
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-01-18 18:19:50 +00:00
GangCao
687d5a8766 lib/part: check the return of spdk_bdev_register
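The fix amounts to a standard error-code check; a minimal illustrative sketch (the wrapper is hypothetical, spdk_bdev_register() is the real API):

#include "spdk/bdev.h"
#include "spdk/bdev_module.h"
#include "spdk/log.h"

static int
part_register_bdev(struct spdk_bdev *bdev)
{
	int rc;

	rc = spdk_bdev_register(bdev);
	if (rc != 0) {
		SPDK_ERRLOG("Failed to register bdev %s: %d\n",
			    spdk_bdev_get_name(bdev), rc);
	}
	return rc;
}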
Change-Id: I855a68dfcf6da565a97e33e4389eee5ed6141f74
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16079
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-01-18 15:15:02 +00:00
Sebastian Brzezinka
c5d0fac1b9 test/fuzz: enable lcov for llvm-fuzzing
Lcov is disabled for clang due to being time-consuming. This patch
enables it for the fuzzer only. `llvm-gcov.sh` is a wrapper for llvm-cov.

Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: I96ef6ad4fc4ecb92b063070fd2410ca88209f5b7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15356
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-01-18 08:35:25 +00:00
Shuhei Matsumoto
a3ae6eaa75 bdev/nvme: Add an option for the RDMA SRQ size
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I8e678b5681c8039ccd359de8a797ede4eaddf8b5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14914
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-01-17 23:53:01 +00:00
Shuhei Matsumoto
bcd987ea2d nvme_rdma: Support SRQ for I/O qpairs
Support SRQ in RDMA transport of NVMe-oF initiator.

Add a new spdk_nvme_transport_opts structure and add rdma_srq_size
to the spdk_nvme_transport_opts structure.

For the user of the NVMe driver, provide two public APIs,
spdk_nvme_transport_get_opts() and spdk_nvme_transport_set_opts().

In the NVMe driver, the instance of spdk_nvme_transport_opts,
g_spdk_nvme_transport_opts, is accessible throughout.
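
A usage sketch of the two new APIs (the size shown is an arbitrary example value):

#include "spdk/nvme.h"

static int
configure_rdma_srq(void)
{
	struct spdk_nvme_transport_opts opts;

	spdk_nvme_transport_get_opts(&opts, sizeof(opts));
	opts.rdma_srq_size = 1024; /* example value; 0 leaves SRQ disabled */
	return spdk_nvme_transport_set_opts(&opts, sizeof(opts));
}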

Due to an issue where async event handling caused conflicts between
the initiator and target, the NVMe-oF RDMA initiator does not handle
the LAST_WQE_REACHED event. Hence, it may get a WC for an already
destroyed QP. To clarify this, add a comment in the source code.

The following is a result of a small performance evaluation using
SPDK NVMe perf tool. Even for queue_depth=1, overhead was less than 1%.
Eventually, we may be able to enable SRQ by default for NVMe-oF
initiator.

1.1 randwrite, qd=1, srq=enabled
./build/examples/perf -q 1 -s 1024 -w randwrite -t 30 -c 0XF -o 4096 -r
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  162411.97     634.42       6.14       5.42     284.07
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  163095.87     637.09       6.12       5.41     423.95
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  164725.30     643.46       6.06       5.32     165.60
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  162548.57     634.96       6.14       5.39     227.24
========================================================
Total                                                                     :  652781.70    2549.93       6.12

1.2 randwrite, qd=1, srq=disabled
./build/examples/perf -q 1 -s 1024 -w randwrite -t 30 -c 0XF -o 4096 -r
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  163398.03     638.27       6.11       5.33     240.76
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  164632.47     643.10       6.06       5.29     125.22
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  164694.40     643.34       6.06       5.31     408.43
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  164007.13     640.65       6.08       5.33     170.10
========================================================
Total                                                                     :  656732.03    2565.36       6.08       5.29     408.43

2.1 randread, qd=1, srq=enabled
./build/examples/perf -q 1 -s 1024 -w randread -t 30 -c 0xF -o 4096 -r '
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  153514.40     599.67       6.50       5.97     277.22
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  153567.57     599.87       6.50       5.95     408.06
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  153590.33     599.96       6.50       5.88     134.74
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  153357.40     599.05       6.51       5.97     229.03
========================================================
Total                                                                     :  614029.70    2398.55       6.50       5.88     408.06

2.2 randread, qd=1, srq=disabled
./build/examples/perf -q 1 -s 1024 -w randread -t 30 -c 0XF -o 4096 -r '
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  154452.40     603.33       6.46       5.94     233.15
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  154711.67     604.34       6.45       5.91      25.55
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  154717.70     604.37       6.45       5.88     130.92
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  154713.77     604.35       6.45       5.91     128.19
========================================================
Total                                                                     :  618595.53    2416.39       6.45       5.88     233.15

3.1 randwrite, qd=32, srq=enabled
./build/examples/perf -q 32 -s 1024 -w randwrite -t 30 -c 0XF -o 4096 -r 'trtype:RDMA adrfam:IPv4 traddr:1.1.18.1 trsvcid:4420'
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  672608.17    2627.38      47.56      11.33     326.96
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  672386.20    2626.51      47.58      11.03     221.88
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  673343.70    2630.25      47.51       9.11     387.54
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  672799.10    2628.12      47.55      10.48     552.80
========================================================
Total                                                                     : 2691137.17   10512.25      47.55       9.11     552.80

3.2 randwrite, qd=32, srq=disabled
./build/examples/perf -q 32 -s 1024 -w randwrite -t 30 -c 0XF -o 4096 -r 'trtype:RDMA adrfam:IPv4 traddr:1.1.18.1 trsvcid:4420'
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  672647.53    2627.53      47.56      11.13     389.95
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  672756.50    2627.96      47.55       9.53     394.83
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  672464.63    2626.81      47.57       9.48     528.07
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  673250.73    2629.89      47.52       9.43     389.83
========================================================
Total                                                                     : 2691119.40   10512.19      47.55       9.43     528.07

4.1 randread, qd=32, srq=enabled
./build/examples/perf -q 32 -s 1024 -w randread -t 30 -c 0xF -o 4096 -r
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  677286.30    2645.65      47.23      12.29     335.90
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  677554.97    2646.70      47.22      20.39     196.21
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  677086.07    2644.87      47.25      19.17     386.26
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  677654.93    2647.09      47.21      18.92     181.05
========================================================
Total                                                                     : 2709582.27   10584.31      47.23      12.29     386.26

4.2 randread, qd=32, srq=disabled
./build/examples/perf -q 32 -s 1024 -w randread -t 30 -c 0XF -o 4096 -r
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  677432.60    2646.22      47.22      13.05     435.91
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  677450.43    2646.29      47.22      16.26     178.60
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  677647.10    2647.06      47.21      17.82     177.83
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  677047.33    2644.72      47.25      15.62     308.21
========================================================
Total                                                                     : 2709577.47   10584.29      47.23      13.05     435.91

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I843a5eda14e872bf6e2010e9f63b8e46d5bba691
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14174
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-01-17 23:53:01 +00:00
Shuhei Matsumoto
4999a9850c nvme_rdma: Move responses from rdma_qpair into a separate object
Move parallel arrays of response buffers and response SGLs from
qpair to a new responses object.

Use options to create the responses object.

Use spdk_zmalloc() to allocate the responses object because qpair
is also allocated by spdk_zmalloc().

The purpose is to share the code and the data structure between the
cases where SRQ is enabled and disabled.
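
Illustratively (the struct layout here is a placeholder; spdk_zmalloc() is the real allocator used):

#include "spdk/env.h"

struct nvme_rdma_rsps {
	void *rsp_bufs; /* placeholder for the parallel response buffers/SGLs */
};

static struct nvme_rdma_rsps *
nvme_rdma_rsps_alloc(void)
{
	/* Same allocator as the qpair itself. */
	return spdk_zmalloc(sizeof(struct nvme_rdma_rsps), 0, NULL,
			    SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_SHARED);
}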

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Ia23fe7328ae1f2f551fed5863fd1414f8567d602
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14172
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-01-17 23:53:01 +00:00
Konrad Sztyber
9cdbd9e4f3 accel: support appending encrypt/decrypt operations
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I7bbe90936ff11b50a7cca7b15eade2025daac83b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16292
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-01-17 23:34:43 +00:00
Konrad Sztyber
1f3c37468c ut/accel: don't stub isa-l crypto functions
Unit tests are already linked with isa-l-crypto if CONFIG_ISAL_CRYPTO is
set, so there's no need to stub them.  And by not stubbing them, we can
do tests involving actual encryption/decryption.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I162a2cd26112cc5adb8eeed7336f4280aa4bdb6b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16291
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-01-17 23:34:43 +00:00
Konrad Sztyber
3de19b0b55 accel: allow modules to report memory domain support
Accel modules can now implement the get_memory_domains() callback to
indicate the types of memory domains they support.  If unimplemented, a
module is assumed not to support memory domains and accel will take care
of pulling/pushing data to local buffers prior to passing a task to be
executed by a module.

For now, similarly to the bdev layer, we only check if a module supports
memory domains, but we don't verify the types of the domains.  That
could be easily added in the future, if necessary.
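
A sketch of what a module's callback might look like (the signature mirrors the bdev-layer convention and is an assumption here, as is the module-global domain):

#include "spdk/dma.h"

static struct spdk_memory_domain *g_my_domain; /* hypothetical module domain */

static int
my_module_get_memory_domains(struct spdk_memory_domain **domains, int array_size)
{
	if (domains != NULL && array_size > 0) {
		domains[0] = g_my_domain;
	}
	/* Return the number of memory domains the module supports. */
	return 1;
}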

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia513f4f31124672b705b6dd33a2624f0ae94d3ce
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16027
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-01-17 23:34:43 +00:00
Konrad Sztyber
a6fef9b194 accel: store in-use modules in an extra structure
It allows accel to store private data per each opcode/module without
having to change externally visible structures or allocate anything when
a module is registered. Since a single module can service multiple
opcodes at the same time, some of these values might be duplicated.
However, there are only a handful of opcodes, so it shouldn't be a
problem.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I609a6ccc2d241cb9b8273cc2c6d1933d2bc25e0e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16026
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-01-17 23:34:43 +00:00
Konrad Sztyber
81fe7ef0af accel: push data if dstbuf is in remote memory domain
If the destination buffer is in remote memory domain, we'll now push the
temporary bounce buffer to that buffer after a task is executed.

This means that users can now build and execute sequence of operations
using buffers described by memory domains.  For now, it's assumed that
none of the accel modules support memory domains, so the code in the
generic accel layer will always allocate temporary bounce buffers and
pull/push the data before handing a task to a module.
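
The push itself can go through the generic memory-domain helper; a hedged sketch (the wrapper and callback wiring are simplified assumptions):

#include "spdk/dma.h"

static void
push_done(void *ctx, int rc)
{
	/* ... complete the accel task recorded in ctx ... */
}

static int
push_bounce_to_dst(struct spdk_memory_domain *dst_domain, void *dst_domain_ctx,
		   struct iovec *dst_iov, uint32_t dst_iovcnt,
		   struct iovec *bounce_iov, uint32_t bounce_iovcnt, void *task)
{
	return spdk_memory_domain_push_data(dst_domain, dst_domain_ctx,
					    dst_iov, dst_iovcnt,
					    bounce_iov, bounce_iovcnt,
					    push_done, task);
}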

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia6edf266fe174eee4d28df0ca570c4d825436e60
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15948
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-01-17 23:34:43 +00:00
Konrad Sztyber
316f9ea3f5 accel: pull data if srcbuf is in remote memory domain
If the source buffer is from a remote memory domain, we will now pull it
to the temporary bounce buffer before a task is executed.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I476684a4359410c69dd69a2b425b9e61d4c55a7e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15947
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-01-17 23:34:43 +00:00
Konrad Sztyber
957076108f accel: remove nbytes from spdk_accel_task
All operations are using iovecs to describe their buffers and only
encrypt/decrypt additionally used nbytes to store the total size of a
src buffer.  We don't really need this value in the generic accel code,
so we can let modules calculate it, if necessary.  That way, we won't
waste cycles calculating it if a module doesn't use it and it makes the
code a bit easier, as we won't have to deal with the fact that nbytes is
only valid for certain operations.
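
For modules that still need the total, the calculation is a trivial walk over the iovecs; a minimal sketch (the helper name is illustrative):

#include <stdint.h>
#include <sys/uio.h>

static uint64_t
accel_task_nbytes(const struct iovec *iovs, uint32_t iovcnt)
{
	uint64_t nbytes = 0;
	uint32_t i;

	for (i = 0; i < iovcnt; i++) {
		nbytes += iovs[i].iov_len;
	}
	return nbytes;
}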

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I29252be34a9af9fd40f4c7fec9d0a0c1139c562d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16306
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-01-17 23:34:43 +00:00
Konrad Sztyber
1866faffe2 accel: use iovecs for compress operations
Also, since this was the last operation using dst and nbytes, these
fields were removed from spdk_accel_task.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I0d6b090e101c016d1bdcbe7a3bee7d6f691f1c9e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15943
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-01-17 23:34:43 +00:00
Konrad Sztyber
a374f8ba19 accel: use iovecs for copy+crc32c operations
Also, since this was the last operation using src, remove this field
from spdk_accel_task.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I55fd98697ef4f92a13dd0563b4adf9ccb0af171b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15942
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-01-17 23:34:43 +00:00
wanghailiangx
1b566ac7d9 test/ftl: add cases to cover RPC bdev_ftl_unload
Change-Id: I34074846d812d9bdf47af25f1275978c3508084b
Signed-off-by: Hailiang Wang <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15696
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-01-17 20:32:58 +00:00
Shuhei Matsumoto
c22b052b60 bdev/raid0: Support resize when increasing the size of base bdevs
Implement the resize function for RAID0. raid0_resize() calculates the
new raid_bdev's block count, and if it differs from the old block
count, calls spdk_bdev_notify_blockcnt_change() with the new block count.

A raid0 bdev always opens all base bdevs. Hence, if the size of the base
bdevs is reduced, resize currently fails. This limitation will be removed
later.

Add a simple functional test for this feature. The test creates a raid0
bdev with two null bdevs, resizes one null bdev, checks that the raid0
bdev is not resized, resizes the other null bdev, and checks that the
raid0 bdev is resized.

test/iscsi_tgt/resize/resize.sh was used as a reference to write the test.
Since jq is preferable to grep&sed, also replace grep&sed with jq in
test/iscsi_tgt/resize/resize.sh in this patch.
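
A sketch of the calculation (field and parameter names are assumptions; spdk_bdev_notify_blockcnt_change() is the real notification API):

#include "spdk/bdev_module.h"

static void
raid0_resize_sketch(struct spdk_bdev *raid_bdev, uint64_t min_base_blockcnt,
		    uint8_t num_base_bdevs, uint32_t strip_size)
{
	/* Usable blocks per base bdev, rounded down to a whole strip. */
	uint64_t per_base = min_base_blockcnt - (min_base_blockcnt % strip_size);
	uint64_t new_blockcnt = per_base * num_base_bdevs;

	if (new_blockcnt != raid_bdev->blockcnt &&
	    spdk_bdev_notify_blockcnt_change(raid_bdev, new_blockcnt) != 0) {
		/* Shrinking fails while the raid holds the base bdevs open. */
	}
}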

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I07136648c4189b970843fc6da51ff40355423144
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16261
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-01-17 19:45:34 +00:00
Karol Latecki
c6d73b5aaf test/vhost: add iobuf options to performance test
Recent changes to the iobuf and accel frameworks
require us to adjust iobuf pool sizes when running
tests with a high number of VMs and Vhost controllers.

Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I1a445379e755939875aebe97a6360ec0b0586287
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16267
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-01-17 09:50:00 +00:00
Sebastian Brzezinka
e54ffeb6c5 llvm/vfio: dump fuzzer logs to file
Fuzzer logs may become huge; it's better to store them as files.

Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: Ia85eb88fd648dc2fb90f5a3bd389e6df2ef0106e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15365
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
2023-01-16 17:08:44 +00:00
Sebastian Brzezinka
5303e1bd54 llvm_vfio_fuzz: keep corpus files
Keep the corpus directory that triggers new code coverage.

Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: I2a5154472588669fddd87c97cc952da1a92ae0ee
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15105
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
2023-01-16 17:08:44 +00:00
Konrad Sztyber
3d1d5452e0 accel: use iovecs for crc32c operations
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ic9f1f002edf273e9cd2247f353b5d7de9d2dea05
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15941
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-01-16 15:35:15 +00:00
Konrad Sztyber
bc6a14636a accel: use iovecs for fill operations
Also, make it possible to remove copy operations following a fill
operation if they're using the same buffers.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I7da195ce80650a02c5db99d9400ee692f797b1f8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15940
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-01-16 15:35:15 +00:00
Konrad Sztyber
42c19a8c92 ut/accel: use decompress in seq completion error test
Some of the copy operations can be elided, so they're not the best for
this kind of test. Instead, use another operation, decompress, which
can be appended to an accel sequence.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ic59e7678436bdf1d5ab6eb103de4cc0c0c347b9f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16018
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-01-16 15:35:15 +00:00
Konrad Sztyber
4d1ba5f294 accel: use iovecs for compare operations
Also, replace src2 with an iovec + iovcnt and rename it to s2 to
keep the naming consistent with the source buffer (s).

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I44787128377addd514818ec5aaec084b1a31f0c3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15939
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-01-16 15:35:15 +00:00
Konrad Sztyber
135396b0bc accel: use iovecs for dualcast operations
Also, replace dst2 with an iovec + iovcnt and rename it to d2 to
keep the naming consistent with the destination buffer (d).

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ib394c127eeb5890451535ff485f96f7edd2897a4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15938
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-01-16 15:35:15 +00:00
Konrad Sztyber
dee8e1f4c0 accel: use iovecs for copy operations
This patch is first in the series of patches aimed to make all accel
operations describe their buffers with iovecs.  The intention is to make
it easier to handle tasks in a generic way.

It doesn't mean that we change the API - all function signatures are
preserved.  If a function doesn't use iovecs, we use the aux_iovs array.
However, this does mean that each accel module that provides support for
a given operation will need to be adjusted to use iovecs.

Additionally, update the unit test checking copy elision to verify the
buffers of the copy operation that is left.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I9e6d8d1be3b8b9706cb4a6222dad30e8c373d8fb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15937
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-01-16 15:35:15 +00:00
Konrad Sztyber
58b12fc4b9 accel: support for buffers allocated from accel domain
Users can now specify buffers allocated through `spdk_accel_get_buf()`
when appending operations to a sequence.  When an operation in a
sequence is executed, we check whether it uses buffers from the accel
domain, allocate data buffers, and update all operations within the
sequence that were also using those buffers.
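
A hedged usage sketch; the exact spdk_accel_get_buf()/spdk_accel_put_buf() signatures below are assumptions based on this series:

#include "spdk/accel.h"

static int
accel_buf_example(struct spdk_io_channel *ch, uint64_t len)
{
	void *buf;
	struct spdk_memory_domain *domain;
	void *domain_ctx;
	int rc;

	/* The buffer stays an accel-domain handle until data is needed. */
	rc = spdk_accel_get_buf(ch, len, &buf, &domain, &domain_ctx);
	if (rc != 0) {
		return rc;
	}
	/* ... append operations to a sequence using buf/domain/domain_ctx ... */
	spdk_accel_put_buf(ch, buf, domain, domain_ctx);
	return 0;
}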

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I430206158f6a4289e15f04ddb18f0d1a2137f0b4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15748
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-01-16 15:35:15 +00:00
Michal Berger
db772436ca misc: Fixes targeted for latest shellcheck
The following directives are being fixed:

- SC2317
- SC2004

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: Ia080044aa5b7c885a01556b6927933b81f98eb9d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16025
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-01-16 09:45:23 +00:00
John Levon
9fa252375a util: add spdk_iov_one()
It's common to set up an iovec around a single buffer; add a helper for
this.
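
Usage is a one-liner; a small sketch (the buffer is arbitrary):

#include "spdk/util.h"

static void
iov_one_example(void)
{
	static char buf[512];
	struct iovec iov;
	int iovcnt;

	spdk_iov_one(&iov, &iovcnt, buf, sizeof(buf));
	/* iov.iov_base == buf, iov.iov_len == 512, iovcnt == 1 */
}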

Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Ic4183e29d78549ec102045c6af0b5ff448cb5c59
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16192
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-01-16 09:38:43 +00:00
John Levon
47568c65de util: add spdk_iov_memset()
And use it in a couple of places.
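
For example, zero-filling every buffer an iovec array describes (a sketch; the wrapper is illustrative):

#include "spdk/util.h"

static void
zero_iovs(struct iovec *iovs, int iovcnt)
{
	spdk_iov_memset(iovs, iovcnt, 0);
}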

Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I4b86cef0e9489c1435c0206dd6c5cda4ffe4d33a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16191
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-01-16 09:38:43 +00:00
MengjinWu
eb7506a1b4 lib/thread: iobuf get/put functions will not add offset
When a buffer is obtained, it does not need to reserve space
for the tailq header.

Signed-off-by: MengjinWu <mengjin.wu@intel.com>
Change-Id: I0aa2d77739fbb86a6e2df1c00a772aff1cb7c6e4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16181
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-01-16 08:35:33 +00:00
Dennis Maisenbacher
a36785df71 nvmf: Add ZNS specific identify functions for NVMe-oF ZNS support
In order to connect to a zoned SPDK NVMe-oF target, the ZNS-specific
identify functions must be implemented and the supported ZNS opcodes
must be set accordingly.

Implementing ZNS specific identify functions to return the 'I/O Command
Set specific Identify Namespace data structure (CNS 05h)'
(`spdk_nvmf_ns_identify_iocs_specific`) and 'I/O Command Set specific
Identify Controller data structure (CNS 06h)'
(`spdk_nvmf_ctrlr_identify_iocs_specific`).

Those functions return a null-filled data structure for any I/O Command
Set other than ZNS.

Signed-off-by: Dennis Maisenbacher <dennis.maisenbacher@wdc.com>
Change-Id: I6b9529ce0a86400afb01d4e09cbdb3e5c3a68514
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16044
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-01-16 08:30:34 +00:00
Changpeng Liu
eb5789ceae test/vfio_user: use 2048MiB static memory size for bdevperf
Previously we used 1024MiB of static memory for bdevperf, but that may
invoke DPDK dynamic memory allocation when calling `spdk_zmalloc`, and
this new memory region isn't registered to the remote target process.
vfio-user-like solutions are designed for pre-allocated memory, so here
we can increase the static memory size as a workaround.

Also add a debug log when testing.

Fix issue #2846.

Change-Id: I509093a12a63db2c9e9797da10eab9b5ee0b3aac
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16141
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-01-16 08:22:08 +00:00
Karol Latecki
a6b62f3221 test/nvmf: reduce connect_disconnect iterations
Reduce connect-disconnect iterations to 5 to
save execution time. 5 should be enough for a
basic test, which is significantly extended in
the nightly version.

Change-Id: I44549ccb96f69e925471acc91a1704a0b9e61d2b
Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16212
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: John Kariuki <John.K.Kariuki@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
2023-01-16 08:16:12 +00:00
Changpeng Liu
ac6ed1e540 test/vfio_user: update VFIO-USER QEMU version
Branch `vfio-user-patch1-noreq` is now recommended for VFIO-USER
VM test cases.

Change-Id: I8550d995795d923483877d9a81063f198a65d74a
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15914
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
2023-01-16 08:13:32 +00:00
Karol Latecki
9c53c34656 test/nvmf: unmask nvme disconnect in tests
We're not doing any type of "negative" testing here,
so we don't expect "nvme disconnect" to fail in
these tests, and thus it is not necessary to mask it.

Change-Id: Id8ae8706d33f1db74f5e5da811bb542859b55c44
Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16211
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
2023-01-13 08:55:21 +00:00
Karol Latecki
6c502fa7c0 test/nvmf: reduce workload runtime in host/timeout
Running a 20-second workload multiple times in the test
takes some time. Reduce the run times to shorten
the execution.

Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Change-Id: I4ecfa9d48f7ccaabb2a3707093da7662b5e5e807
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16214
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
2023-01-13 08:55:21 +00:00
Karol Latecki
4b3210bd7d test/nvmf: remove sleeps from nvmf_vfio_user test
This should save a few seconds of execution time.

Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Change-Id: I96ade7da77ee9031fc20e7d93d3ab130b9d9be1e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16213
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
2023-01-13 08:55:21 +00:00
Changpeng Liu
d6db5988c5 test/vhost: use light IO workloads for live migration tests
Use lightweight workload test cases in the VM to keep
the number of dirty pages at a low rate relative to the
VM's total memory.

Fix issue #2805.

Change-Id: I52efd0d0522ccef713ba2c3a451daac0683234dc
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15954
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
2023-01-13 07:47:10 +00:00
Shuhei Matsumoto
ae620784bd bdev/nvme: Retry I/O to the same path if error is I/O error
When an I/O gets an I/O error, the I/O path to which the I/O was
submitted may still be available. In this case, the I/O should be
retried on the same I/O path. However, a new I/O path was always
selected for an I/O retry.

For the active/passive policy, the same I/O path was selected naturally.
However, for the active/active policy, it was very likely that a
different I/O path was selected.

To use the same I/O path for an I/O retry, add a helper function
bdev_nvme_retry_io() into bdev_nvme_retry_ios() and replace
bdev_nvme_submit_request() by bdev_nvme_retry_io(). bdev_nvme_retry_io()
checks if nbdev_io->io_path is not NULL and is available. Then, call
_bdev_nvme_submit_request() if true, or call bdev_nvme_submit_request()
otherwise. For an I/O path error, clear nbdev_io->io_path for
clarity. Add a unit test to verify this change.
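
The decision reduces to a small check; a self-contained sketch with illustrative types (the real logic lives in bdev_nvme.c):

#include <stdbool.h>
#include <stddef.h>

struct io_path {
	bool available;
};

struct nvme_io {
	struct io_path *io_path; /* path the failed I/O was last submitted to */
};

/* Returns true if the retry should go back to the recorded path. */
static bool
retry_on_same_path(struct nvme_io *io)
{
	if (io->io_path != NULL && io->io_path->available) {
		return true;  /* resubmit via _bdev_nvme_submit_request() */
	}
	io->io_path = NULL;   /* path error: clear and let a new path be picked */
	return false;         /* resubmit via bdev_nvme_submit_request() */
}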

Linux kernel native NVMe multipath already takes this approach. Hence,
this change is reasonable.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I7022aafd8b1cdd5830c4f743d64b080aa970cf8d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16015
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Richael <richael.zhuang@arm.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-01-13 00:47:04 +00:00
Shuhei Matsumoto
21160add26 bdev/nvme: Factor out request submit functions into a helper function
The following patches will change I/O retry to use the same io_path if
it is still available. However, bdev_nvme_submit_request() always calls
bdev_nvme_find_io_path() first. For I/O retry, if possible, we want to
skip calling bdev_nvme_find_io_path() and use nbdev_io->io_path instead.
To reuse the code as much as possible and not to touch the fast code
path, factor out request submit functions from
bdev_nvme_submit_request() into _bdev_nvme_submit_request().

While developing this patch, a bug/mismatch was found such that
bdev_io->internal.ch was different from the ch argument of
bdev_nvme_submit_request(). Fix it in this patch as well.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Id003e033ecde218d1902bca5706c772edef5d5e5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16013
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-01-13 00:47:04 +00:00
Parameswaran Krishnamurthy
2796687d54 nvme: Added support for TP-8009, Auto-discovery of Discovery controllers for the NVMe initiator using mDNS via Avahi
Approach:
The Avahi daemon needs to be running to provide the mDNS server service. In SPDK, an Avahi-client-library-based client API is implemented.
The client API connects to the Avahi daemon and receives events for new discoveries and removals of existing discovery entries.

The following set of new RPCs has been introduced.

scripts/rpc.py bdev_nvme_start_mdns_discovery -b cdc_auto -s _nvme-disc._tcp

The user shall initiate an mDNS-based discovery using this RPC. This will start an Avahi-client-based poller
looking for new discovery events from the Avahi server. On a new discovery of a discovery controller,
the existing bdev_nvme_start_discovery API will be invoked with the trid of the discovery controller learnt.
This enables automatic connection of the initiator to the subsystems discovered from the discovery controller.
Multiple mDNS discovery instances can be run by specifying a unique bdev-prefix and a unique servicename to discover as parameters.

scripts/rpc.py bdev_nvme_stop_mdns_discovery -b cdc_auto

This will stop the Avahi poller that was started for the specified service. Internally, the bdev_nvme_stop_discovery
API will be invoked for each of the discovery controllers learnt automatically by this instance of the mDNS discovery service.
This results in termination of connections to all the subsystems learnt by this mDNS discovery instance.

scripts/rpc.py bdev_nvme_get_mdns_discovery_info

This RPC will display the list of mDNS discovery instances running and the trids of the controllers discovered by these instances.

Test Result:

root@ubuntu-pm-18-226:~/param-spdk/spdk/build/bin# ./nvmf_tgt -i 1 -s 2048 -m 0xF
root@ubuntu-pm-18-226:~/param-spdk/spdk# scripts/rpc.py bdev_nvme_start_mdns_discovery -b cdc_auto -s _nvme-disc._tcp
root@ubuntu-pm-18-226:~/param-spdk/spdk#
root@ubuntu-pm-18-226:~/param-spdk/spdk# scripts/rpc.py bdev_nvme_get_mdns_discovery_info
[
  {
    "name": "cdc_auto",
    "svcname": "_nvme-disc._tcp",
    "referrals": [
      {
        "name": "cdc_auto0",
        "trid": {
          "trtype": "TCP",
          "adrfam": "IPv4",
          "traddr": "66.1.2.21",
          "trsvcid": "8009",
          "subnqn": "nqn.2014-08.org.nvmexpress.discovery"
        }
      },
      {
        "name": "cdc_auto1",
        "trid": {
          "trtype": "TCP",
          "adrfam": "IPv4",
          "traddr": "66.1.1.21",
          "trsvcid": "8009",
          "subnqn": "nqn.2014-08.org.nvmexpress.discovery"
        }
      }
    ]
  }
]
root@ubuntu-pm-18-226:~/param-spdk/spdk#
root@ubuntu-pm-18-226:~/param-spdk/spdk# scripts/rpc.py bdev_nvme_get_discovery_info
[
  {
    "name": "cdc_auto0",
    "trid": {
      "trtype": "TCP",
      "adrfam": "IPv4",
      "traddr": "66.1.2.21",
      "trsvcid": "8009",
      "subnqn": "nqn.2014-08.org.nvmexpress.discovery"
    },
    "referrals": []
  },
  {
    "name": "cdc_auto1",
    "trid": {
      "trtype": "TCP",
      "adrfam": "IPv4",
      "traddr": "66.1.1.21",
      "trsvcid": "8009",
      "subnqn": "nqn.2014-08.org.nvmexpress.discovery"
    },
    "referrals": []
  }
]
root@ubuntu-pm-18-226:~/param-spdk/spdk# scripts/rpc.py bdev_get_bdevs
[
  {
    "name": "cdc_auto02n1",
    "aliases": [
      "600110d6-1681-1681-0403-000045805c45"
    ],
    "product_name": "NVMe disk",
    "block_size": 512,
    "num_blocks": 32768,
    "uuid": "600110d6-1681-1681-0403-000045805c45",
    "assigned_rate_limits": {
      "rw_ios_per_sec": 0,
      "rw_mbytes_per_sec": 0,
      "r_mbytes_per_sec": 0,
      "w_mbytes_per_sec": 0
    },
    "claimed": false,
    "zoned": false,
    "supported_io_types": {
      "read": true,
      "write": true,
      "unmap": true,
      "write_zeroes": true,
      "flush": true,
      "reset": true,
      "compare": true,
      "compare_and_write": true,
      "abort": true,
      "nvme_admin": true,
      "nvme_io": true
    },
    "driver_specific": {
      "nvme": [
        {
          "trid": {
            "trtype": "TCP",
            "adrfam": "IPv4",
            "traddr": "66.1.1.40",
            "trsvcid": "4420",
            "subnqn": "nqn.2014-08.com.sanblaze:virtualun.virtualun.3.0"
          },
          "ctrlr_data": {
            "cntlid": 3,
            "vendor_id": "0x0000",
            "model_number": "SANBlaze VLUN P3T0",
            "serial_number": "00-681681dc681681dc",
            "firmware_revision": "V10.5",
            "subnqn": "nqn.2014-08.com.sanblaze:virtualun.virtualun.3.0",
            "oacs": {
              "security": 0,
              "format": 1,
              "firmware": 0,
              "ns_manage": 1
            },
            "multi_ctrlr": true,
            "ana_reporting": true
          },
          "vs": {
            "nvme_version": "2.0"
          },
          "ns_data": {
            "id": 1,
            "ana_state": "optimized",
            "can_share": true
          }
        }
      ],
      "mp_policy": "active_passive"
    }
  },
  {
    "name": "cdc_auto00n1",
    "aliases": [
      "600110da-09a6-09a6-0302-00005eeb19b4"
    ],
    "product_name": "NVMe disk",
    "block_size": 512,
    "num_blocks": 2048,
    "uuid": "600110da-09a6-09a6-0302-00005eeb19b4",
    "assigned_rate_limits": {
      "rw_ios_per_sec": 0,
      "rw_mbytes_per_sec": 0,
      "r_mbytes_per_sec": 0,
      "w_mbytes_per_sec": 0
    },
    "claimed": false,
    "zoned": false,
    "supported_io_types": {
      "read": true,
      "write": true,
      "unmap": true,
      "write_zeroes": true,
      "flush": true,
      "reset": true,
      "compare": true,
      "compare_and_write": true,
      "abort": true,
      "nvme_admin": true,
      "nvme_io": true
    },
    "driver_specific": {
      "nvme": [
        {
          "trid": {
            "trtype": "TCP",
            "adrfam": "IPv4",
            "traddr": "66.1.2.40",
            "trsvcid": "4420",
            "subnqn": "nqn.2014-08.com.sanblaze:virtualun.virtualun.2.0"
          },
          "ctrlr_data": {
            "cntlid": 1,
            "vendor_id": "0x0000",
            "model_number": "SANBlaze VLUN P2T0",
            "serial_number": "00-ab09a6f5ab09a6f5",
            "firmware_revision": "V10.5",
            "subnqn": "nqn.2014-08.com.sanblaze:virtualun.virtualun.2.0",
            "oacs": {
              "security": 0,
              "format": 1,
              "firmware": 0,
              "ns_manage": 1
            },
            "multi_ctrlr": true,
            "ana_reporting": true
          },
          "vs": {
            "nvme_version": "2.0"
          },
          "ns_data": {
            "id": 1,
            "ana_state": "optimized",
            "can_share": true
          }
        }
      ],
      "mp_policy": "active_passive"
    }
  },
  {
    "name": "cdc_auto01n1",
    "aliases": [
      "600110d6-dce8-dce8-0403-00010b2d3d8c"
    ],
    "product_name": "NVMe disk",
    "block_size": 512,
    "num_blocks": 32768,
    "uuid": "600110d6-dce8-dce8-0403-00010b2d3d8c",
    "assigned_rate_limits": {
      "rw_ios_per_sec": 0,
      "rw_mbytes_per_sec": 0,
      "r_mbytes_per_sec": 0,
      "w_mbytes_per_sec": 0
    },
    "claimed": false,
    "zoned": false,
    "supported_io_types": {
      "read": true,
      "write": true,
      "unmap": true,
      "write_zeroes": true,
      "flush": true,
      "reset": true,
      "compare": true,
      "compare_and_write": true,
      "abort": true,
      "nvme_admin": true,
      "nvme_io": true
    },
    "driver_specific": {
      "nvme": [
        {
          "trid": {
            "trtype": "TCP",
            "adrfam": "IPv4",
            "traddr": "66.1.1.40",
            "trsvcid": "4420",
            "subnqn": "nqn.2014-08.com.sanblaze:virtualun.virtualun.3.1"
          },
          "ctrlr_data": {
            "cntlid": 3,
            "vendor_id": "0x0000",
            "model_number": "SANBlaze VLUN P3T1",
            "serial_number": "01-6ddce86d6ddce86d",
            "firmware_revision": "V10.5",
            "subnqn": "nqn.2014-08.com.sanblaze:virtualun.virtualun.3.1",
            "oacs": {
              "security": 0,
              "format": 1,
              "firmware": 0,
              "ns_manage": 1
            },
            "multi_ctrlr": true,
            "ana_reporting": true
          },
          "vs": {
            "nvme_version": "2.0"
          },
          "ns_data": {
            "id": 1,
            "ana_state": "optimized",
            "can_share": true
          }
        }
      ],
      "mp_policy": "active_passive"
    }
  },
  {
    "name": "cdc_auto01n2",
    "aliases": [
      "600110d6-dce8-dce8-0403-00010b2d3d8d"
    ],
    "product_name": "NVMe disk",
    "block_size": 512,
    "num_blocks": 32768,
    "uuid": "600110d6-dce8-dce8-0403-00010b2d3d8d",
    "assigned_rate_limits": {
      "rw_ios_per_sec": 0,
      "rw_mbytes_per_sec": 0,
      "r_mbytes_per_sec": 0,
      "w_mbytes_per_sec": 0
    },
    "claimed": false,
    "zoned": false,
    "supported_io_types": {
      "read": true,
      "write": true,
      "unmap": true,
      "write_zeroes": true,
      "flush": true,
      "reset": true,
      "compare": true,
      "compare_and_write": true,
      "abort": true,
      "nvme_admin": true,
      "nvme_io": true
    },
    "driver_specific": {
      "nvme": [
        {
          "trid": {
            "trtype": "TCP",
            "adrfam": "IPv4",
            "traddr": "66.1.1.40",
            "trsvcid": "4420",
            "subnqn": "nqn.2014-08.com.sanblaze:virtualun.virtualun.3.1"
          },
          "ctrlr_data": {
            "cntlid": 3,
            "vendor_id": "0x0000",
            "model_number": "SANBlaze VLUN P3T1",
            "serial_number": "01-6ddce86d6ddce86d",
            "firmware_revision": "V10.5",
            "subnqn": "nqn.2014-08.com.sanblaze:virtualun.virtualun.3.1",
            "oacs": {
              "security": 0,
              "format": 1,
              "firmware": 0,
              "ns_manage": 1
            },
            "multi_ctrlr": true,
            "ana_reporting": true
          },
          "vs": {
            "nvme_version": "2.0"
          },
          "ns_data": {
            "id": 2,
            "ana_state": "optimized",
            "can_share": true
          }
        }
      ],
      "mp_policy": "active_passive"
    }
  }
]
root@ubuntu-pm-18-226:~/param-spdk/spdk#

root@ubuntu-pm-18-226:~/param-spdk/spdk# scripts/rpc.py bdev_nvme_stop_mdns_discovery -b cdc_auto
root@ubuntu-pm-18-226:~/param-spdk/spdk#
root@ubuntu-pm-18-226:~/param-spdk/spdk# scripts/rpc.py bdev_nvme_get_mdns_discovery_info
[]
root@ubuntu-pm-18-226:~/param-spdk/spdk# scripts/rpc.py bdev_nvme_get_discovery_info
[]
root@ubuntu-pm-18-226:~/param-spdk/spdk# scripts/rpc.py bdev_get_bdevs
[]
root@ubuntu-pm-18-226:~/param-spdk/spdk#

Signed-off-by: Parameswaran Krishnamurthy <parameswaran.krishna@dell.com>
Change-Id: Ic2c2e614e2549a655c7f81ae844b80d8505a4f02
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15703
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Boris Glimcher <Boris.Glimcher@emc.com>
Reviewed-by: <qun.wan@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-01-12 17:22:48 +00:00