Also, since this was the last operation using src, remove this field
from spdk_accel_task.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I55fd98697ef4f92a13dd0563b4adf9ccb0af171b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15942
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
When initializing SPDK, the used DPDK args are printed.
Unfortunately before each argument a timestamp is added.
Rather than using SPDK_PRINTF for each argument, build
up the whole line and then print it in one go.
Please see before:
[2022-12-20 13:52:05.647131] [ DPDK EAL parameters: [2022-12-20
13:52:05.647145] spdk_tgt [2022-12-20 13:52:05.647159] --no-shconf
[2022-12-20 13:52:05.647170] -c 0x1 [2022-12-20 13:52:05.647185]
--huge-unlink [2022-12-20 13:52:05.647199] --log-level=lib.eal:6
[2022-12-20 13:52:05.647221] --log-level=lib.cryptodev:5 [2022-12-20
13:52:05.647232] --log-level=user1:6 [2022-12-20 13:52:05.647251]
--iova-mode=pa [2022-12-20 13:52:05.647261] --base-virtaddr=0x200000000000
[2022-12-20 13:52:05.647275] --match-allocations [2022-12-20
13:52:05.647286] --file-prefix=spdk_pid1352179 [2022-12-20 13:52:05.647307]
]
And after:
[2022-12-20 13:52:29.038353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c
0x1 --huge-unlink --log-level=lib.eal:6 --log-level=lib.cryptodev:5
--log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000
--match-allocations --file-prefix=spdk_pid1358716 ]
Change-Id: I4c6c25818ae99bad942bf61ab590f971d339ffc6
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16031
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Also, make it possible to remove copy operations following a fill
operation if they're using the same buffers.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I7da195ce80650a02c5db99d9400ee692f797b1f8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15940
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Also, replace src2 with an iovec + iovcnt and rename it to s2 to
keep the naming consistent with the source buffer (s).
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I44787128377addd514818ec5aaec084b1a31f0c3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15939
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Also, replace dst2 with an iovec + iovcnt and rename it to d2 to
keep the naming consistent with the destination buffer (d).
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ib394c127eeb5890451535ff485f96f7edd2897a4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15938
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This patch is the first in a series of patches aimed at making all accel
operations describe their buffers with iovecs. The intention is to make
it easier to handle tasks in a generic way.
It doesn't mean that we change the API - all function signatures are
preserved. If a function doesn't use iovecs, we use the aux_iovs array.
However, this does mean that each accel module that provides support for
a given operation will need to be adjusted to use iovecs.
Additionally, update the unit test checking copy elision to verify the
buffers of the copy operation that is left.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I9e6d8d1be3b8b9706cb4a6222dad30e8c373d8fb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15937
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Users can now specify buffers allocated through `spdk_accel_get_buf()`
when appending operations to a sequence. When an operation in a
sequence is executed, we check whether it uses buffers from the accel
domain, allocate data buffers, and update all operations within the
sequence that were also using those buffers.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I430206158f6a4289e15f04ddb18f0d1a2137f0b4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15748
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
It's common to set up an iovec around a single buffer; add a helper for
this.
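A minimal sketch of what such a helper might look like (illustrative only; the actual helper added by this patch may differ in name and signature):
```
#include <stddef.h>
#include <sys/uio.h>

/* Illustrative sketch - wrap a single buffer in a one-element iovec. */
static inline void
iov_one(struct iovec *iov, int *iovcnt, void *buf, size_t buflen)
{
	iov->iov_base = buf;
	iov->iov_len = buflen;
	*iovcnt = 1;
}
```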
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Ic4183e29d78549ec102045c6af0b5ff448cb5c59
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16192
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
And use it in a couple of places.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I4b86cef0e9489c1435c0206dd6c5cda4ffe4d33a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16191
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
When a buffer is retrieved, there is no need to reserve space
for the tailq header.
Signed-off-by: MengjinWu <mengjin.wu@intel.com>
Change-Id: I0aa2d77739fbb86a6e2df1c00a772aff1cb7c6e4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16181
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This VTune integration was added many years ago, but
hasn't been tested and to my knowledge is not being
used by anyone. The statistics it enables are very
limited, specific to the bdev nvme module with no
insight into the rest of an SPDK application.
So deprecate this support now; we will remove it
immediately after the v23.01 release.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5552d85084c350e9d0b2570946801acd65a89d64
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16294
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
In order to connect to a zoned SPDK NVMe-oF target the ZNS specific
identify functions must be implemented and the supported ZNS opcodes
must be set accordingly.
Enable the zone management send and receive opcodes within the
`g_cmds_and_effect_log_page`. If the backing zoned bdev supports the
zone append command, the `nvmf_get_cmds_and_effects_log_page` function
will respect that in the returned data structure.
Signed-off-by: Dennis Maisenbacher <dennis.maisenbacher@wdc.com>
Change-Id: Id9dd22a8696aa28177cc52e1f3587e10194de910
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16045
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
In order to connect to a zoned SPDK NVMe-oF target the ZNS specific
identify functions must be implemented and the supported ZNS opcodes
must be set accordingly.
Implement ZNS-specific identify functions to return the 'I/O Command
Set specific Identify Namespace data structure (CNS 05h)'
(`spdk_nvmf_ns_identify_iocs_specific`) and 'I/O Command Set specific
Identify Controller data structure (CNS 06h)'
(`spdk_nvmf_ctrlr_identify_iocs_specific`).
Those functions return a zero-filled data structure for any I/O Command
Set other than ZNS.
Signed-off-by: Dennis Maisenbacher <dennis.maisenbacher@wdc.com>
Change-Id: I6b9529ce0a86400afb01d4e09cbdb3e5c3a68514
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16044
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Layout's regions need to be aligned to write unit size.
Calculate the exact number of bands needed for metadata, rather than
assuming 1 band is enough.
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
Signed-off-by: Mariusz Barczak <mariusz.barczak@intel.com>
Change-Id: Ib304ea65a35d8b34518efda02379072355c0cd10
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16218
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Don't let the invalidity value continuously drop in degenerate scenarios -
previously this could happen if band_cmp picked a band based on another value when invalidity was similar.
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
Signed-off-by: Mariusz Barczak <mariusz.barczak@intel.com>
Change-Id: I33166501e832cd7f359b3acef1e614cf9b1288d5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16215
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
If the host disconnects while fabric commands are offloaded to
DSA, there will be use-after-free problems.
For now, disable offloading of fabrics commands.
Fixes issue #2828
Signed-off-by: MengjinWu <mengjin.wu@intel.com>
Change-Id: I669b01728e1ad275b7b121d47141bdf3fe5f7d9f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15992
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Always add -larchive to DPDK static link args if
libarchive is available. This is less fragile than
the previous mechanism of trying to remove
RTE_HAS_LIBARCHIVE to keep DPDK from trying to use it.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ib26fc204927d8967b98d416373fc91446169d5af
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15951
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Refactoring to avoid code duplication in the following commits.
Signed-off-by: Dennis Maisenbacher <dennis.maisenbacher@wdc.com>
Change-Id: I5a597a02c810cfa1fad6dc397d012cf6a3f189ca
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16043
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Use uint32_t to avoid overflow when left-shifting by 16 places.
This patch fixes issue #2858.
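A hedged illustration of the kind of overflow being avoided (not the exact code from the patch):
```
#include <stdint.h>

static uint32_t
combine(uint16_t hi, uint16_t lo)
{
	/* "hi << 16" would promote hi to a signed int; if the top bit of hi is
	 * set, shifting it into bit 31 overflows.  Widening to uint32_t first
	 * keeps the shift well defined. */
	return ((uint32_t)hi << 16) | lo;
}
```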
Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: I07f4328674ae7bd7525792ca1e424e85a932c87f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16180
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
spdk_bdev_module_init() must only be called if the module sets
async_init to true. This patch fixes the doc string to match the
implementation and adds an assert() to catch API usage errors early.
Change-Id: I677345de028c8f7597ecf81ff9b9b855867bbf01
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16133
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
As an lvstore is being loaded, blobs are iterated with
spdk_bs_iter_next(), which opens a blob, calls lvol_next_lvol(), then
closes the blob. Since the blob struct that is passed to
load_next_lvol() is only transiently opened, it should not be stored
with the lvol.
A short while later, the lvstore opens each lvol by calling
spdk_lvol_open() from _vbdev_lvs_examine_cb(). At that time, the lvstore
holds the open reference and only then is it safe to keep a reference to
the blob.
Change-Id: I309227b23b59058a58167a9dac35af5fabc29d98
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14965
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
spdk_nvme_cpl_get_status_string() returns a string which contains upper
cases, spaces, and hyphens. To use the returned string for JSON RPC, we
have to convert it to a string which contains only lowercases and
underscores.
For our convenience, add a new API spdk_strcpy_replace() to replace
all occurrences of the search string with the replacement string.
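To illustrate the intended replace-all behaviour (this is only a sketch, not the SPDK implementation or its exact signature):
```
#include <stdio.h>
#include <string.h>

/* Illustrative replace-all sketch: copy 'src' into 'dst', substituting every
 * occurrence of 'search' with 'replace'. */
static void
replace_all(char *dst, size_t dst_size, const char *src,
	    const char *search, const char *replace)
{
	size_t search_len = strlen(search);
	size_t out = 0;

	while (*src != '\0' && out + 1 < dst_size) {
		if (strncmp(src, search, search_len) == 0) {
			size_t n = strlen(replace);

			if (out + n + 1 > dst_size) {
				break;
			}
			memcpy(dst + out, replace, n);
			out += n;
			src += search_len;
		} else {
			dst[out++] = *src++;
		}
	}
	dst[out] = '\0';
}

int
main(void)
{
	char buf[64];

	/* Lowercasing is a separate step; here we only show space -> underscore. */
	replace_all(buf, sizeof(buf), "INVALID OPCODE", " ", "_");
	printf("%s\n", buf); /* prints INVALID_OPCODE */
	return 0;
}
```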
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I3ca9774d0bfb2d0bb7bd7412bc671e6f69104b7d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16054
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Use (void) on cmd and remove the increment to fix
a clang 15 -Werror failure.
vmd.c:368:11: error: variable 'cmd' set but not used [-Werror,-Wunused-but-set-variable]
uint16_t cmd = dev->header->zero.command;
^
1 error generated.
Signed-off-by: Kamil Godzwon <kamilx.godzwon@intel.com>
Change-Id: I4e383ac41b46d13df0210bf90f11f6130290f243
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16127
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
The patch below started checking for a development version of DPDK
using rte_version_release():
(32e6ffb) env_dpdk: add support for DPDK main branch for 23.03
rte_version_release() is present starting with DPDK 21.11,
so it broke earlier versions like DPDK 20.11 packaged on Fedora 35.
SPDK supports only last two DPDK LTS versions, which does not include DPDK 20.11.
Yet there is no need to break older versions unnecessarily.
Another aspect is that rte_version_release() is marked as experimental,
so it could change in the future. Only using the stable rte_version()
helps with forward compatibility too.
Change-Id: Id17d643a12dcfc03c2d4688d1bc5030dc339f428
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reported-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16017
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Added spdk_nvme_qpair_get_num_outstanding_reqs to get the number
of outstanding reqs for a specific qpair.
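As a hedged sketch of the multipath use case mentioned above (the prototype below is assumed to match what this patch adds):
```
#include <stdint.h>

struct spdk_nvme_qpair;

/* Assumed prototype of the new API. */
uint32_t spdk_nvme_qpair_get_num_outstanding_reqs(struct spdk_nvme_qpair *qpair);

/* Pick the qpair with the fewest outstanding requests - a simple
 * queue-depth based path selection sketch. */
static struct spdk_nvme_qpair *
pick_least_busy(struct spdk_nvme_qpair **qpairs, int num_qpairs)
{
	struct spdk_nvme_qpair *best = NULL;
	uint32_t best_depth = UINT32_MAX;
	int i;

	for (i = 0; i < num_qpairs; i++) {
		uint32_t depth = spdk_nvme_qpair_get_num_outstanding_reqs(qpairs[i]);

		if (depth < best_depth) {
			best_depth = depth;
			best = qpairs[i];
		}
	}
	return best;
}
```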
Change-Id: I55d75a7363ac63bd26db76594e70e8b17b3e5830
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15916
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Added num_outstanding_reqs in struct spdk_nvme_qpair to record outstanding
req number in each qpair. This can be used by multipath to select I/O
path.
Increment num_outstanding_reqs when a req is removed from the free_req queue and
decrement it when the req is put back in the free_req queue.
Change-Id: I31148fc7d0a9a85bec4c56d1f6e3047b021c2f48
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15875
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
bdev_crypto uses memset() to zero secrets passed
by the user (cleanup/error path), which is not safe -
the compiler may detect that the buffer being zeroed
is not accessed any more and may "optimize" (drop)
the zeroing.
The C11 standard introduces memset_s, which guarantees to
change the buffer content, but this function is optional and
gcc may not support it. As an alternative, add a default
implementation that is not optimal from a performance point of view.
Add a unit test to math_ut.c to avoid creating a new .c file
for one simple test.
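A minimal sketch of the kind of default implementation described here, assuming a volatile-pointer approach (the actual helper's name and implementation may differ):
```
#include <stddef.h>

/* Zero a buffer through a volatile-qualified pointer so the compiler cannot
 * prove the stores are dead and drop them. */
static void
zero_secret(void *buf, size_t len)
{
	volatile unsigned char *p = buf;

	while (len--) {
		*p++ = 0;
	}
}
```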
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I11c7d15610df02e4a3761a88c85f6f8c54fb4b0a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16038
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The FIELD_OK macro in blob_open_opts_copy() should consider offsets in
struct spdk_blob_open_opts, not struct spdk_blob_opts.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I62e22acbe7dfb994453a379c92f78b7e9bc7fc13
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14962
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Blob IDs are sequentially assigned starting at 0x100000000. When
debugging with a small number of blob IDs, it is much more intuitive to
see blob ID 0x100000000 rather than blob ID 4294967296.
In commit 76a577b082 a similar change was
made to blobcli.
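A tiny illustration of why hex reads better here (hypothetical log statement, not the patch's exact code):
```
#include <inttypes.h>
#include <stdio.h>

int
main(void)
{
	uint64_t blobid = 0x100000000ULL + 5;

	/* Decimal: "blob ID 4294967301" - hard to relate to the ID space. */
	printf("blob ID %" PRIu64 "\n", blobid);
	/* Hex: "blob ID 0x100000005" - the sequential offset is obvious. */
	printf("blob ID 0x%" PRIx64 "\n", blobid);
	return 0;
}
```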
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ic5321a83b57cf8c9f8df48cd424a926b6fec4ba8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14963
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
It's the same as spdk_iovcpy(), but the dst/src buffers can overlap.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I6daa0a846d7d1deac2c01d1a1be09171fa8bf796
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15747
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The data buffers backed by these accel buffers aren't allocated
immediately, but only when they're necessary to execute a given
operation. It allows users to append operations to a sequence, without
actually reserving large space for the data. That way, if some of these
buffers aren't needed to execute a sequence, they won't be allocated.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ieeea8a011b40c7f2f33e9a6f03fe34264e9316f3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15746
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
It will be used for allocating buffers from accel domain and
allocating bounce buffers to push/pull the data from memory domains for
modules that don't support memory domains.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Idbe4d2129d0aff87d9e517214e9f81e8470c5088
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15745
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This domain is meant to represent data being transformed by accel
engine. Users will be able to allocate buffers from that memory domain
and use them when appending operations to an accel sequence.
Since these buffers are only meant to be used as placeholders for actual
buffers, none of the push/pull/translate callbacks are implemented. To
access the data after it was transformed by accel, users should make
sure that the final command's destination buffer isn't allocated from
accel memory domain.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia031c7b205e98792d0a93f01513101b86afa9faa
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15744
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reversing a sequence means that the order of its operations is reversed,
i.e. the first operation becomes last and vice versa. It's especially
useful in read paths, as it makes it possible to build the sequence
during submission, then, once the data is read from storage, reverse the
sequence and execute it.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I93d617c1e6d251f8c59b94c50dc4300e51908096
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15636
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Operation sequence should always be treated as a whole, meaning that
users cannot rely on the contents of any intermediate buffers and should
only care about the buffer that's the destination of the whole
operation. This allows us to remove some of those copy operations by
changing source / destination buffer of a preceding / following
operation.
If a sequence is using buffers from non-local memory domain, users can
append a copy operation to a sequence to specify a local destination
buffer. If the module executing the operations is aware of memory
domains, this can avoid doing an extra spdk_memory_domain_pull_data().
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I93b94d46ee32700819e9e6f1c55350692db8a67a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15530
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This patch introduces the concept of chaining multiple accel operations
and executing them all at once in a single step. This means that it
will be possible to schedule accel operations at different layers of the
stack (e.g. copy in NVMe-oF transport, crypto in bdev_crypto), but
execute them all in a single place. Thanks to this, we can take
advantage of hardware accelerators that supports executing multiple
operations as a single operation (e.g. copy + crypto).
This operation group is called spdk_accel_sequence and operations can be
appended to that object via one of the spdk_accel_append_* functions.
New operations are always added at the end of a sequence. Users can
specify a callback to be notified when a particular operation in a
sequence is completed, but they don't receive the status of whether it
was successful or not. This is by design, as they shouldn't care about
the status of an individual operation and should rely on other means to
receive the status of the whole sequence. It's also important to note
that any intermediate steps within a sequence may not produce observable
results. For instance, after appending a copy from A to B and then a copy
from B to C, it's indeterminate whether A's data will be in B after the
sequence is executed. It is only guaranteed that A's data will be in C.
A sequence can also be reversed using spdk_accel_sequence_reverse(),
meaning that the first operation becomes last and vice versa. It's
especially useful in read paths, as it makes it possible to build the
sequence during submission, then, once the data is read from storage,
reverse the sequence and execute it.
Finally, there are two ways to terminate a sequence: aborting or
executing. It can be aborted via spdk_accel_sequence_abort() which will
execute individual operations' callbacks and free any allocated
resources. To execute it, one must use spdk_accel_sequence_finish().
For now, each operation is executed one by one and is submitted to the
appropriate accel module. Executing multiple operations as a single one
will be added in the future.
Also, currently, only fill and copy operations can be appended to a
sequence. Support for more operations will be added in subsequent
patches.
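A hedged, self-contained sketch of the flow described above. The helpers below are simplified stand-ins: the real spdk_accel_append_*/spdk_accel_sequence_* functions take additional io-channel, iovec, memory-domain and callback arguments not shown here.
```
#include <stddef.h>
#include <stdint.h>

struct accel_seq;                    /* stand-in for struct spdk_accel_sequence */
typedef void (*step_cb)(void *ctx);  /* per-operation completion callback */

/* Hypothetical simplified wrappers around the real append/finish/abort calls. */
int  seq_append_fill(struct accel_seq **seq, void *dst, size_t len, uint8_t pattern,
		     step_cb cb, void *ctx);
int  seq_append_copy(struct accel_seq **seq, void *dst, const void *src, size_t len,
		     step_cb cb, void *ctx);
void seq_reverse(struct accel_seq *seq);
int  seq_finish(struct accel_seq *seq, void (*done)(void *ctx, int status), void *ctx);
void seq_abort(struct accel_seq *seq);

static void
op_done(void *ctx)
{
	/* Per-operation callback: no status by design - rely on the
	 * sequence-level completion for success/failure. */
	(void)ctx;
}

static void
seq_done(void *ctx, int status)
{
	/* Status of the whole sequence is reported here. */
	(void)ctx;
	(void)status;
}

static int
build_and_run(void *a, void *b, void *c, size_t len)
{
	struct accel_seq *seq = NULL;
	int rc;

	/* Queue a fill of A, then copies A->B and B->C.  Nothing executes yet;
	 * after the sequence runs only C is guaranteed to hold the data - B is
	 * an intermediate buffer that may be elided. */
	rc = seq_append_fill(&seq, a, len, 0xa5, op_done, NULL);
	if (rc == 0) {
		rc = seq_append_copy(&seq, b, a, len, op_done, NULL);
	}
	if (rc == 0) {
		rc = seq_append_copy(&seq, c, b, len, op_done, NULL);
	}
	if (rc != 0) {
		seq_abort(seq);
		return rc;
	}

	/* Execute the whole chain in one go. */
	return seq_finish(seq, seq_done, NULL);
}
```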
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Id35d093e14feb59b996f780ef77e000e10bfcd20
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15529
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The existing `vhost_user_session_send_event` is only used to
stop a vhost-user device's session now, so we rename it to
`vhost_user_wait_for_session_stop` and also give the related
function calls used when stopping the device
more apposite names.
Change-Id: Ib8ea48273e85f7856ca2dfca57b5fd933ac4cf7a
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15296
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
For a vhost-user device, the variable `active_session_num` is used
to count the number of sessions of the device. We don't use
it anywhere, and the assertion on this variable is already
guaranteed by `vsessions_num`, so just remove it.
Change-Id: I335a75d17583b3744a41152b35cd5a1a8762a687
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15189
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
If we kill the vhost process while a VM is connected, `g_fini_cb`
will not be called because there is an active session in the vhost-user
device. However, we are sure the VM is stopped in this case, because
`vhost_driver_unregister` is called in the shutdown thread, so here
we reuse the `g_vhost_user_started` flag for this case and free the sessions.
The following call to `vhost_driver_unregister` can also handle this
case, because the Unix domain socket is already unregistered.
Fixes commit 327d1c98 ("vhost: defer vhost_dev_unregister until scsi tgts removed")
Change-Id: I4f368ac8c304dd9525d15abdce8fd5b2ed79b96e
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15623
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The `rte_vhost_driver_unregister` API for removing a socket is not
asynchronous; it may call SPDK ops for adding a new connection
or removing a connection, so we can't hold the user device lock
when calling this function, and we reject adding a new connection
while `rte_vhost_driver_unregister` is being called.
Fixes issue #2748.
Change-Id: I5594224f26374b2336d64175ecd5e5ec3d545a58
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15483
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
`spdk_vhost_dev` is created and deleted via RPCs or APIs, and
we use a global `spdk_vhost_lock` to protect it, but in
some other places, such as vhost-user message processing,
we also use the global lock for now. We don't actually
need this lock there, because vhost-user message
processing will not delete or add vhost devices.
While here, we add a `spdk_vhost_user_dev` access lock to
protect vhost-user message processing as an optimization.
Change-Id: Ia9c45b056cebb7b65f458d56ed775a15e386f905
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15184
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Feng Li <lifeng1519@gmail.com>
nvmf_ctrlr_cmd_connect() can only handle a request in one buffer
(req->data); sanity check it's not split across IOVs.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I595d8542ce71e56cf2b074f4cf41bce440f6dc26
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16123
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Thanos Makatos <thanos.makatos@nutanix.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This code has a potential problem similar to the one the identify
and log page commands had: stop using req->data in favour of IOVs.
We also need to fix the unit tests to initialize the iovs.
We don't change the existing "set" behaviour of requiring a single IOV
here.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I257567a7abd5fc3ed9ee21b432c7da7d70fbbde0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16122
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Thanos Makatos <thanos.makatos@nutanix.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
In the previous fix:
adc2942ad nvmf: nvmf_ctrlr_get_log_page use iovs to store the log page
a data corruption bug in the log page code was fixed. Previously, it
used req->data, which may be too short a buffer in the case that the
buffer is split across more than one IOV. req->data is never safe to use
in this situation. The code was changed to use the provided iovs instead
of req->data.
However, the identify command handling was still vulnerable to this
problem, and has been seen in real life at least with a CentOS guest VM.
The fix is basically the same: use the IOV utility functions to write
out the response instead of directly trying to use req->data.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I00445895af20e43be73189629576eee0667f86dd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16121
Reviewed-by: Thanos Makatos <thanos.makatos@nutanix.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Move the IOV handling code in ctrlr.c to the top of the file, for
subsequent use.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Ibddde1cb964d8aaecf4673ffa6d4147d0a48020c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16120
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Thanos Makatos <thanos.makatos@nutanix.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Add a define for the Identify command buffer instead of using a raw
value.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I9073ff84e2fa2ef9268051b898fe1027d8e97baa
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16119
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
In preparation for supporting additional claim types, create a claim
type that represents the current claim type. Everything that sticks to
the public APIs should continue to work as before.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I0d02e4b3f4bbf4eb5a7391028aa31e999f9da915
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15286
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
In preparation for an updated claims API, refactor
bdev->internal.claim_module into a union that will eventually hold
different information based on the type of claim.
Change-Id: I7ade6f03128bdb0f8375a95ae953cb63d6aa686d
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15285
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
This calls bdev_ok_to_examine() once per bdev_examine(). Prior to this
commit, bdev_ok_to_examine() may be called up to twice per bdev module.
The results returned by bdev_ok_to_examine() could be affected by:
1. g_bdev_opts.bdev_auto_examine changing
2. spdk_bdev_examine() being called on a particular bdev
3. An alias being added for an existing bdev
It's not clear that anything good comes from racing in conditions 1 and
3. In condition 2, spdk_bdev_examine() calls bdev_examine(), so any
required examine_config() and examine_disk() calls are still made, just
now with less of a race with the previous invocation of
examine_config().
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I496fc44fd74693837d6b449d7fa60f58f9dbf36f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15284
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
This closes races between concurrent spdk_bdev_module_claim_bdev()
and/or spdk_bdev_module_release_bdev() calls affecting the same bdev by
holding bdev->internal.spinlock while claiming and releasing a bdev. It
also closes a potential TOCTOU bug that optimizing compilers probably
already eliminate in bdev_finish_unregister_bdevs_iter(), and documents
that bdev->internal.claim_module is protected by
bdev->internal.spinlock.
This can be removed when the bdev_register_examine_thread deprecation
is removed.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ib48552df065d5172139a61bbc00b391f36552c0c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15282
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Since bdev_examine() can happen on any thread and it happens without any
other lock being held on the spdk_bdev_module, it is possible for
multiple threads to try to simultaneously increment
module->internal.action_in_progress. Decrements may also race.
This commit adds bdev_module->internal.spinlock and holds it while
modifying module->internal.action_in_progress.
This can be removed when the bdev_register_examine_thread deprecation
is removed.
Change-Id: I9c401eeb3c7c97c484e16fa9cfd82668b32e508b
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15281
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
This introduces a deprecation for calling spdk_bdev_register() and
spdk_bdev_examine() on a thread other than the app thread. The
deprecation period starts in SPDK 23.01 and removal is expected in SPDK
23.05.
The intent of this deprecation is to ensure that bdev modules'
examine_config() and examine_disk() callbacks are only ever called on
the app thread. This is largely a formalization of what has long happened
due to the RPC poller running on the first thread started by
spdk_app_start().
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ic9d7b87b6522be20357d2eab2d0c77cd5753452f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15690
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Mateusz Kozlowski <mateusz.kozlowski@intel.com>
Some memory is allocated in nvme_allocate_request_user_copy() and submitted
through nvme_qpair_submit_request(). If the nvme ctrlr has failed or
the qpair state does not meet the requirements, the submission will return -ENXIO
and call nvme_free_request(), but that will not free
req->payload.contig_or_cb_arg; that memory only gets freed when the
request is actually completed, through nvme_user_copy_cmd_complete().
Fix this by adding a check when the submission fails.
Fixes issue #2832
Change-Id: I54f0fc60dbb53ced9f52da7d89017be13db2eee1
Signed-off-by: Fengnan Chang <changfengnan@bytedance.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15985
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Set rc to -ENXIO and goto error, making all error handling happen in one place,
so it's easy to add more checks in a later patch.
Change-Id: I13edeef75bbf6c52e18d6b94b78c2e560012bfee
Signed-off-by: Fengnan Chang <changfengnan@bytedance.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16004
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The spdk_nvme_ctrlr_opts now supports a transport_tos option
that allows setting of the 'type of service' value in the IPv4 header.
This is needed to support lossless RoCE setups.
Note: Only RDMA is supported at this point.
Change-Id: I21825fc197c60f539a7d2d651a970ea380d8b56d
Signed-off-by: Michael Haeuptle <michael.haeuptle@hpe.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15908
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Add spdk_nvme_cpl_get_status_type_string() to return ASCII
string for the type of an error.
Append a dummy entry to return "RESERVED" for unknown types.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ibc07132ee067f146ac149884c6344f313bfcbfff
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15835
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
spdk_nvme_cpl_get_status_string() will be used to count and display
NVMe specific errors via JSON-RPC. This patch is a preparation.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ia96890172d752d2906549e3033c0b26eef9c20bd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15834
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
However, when querying or resetting module-specific statistics,
the generic bdev layer has to access it.
For this purpose, add function pointers.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ie86d0a4a406cec7e0f1e9a62de5982cd3d877eae
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14839
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Define struct spdk_bdev_io_error_stat privately in lib/bdev/bdev.c.
Add a pointer to struct spdk_bdev_io_error_stat to struct
spdk_bdev_io_stat.
Allocate spdk_bdev_io_error_stat for bdev and RPC, but do not allocate
spdk_bdev_io_error_stat for I/O channel.
Dump the contents of spdk_bdev_io_error_stat only if its total is
non-zero.
As a result of these, only spdk_bdev_get_device_stat() can query
spdk_bdev_io_error_stat for the bdev_get_iostat RPC. This will be
acceptable.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Idae868afe65347a96529eedc3dcc692101de4a29
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14826
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The following patches will make some of the io_stat helper functions
public APIs. Then, for consistency, bdev_ + verb + _io_stat will
be a better naming rule.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: If36d4ed29253e87954c23c270e8414731d083f03
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15896
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Commit 41f59559e added code to skip adding EXITING connections
to the new poll group in the full_feature_migrate message
callback. The problem is that since the connection is in
EXITING state and is not in a poll group, it will never move
to EXITED state, nor get removed from g_active_conns, and
hence will block the iscsi subsystem from being able to
shutdown.
So instead, assert that the connection is not in EXITED
state. If it is in EXITING state, we will add it to the
poll group, and then when the poll group is next polled,
it will destroy the connection, moving it to EXITED
state and removing it from the g_active_conns STAILQ.
This fix is related to issue #2416.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ie8e64c811a5602ba4b28871bc535f5fa49dffc18
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16019
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Many parts of the blobstore.c seem to have gone with the assumption that
blob creation, deletion, etc. all happen on the md thread. This
assumption would allow modification of the bs->used_md_pages and
bs->used_clusters bit arrays without holding a lock. Placing
"assert(spdk_get_thread() == bs->md_thread)" in bs_claim_md_page() and
bs_claim_cluster() shows that each of these functions is called on other
threads due to writes to thin provisioned volumes.
This problem was first seen in the wild with this failed assertion:
bs_claim_md_page: Assertion
`spdk_bit_array_get(bs->used_md_pages, page) == false' failed.
This commit adds "assert(spdk_spin_held(&bs->used_lock))" in those
places where bs->used_md_pages and bs->used_clusters are modified, then
holds bs->used_lock in the places needed to satisfy these assertions.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I0523dd343ec490d994352932b2a73379a80e36f4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15953
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
A future commit will add to the complexity when returning with a
non-zero value. Rather than further complicating the several error
return locations, all affected error returns are handled after the error
label.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I56e8e338b0560f849399c085d0bb07efb7df26fb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15983
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
A future commit may need to release a lock before returning. This
refactors blob_resize() to always return at end of the function using an
out label and goto.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I671fbdbe0e3b766c264c45589dad3a864ba1f192
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15982
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Convert bs->used_lock to a spinlock. This is being done to help with
the debugging and fixing of a race that has led to a failed assertion in
bs_claim_md_page.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I11b80096de022f79a217c65d787ee57ca54240f9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15952
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
The bs->used_clusters_mutex protects used_md_pages, used_clusters, and
num_free_clusters. A more generic name is appropriate. The next patch in
this series will convert it from a mutex to a spinlock and having
"mutex" or "spin" in the name is of little help to maintainers, so a
more generic name is used.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I5ce7b85b84fdec2a0c5d2ac959e0109e1d80c7f5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15981
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Basic IO completion counting can be done at the common
layer, to enable some level of stat tracking even for
transports that don't have transport-specific tracking
yet.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: If04f854b97440089b8ad149b64cb59173c73975c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15912
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
For validation of upcoming DPDK releases, pci_dpdk needs
to initialize and work.
This patch adds support for testing DPDK main branch,
with appropriate notice given when that DPDK version is used.
Change-Id: I5257beac3a3926bd432d9c00e50858facd21e6f5
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15891
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Since SPDK holds local copies of DPDK headers for the DPDK PCI API,
the same headers will now be used as includes.
It was already the case for DPDK 22.11, but not for DPDK 22.07.
Change-Id: I5859a630d1fb20b4ebf8628adb962f5e46c23788
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15969
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Shortly after the DPDK 22.11 release, it was amended with a single
patch, which bumped the minor version.
No changes have occurred to the DPDK PCI API.
Change-Id: I94dadb23b3ad79cfbb21e848d718d909493137d1
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15890
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
As spdk_lvs_init() validates arguments, it uses o->cluster_sz in a
comparison but misleadingly prints opts.cluster_sz in the error message.
This changes the error message to print cluster_sz from the proper
structure.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I810bf9ad4a24ed7cc844c2835e0edda988cb2cbe
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15970
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
The internal mempools were replaced with the newly added iobuf
interface.
To make sure we respect spdk_bdev_opts's (small|large)_buf_pool_size, we
call spdk_iobuf_set_opts() from spdk_bdev_set_opts(). These two options
are now deprecated and users should switch to spdk_iobuf_set_opts().
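A hedged sketch of configuring the pools through the iobuf interface directly (the function and field names below are assumed to match struct spdk_iobuf_opts in this release):
```
#include "spdk/thread.h"

static int
configure_iobuf_pools(void)
{
	struct spdk_iobuf_opts opts;

	/* Start from the current defaults, then override the pool sizes. */
	spdk_iobuf_get_opts(&opts);
	opts.small_pool_count = 8192;
	opts.large_pool_count = 1024;

	/* Must be called before the pools are created during app init. */
	return spdk_iobuf_set_opts(&opts);
}
```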
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ib1424dc5446796230d103104e272100fac649b42
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15328
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
This is done in a couple of places, so it makes sense to extract it to a
separate function.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Id34b2545d9912c2b7b65b1277711e9683db92658
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15327
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Users can now specify a number of small/large buffers to be cached on
each iobuf channel. Previously, we relied on the cache of the
underlying spdk_mempool, which has per-core caches. However, since iobuf
channels are tied to a module and an SPDK thread, each module and each
thread is now guaranteed to have a number of buffers available, so it
won't be starved by other modules/threads.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I1e29fe29f78a13de371ab21d3e40bf55fbc9c639
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15634
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
The idea behind "iobuf" is to have a single place for allocating data
buffers across different libraries. That way, each library won't need
to allocate its own mempools, therefore decreasing the memory footprint
of the whole application.
There are two reasons for putting these kinds of functions in the thread
library. Firstly, the code is pretty small, so it doesn't make sense to
create a new library. Secondly, it relies on the IO channel abstraction,
so users will need to pull in the thread library anyway.
It's very much inspired by the way bdev layer handles data buffers (much
of the code was directly copied over). There are two global mempools,
one for small and one for large buffers, and per-thread queues that hold
requests waiting for a buffer. The main difference is that we also need
to track which module requested a buffer in order to allow users to
iterate over its pending requests.
The usage is fairly simple:
```
/* Embed spdk_iobuf_channel into an existing IO channel */
struct foo_channel {
	...
	struct spdk_iobuf_channel iobuf;
};

/* Embed spdk_iobuf_entry into objects that will request buffers */
struct foo_object {
	...
	struct spdk_iobuf_entry entry;
};

/* Register the module as iobuf user */
spdk_iobuf_register_module("foo");

/* Initialize iobuf channel in foo_channel's create cb */
spdk_iobuf_channel_init(&foo_channel->iobuf, "foo", 0, 0);

/* Finally, request a buffer... */
buf = spdk_iobuf_get(&foo_channel->iobuf, length,
		     &foo_object.entry, buf_get_cb);
...

/* ...and release it */
spdk_iobuf_put(&foo_channel->iobuf, buf, length);
```
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ifaa6934c03ed6587ddba972198e606921bd85008
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15326
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Recently, while updating the copyright notices
throughout the SPDK headers, the env_dpdk copies of
DPDK headers were modified too.
This patch brings them to the exact version as
in DPDK upstream.
Change-Id: If30b8556386a539d81d2fc1a5e42293522ed91f5
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15856
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Copies of headers for DPDK PCI API were created before the
actual DPDK 22.11 release. The rte_bus_pci.h header was
modified slightly with the addition of the rte_compat.h
include.
Please see relevant DPDK patch:
(1094dd9)cleanup compat header inclusions
This patch only makes the two align.
Change-Id: Ieb0397c6cf2d9027cf600bd0e064863b3782b846
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15855
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This patch adds a "--msg-mempool-size" option for SPDK apps to allow
the reactors' msg_mempool_size to be configured via the command line.
We tested rbd_bdev performance for Ceph CTX sharing with a high RBD volume count via bdevperf.
When testing with 256 volumes and limited Ceph CTX (e.g., 2 Ceph ctx for 256 volumes,
which are created through bdev_rbd_register_cluster),
the error message "*ERROR*: msg could not be allocated"
keeps showing and the bdevperf program hangs.
We traced the issue to the limited msg_mempool_size, which is hardcoded
via SPDK_DEFAULT_MSG_MEMPOOL_SIZE in thread.h.
Therefore, we enable the "--msg-mempool-size" option to allow a configurable msg_mempool_size.
Signed-off-by: Yue-Zhu <yue.zhu@ibm.com>
Change-Id: I54db7fd46247b2f18112bb994ecce6f4b7e5bf9c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15552
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
The description was swapped with the removal release, causing the logs to
look like this:
foo_bar: deprecation 'v23.05' scheduled for removal in foo.bar hit 1 times
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I422a35c5ec20c8a817bed0dd5d565dfc53ef6dc9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15874
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
The kernel vfio_pci driver module introduced a vf_token checking
mechanism in kernel version 5.7, and it has been supported by
DPDK. So add support for it to handle the VF scenario.
Signed-off-by: Jun Zeng <jun1.zeng@intel.com>
Change-Id: Ie9700fa395327da4e847c6213167284c148a64e3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14424
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
In _free_ctrlr(), ->endpoint can never be NULL, and the code was
self-contradictory; assume it's not NULL.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I81a449123ca05f64460380dc3a8ad8af2143d166
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15831
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Use standard "sqid" naming for a log message.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Icca8415cd17272ca7bd82667721c4131dd1df7f1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15828
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This fixes the behavior of spdk_vhost_(set|get)_coalescing() on
non-vhost-user devices.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia17cd4c0ed4bad262090e05f83727c1516c21f92
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15772
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The current code for setting/getting the coalescing settings only works with
vhost-user devices, while users can create virtio-blk devices with a
non-vhost-user transport. Calling spdk_vhost_(set|get)_coalescing() on
such a device results in a segfault.
So, the spdk_vhost_dev_backend interface is extended with methods to
set / get coalescing parameters. In the following patch, the virtio_blk
interface will also be extended with similar callbacks allowing us to
pipe coalescing settings to the appropriate transport.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ide5d5f633b17dcdbedb4b7804d5e45bf41373eca
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15771
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Both the length of a request and the number of ranges to copy are
controlled by the user, so we should check them and return an error
instead of asserting that they're correct.
This fixes the `test/nvmf/target/fabrics_fuzz.sh` test.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I3481c4bb1f2c7676df81f41dfc95ef063924222e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15805
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Found with misspell-fixer.
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: If062df0189d92e4fb2da3f055fb981909780dc04
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15207
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Use the deprecation API to annotate and log the deprecation of
spdk_nvme_ctrlr_prepare_for_reset() using the tag
"nvme_ctrlr_prepare_for_reset".
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I98fd840aa9acc028a49bb47daf4ab7e88f1eb818
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15756
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
srandomdev is *only* used to emulate arc4random, so only
bother defining it on Linux when it's needed. This avoids
unused-function errors on newer distros packaging glibc versions
that now define arc4random.
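A sketch of the intent; the feature-test macro and the fallback body below are assumptions used only to illustrate compiling srandomdev solely where arc4random is missing:

    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    #if defined(__linux__) && !defined(HAVE_ARC4RANDOM)   /* config macro assumed */
    static void
    srandomdev(void)
    {
            struct timespec ts;

            /* illustrative fallback seed; the real implementation may differ */
            clock_gettime(CLOCK_MONOTONIC, &ts);
            srandom((unsigned int)(ts.tv_sec ^ ts.tv_nsec ^ getpid()));
    }
    #endif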
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I6e64a697d9633709cedd0198f75cf094d514562d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15814
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This patch fixes issue #2809 by changing the max
completions processed per poll. A new parameter called
IDXD_MAX_COMPLETIONS is used to set the maximum completions
processed per poll to 128, because we observed performance
degradation on a system with 16 NVMe SSDs at a queue depth
of 64 per SSD. When using DSA to compute the data digest,
the target application can issue up to 1024 (16x64) requests
to compute the data digest concurrently to DSA. Limiting the
maximum completions processed per poll to 32 using
DESC_PER_BATCH caused up to 43% IOPS degradation.
Use IDXD_MAX_COMPLETIONS to control the number of
completions processed per poll in spdk_idxd_process_event
based on your workload. For example, if your application
is issuing thousands of concurrent requests to DSA you might
want to set IDXD_MAX_COMPLETIONS to a value higher than
128.
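For illustration only, the capped completion loop might look roughly like this; the channel type and helpers are placeholders, not the actual lib/idxd code:

    #include <stdbool.h>

    #define IDXD_MAX_COMPLETIONS 128

    struct example_idxd_channel;
    bool example_completion_ready(struct example_idxd_channel *chan);
    void example_handle_completion(struct example_idxd_channel *chan);

    static int
    example_process_completions(struct example_idxd_channel *chan)
    {
            int processed = 0;

            /* Drain at most IDXD_MAX_COMPLETIONS entries per poll so a single
             * call cannot monopolize the reactor under deep queue depths. */
            while (processed < IDXD_MAX_COMPLETIONS && example_completion_ready(chan)) {
                    example_handle_completion(chan);
                    processed++;
            }

            return processed;
    }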
Change-Id: I2a1db993283a83a20266f40dac851728d63e6127
Signed-off-by: John Kariuki <John.K.Kariuki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15801
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This is in prep for adding a new compressDev accel_fw
module that will contain all of the DPDK compressDev specifics;
the vbdev will make calls to the accel_fw instead.
As the accel_fw has SW based compression, we want the configure
option to apply to building the vbdev module but not to the accel_sw
software implementation or the upcoming compressdev module.
The option is renamed to "compress" because reduce is a term specific
to the vbdev implementation of the compression to be provided by the
accel_fw. For the same reason we leave the test flag called REDUCE:
it controls tests for the reduce library as well as the vbdev module
that uses reduce. The flag does not apply to the SW implementation
of compression.
This does not affect the upcoming accel_fw compressdev module, which
will have its own configure option.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: If8ed3e48e1e3dabcaad1cd161289e78122cd9d58
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15179
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Per specification.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: Ic93349c7d3ed50fa6e502e39db0347141804d4c4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15673
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
While lvs->destruct is set in a few places, it is never read. Since it
is not used, it is removed.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Iee21e92c9049d143fca13930b4b5f328f9ec38f0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15716
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Copy-on-write happens when a cluster is written for the first time on a
thin provisioned volume. Currently it is implemented as two separate
requests to the underlying bdev: a read of the whole cluster into a bounce
buffer and then a write of this buffer to the new location on the same
underlying bdev.
This patch improves the copy-on-write flow by utilizing the copy command
of the underlying bdev if it is supported. In this case we have just one
request to the bdev and don't need the bounce buffer.
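A minimal sketch of the new flow; the copy callback on struct spdk_bs_dev and the exact signatures are approximated from the description above, not taken verbatim from the blobstore code:

    #include <stdint.h>
    #include "spdk/blob.h"

    static void
    example_cluster_copy_on_write(struct spdk_bs_dev *dev, struct spdk_io_channel *ch,
                                  void *bounce_buf, uint64_t src_lba, uint64_t dst_lba,
                                  uint64_t lba_count, struct spdk_bs_dev_cb_args *cb_args)
    {
            if (dev->copy != NULL) {
                    /* one request to the underlying bdev, no bounce buffer needed */
                    dev->copy(dev, ch, dst_lba, src_lba, lba_count, cb_args);
                    return;
            }

            /* legacy flow: read the whole cluster into a bounce buffer; the write
             * of that buffer to dst_lba is issued from the read completion callback */
            dev->read(dev, ch, bounce_buf, src_lba, (uint32_t)lba_count, cb_args);
    }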
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I92552e0f18f7a41820d589e7bb1e86160c69183f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14351
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The new `translate_lba` operation allows translating a blob LBA to an LBA
on the underlying bdev. It recurses down the whole chain of bs_dev's. The
operation may fail to do the translation when the blob LBA is not backed
by a real bdev, for example when we eventually hit the zeroes device in
the chain.
This operation is used in the next commit to get the source LBA for the
copy operation.
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I89c2d03d1982d66b9137a3a3653a98c361984fab
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14528
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
In the following patches, nvme_rdma_poll_group_set_cq() will
touch not only the CQ but also the SRQ and receive WR objects.
All of these resources belong to a poller.
Hence, for clarity, rename nvme_rdma_poll_group_set_cq()
to nvme_rdma_qpair_set_poller().
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ic59ba5a45833e39b1b2647c000c8b953f1031d6b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14910
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Factor out the reset-failed-recvs operation into a helper function
nvme_rdma_reset_failed_recvs(). This will make the following
patches simpler.
For the send operation, this change is not required yet, but in the future
we may support something like a shared SQ. Hence, we make the same change
for the send operation too.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Ib44acebe63e97e5a60ea6fa701b49278c7f44b45
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14171
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
In the following patches, the poll group will have rsps objects, and an
option for creation will be used to share the code between the poll group
and the qpair. As a preparation, merge nvme_rdma_alloc_rsps() and
nvme_rdma_register_rsps() into nvme_rdma_create_rsps(). For consistency,
merge nvme_rdma_alloc_reqs() and nvme_rdma_register_reqs() into
nvme_rdma_create_reqs().
Update unit tests accordingly.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I92ec9e642043da601b38b890089eaa96c3ad870a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14170
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
When SRQ is supported, recv objects will be allocated by the poll group
and the qpair will be associated with them and use them. In this case, we
do not want the qpair to allocate and free recv objects. Whether SRQ is
used or not is decided when the connection is established. Hence, defer
recv objects allocation until the connection is established.
Send objects are not affected directly by SRQ, but
nvme_rdma_register_reqs() no longer does any registration, and deferring
send objects allocation makes the code more consistent. Hence, defer
send objects allocation until the connection is established too.
Even after this patch, we rely on nvme_rdma_ctrlr_delete_io_qpair()
to free resources completely.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Ic151fad01009d92a7fc809a730e6e9dff1a365f3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14169
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Response objects will be in the poll group when SRQ is enabled. But we want
to share the code that allocates and registers response objects between the
SRQ-enabled and SRQ-disabled cases. To do it cleanly, move
nvme_rdma_qpair_submit_recvs() from nvme_rdma_register_rsps() to
nvme_rdma_connect_established(). A few error handling cleanups are
done in this patch. Unregistration will be done when the qpair is
disconnected.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I38dc5a6cb84a6bf56c01d5fb7f2cf3d3b63918e0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14168
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This will make the following patches simpler.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Id3d7c025525b35c1c2b96027430789a8d8f2697b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14422
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Inline nvme_rdma_post_recv() into the callers.
We do not have any similar helper function for posting a send WR.
This will make the following patches simpler and is reasonable.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ia95a4b350942d20bdb65e84f7575c2dcf67c149b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14421
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Extract and inline the conditional nvme_rdma_qpair_submit_sends()
and nvme_rdma_qpair_submit_recvs() calls.
This will clarify the logic and make the following patches simpler.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ibe217c6f4fb2880af1add8c0429f92b4de107da8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14420
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
When SRQ is supported, the rsp array will be in either the qpair or the poller.
To make this difference transparent, rdma_req caches rdma_rsp and
rdma_rsp caches recv_wr directly, instead of caching indices.
Additionally, do a very small cleanup together:
spdk_rdma_get_translation() gets a translation for a single entry
of the rsps array, so it is more intuitive to use the rsp.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I61c9d6981227dc69d3e306cf51e08ea1318fac4b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13602
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Factor out processing recv completion and send completion into helper
functions to make the following patches simpler.
Additionally, invert if condition to check if both send and recv are
completed to make the following patches simpler.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Idcd951adc7b42594e33e195e82122f6fe55bc4aa
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14419
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Both max and min should be reset periodically. We could use the queue
depth sampling poller to reset these, but that poller is optional.
So we extend the bdev_reset_iostat RPC to support a mode that resets
either all fields or only the max/min fields.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I9ce54892f6e808f6a82754b6930092f3a16d51ff
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15444
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Add max/min_read/write/unmap_latency_ticks into the struct
spdk_bdev_io_stat.
When initializing or resetting the instance of the struct
spdk_bdev_io_stat, initialize max to 0 and min to UINT64_MAX.
Then update max if a new value is larger than the current max,
and update min if a new value is smaller than the current min.
For the bdev_get_iostat RPC, max is printed as-is, and min is printed
if it is not UINT64_MAX, or 0 otherwise.
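A sketch of the bookkeeping described above; the helper itself is illustrative, while the stat field names follow the max/min_*_latency_ticks naming from this patch:

    #include <stdint.h>
    #include "spdk/bdev.h"
    #include "spdk/util.h"

    static void
    example_update_read_latency(struct spdk_bdev_io_stat *stat, uint64_t tsc_diff)
    {
            /* max starts at 0 and min starts at UINT64_MAX on init/reset */
            stat->max_read_latency_ticks = spdk_max(stat->max_read_latency_ticks, tsc_diff);
            stat->min_read_latency_ticks = spdk_min(stat->min_read_latency_ticks, tsc_diff);
    }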
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I1b30b3825c15e37e9f0cf20104b866186de788a2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14825
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Add a helper function bdev_reset_device_stat() to reset I/O statistics.
This function is used for the bdev_reset_iostat RPC.
We do not have any plan to use bdev_reset_device_stat() outside
lib/bdev. Hence, we do not add this as a public API.
Then, add a new RPC bdev_reset_iostat to reset I/O statistics of a
single bdev or all bdevs.
Resetting I/O statistics affects all consumers. Add a note to CHANGELOG
and doc/jsonrpc.md.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I97af09107b5c3ad1f9c19bf3cbf027457c4fbae7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15350
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Hold not only the io_stat pointer but also num_blocks and blocklen in local
variables.
This will shorten and simplify bdev_io_update_io_stat(), and improve
readability and changeability.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I527b72538a169a1faafd32863ff539306a8763a9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15732
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The following patches will add max/min latencies and more optional
counters. This factorization will improve the readability.
In addition to the factorization, add spdk_likely() to the check of whether
the I/O completed successfully.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I57581ece2b73d486aa138f8d26a5afaf6953a322
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15480
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
For consistency, rename a JSON dump function to bdev_io_stat_dump_json()
and change the parameter order.
Other public APIs and function pointers in the generic bdev layer,
spdk_bdev_dump_info_json(), spdk_bdev_fn_table::dump_info_json, and
spdk_bdev_fn_table::write_config_json have a json_write_ctx pointer
as the last parameter. For consistency, swap a statistics pointer and
a json_write_ctx pointer.
This is another preparation to extend I/O statistics to include error
counters and module specific counters to output these via the
bdev_get_iostat RPC.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I6f3bb6f2752f7da856d4fe66c0f1f8a2eedc176b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15731
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Move the JSON dump function bdev_get_iostat_dump() for I/O statistics
into lib/bdev/bdev.c.
The next patch will rename the function and change the parameter order.
This is another preparation to extend I/O statistics to include error
counters and module specific counters to output these via the
bdev_get_iostat RPC.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I6a90d15fcbaa2e2a250167754135623bc9e7f362
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14837
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Add helper functions, bdev_io_stat_alloc(), bdev_io_stat_free(),
and bdev_io_stat_get() for struct spdk_bdev_io_stat.
Then replace a bdev_io_stat_add() call with bdev_io_stat_get() in
spdk_bdev_get_device_stat() because the saved data is queried first.
This is another preparation to extend I/O statistics to include error
counters and module specific counters to output these via the
bdev_get_iostat RPC.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I9547757421a1de1b8cb44e0f8ade4b5c2bcad4e6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15443
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The following patches will extend I/O statistics to include error
counters and module specific counters to output these via the
bdev_get_iostat RPC.
In this case, the size of struct spdk_bdev_io_stat will be variable.
As a preparation, allocate spdk_bdev_io_stat dynamically.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I1979a9d867859d5cb5d05717bfcc677f07fa03f8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15479
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
The following patches will extend I/O statistics to include error
counters and module specific counters to output these via the
bdev_get_iostat RPC.
In this case, the size of struct spdk_bdev_io_stat will be variable.
As a preparation, allocate spdk_bdev_io_stat dynamically.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I50b57f792b451cf748ea8eb0611fe65d693d5a14
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15478
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
The following patches will extend I/O statistics to include error
counters and module specific counters to output these via the
bdev_get_iostat RPC.
In this case, the size of struct spdk_bdev_io_stat will be variable.
As a preparation, allocate spdk_bdev_io_stat dynamically. For the
per_channel mode, we can share the bdev_ctx->stat because
spdk_bdev_get_io_stat() always overwrites stat.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I51cd550f52dc3b7d0f3f825fd48bcbeb3ecdcff2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14836
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
When use of deprecated features is encountered, SPDK now calls
SPDK_LOG_DEPRECATED(). This logs the use of deprecated functionality in
a consistent way, making it easy to add further instrumentation to catch
code paths that trigger deprecated behavior.
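A rough usage sketch; the macro pair follows the deprecation API described here, but the exact argument list (tag, description, removal release, rate limit) is an assumption:

    #include "spdk/log.h"

    /* register the deprecation once, at file scope */
    SPDK_LOG_DEPRECATION_REGISTER(example_old_feature, "example old feature",
                                  "v23.05", 0);

    static void
    example_old_code_path(void)
    {
            /* each hit is counted and logged in the consistent format */
            SPDK_LOG_DEPRECATED(example_old_feature);
    }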
Change-Id: Idfd33ade171307e5e8235a7aa0d969dc5d93e33d
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15689
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
The mobj is allocated from pdu_data_out_pool. If pdu_data_out_pool is
exhausted, then when the pdu is polled the next time,
iscsi_pdu_payload_read returns -1 because data_buf_len has been
modified, and the connection is released.
Signed-off-by: lizengwu <786436671@qq.com>
Change-Id: I3ee65472f7ddaa357d7952a5b734540f0bc0b216
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15626
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Previously we used this flag to avoid calling `vhost_dev_unregister`
twice in `subsystem_fini`. The DPDK vhost library already checks for
this, so we don't actually need the flag for that purpose. However,
there is a race condition between adding a new connection and
unregistering the socket file in different threads, so as a first patch
we just move the flag to the vhost-user device, and a coming patch will
then use it.
Change-Id: I658712dd20331a2e2eb5f4758bf76f748036a131
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15482
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
`vhost_user_dev_unregister` will check if the device is busy,
so we don't need to check `user_dev->pending_async_op_num` here.
As for `vdev->registered`: with this check here, we can remove a
device even if it doesn't have a valid QEMU connection, and since
vhost-scsi supports the hotplug feature, we don't need to check this
flag either when it has a valid QEMU connection.
Change-Id: I50cdeb5ca544e2ed93a1bc99ec3da8787a9e5df5
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15481
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Feng Li <lifeng1519@gmail.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Replace comments saying that particular locks must be held with
assertions that enforce that those locks are held. Remove the comments
so that there is no chance of comments and code getting out of sync in
the future.
This also fixes a caller of bdev_close() that did not hold a required
lock.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I3a540f1ad9b9826f925c523986334aa8fcd302f2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15440
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This exercises the parts of spdk_spin_*() that are difficult to test in
unit tests. In particular, it tests multiple SPDK threads running on
different pthreads contending for a lock and it tests pollers and
messages going off CPU with a lock held.
Change-Id: I5cd6ce29c92c44ba63f47332fe339e59eed81553
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15534
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
This introduces an enhanced spinlock that adds safeguards compared to
the default pthread_spinlock_t. In particular:
- A pthread_spinlock_t is still used, but additional error checking is
performed to ensure there is no undefined behavior on relock,
unlocking when not the owner, or destroying a locked lock.
- The SPDK concurrency model allows an SPDK thread to be migrated
between pthreads. Releasing a pthread spinlock on a different thread
from where it is taken is undefined behavior. If an SPDK spinlock is
held at the time a poller or message returns control to
thread_poll(), the program will abort.
- SPDK spinlocks can only be obtained from an SPDK thread.
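A minimal usage sketch of the API described above, assuming the spdk_spin_init/lock/unlock naming; the lock may only be used from SPDK threads and must be released before control returns to thread_poll():

    #include <stdint.h>
    #include "spdk/thread.h"

    struct example_shared_state {
            struct spdk_spinlock lock;
            uint64_t counter;
    };

    static void
    example_init(struct example_shared_state *state)
    {
            spdk_spin_init(&state->lock);
            state->counter = 0;
    }

    static void
    example_bump(struct example_shared_state *state)
    {
            spdk_spin_lock(&state->lock);
            state->counter++;
            /* must not return to thread_poll() while the lock is held */
            spdk_spin_unlock(&state->lock);
    }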
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I6dd6493ab5f5532ae69e20654546405a507eb594
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15277
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
There is no need to call nvme_tcp_read_pdu in a loop:
nvme_tcp_read_pdu itself has a loop in it that runs until no more data
is available, so the extra loop does nothing.
Change-Id: I1471018e396c43187d1f06bd18ce8a6846a71c94
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15139
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Calling spdk_bdev_io_get_buf() from a thread other than the one that
submitted the bdev_io is unsafe, because we touch the need_buf_* queues,
which aren't thread-safe. Also, document this requirement in
spdk_bdev_io_get_buf()'s description.
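For reference, a sketch of the usual pattern in a bdev module; the module function names are placeholders, and per this patch the spdk_bdev_io_get_buf() call must happen on the same thread that submitted the bdev_io:

    #include "spdk/bdev_module.h"

    static void
    example_read_get_buf_cb(struct spdk_io_channel *ch, struct spdk_bdev_io *bdev_io,
                            bool success)
    {
            if (!success) {
                    spdk_bdev_io_complete(bdev_io, SPDK_BDEV_IO_STATUS_FAILED);
                    return;
            }
            /* bdev_io->u.bdev.iovs now points at the buffer; start the actual read */
    }

    static void
    example_submit_request(struct spdk_io_channel *ch, struct spdk_bdev_io *bdev_io)
    {
            if (bdev_io->type == SPDK_BDEV_IO_TYPE_READ) {
                    /* must run on the thread the bdev_io was submitted on */
                    spdk_bdev_io_get_buf(bdev_io, example_read_get_buf_cb,
                                         bdev_io->u.bdev.num_blocks * bdev_io->bdev->blocklen);
            }
    }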
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Iabc141e051c543fdd51f079ae212f69e980d8148
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15668
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Add an RPC method trace_get_info to show the name of the shared memory
file, the list of available trace point groups, and the mask of available
trace points for each group.
Fixes #2747
Signed-off-by: Xinrui Mao <xinrui.mao@intel.com>
Change-Id: I2098283bed454dc46644fd2ca1b9568ab2aea81b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15426
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
All the other spdk_sock_* functions return -1 and set errno
appropriately, so we should do the same in flush().
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I51cda2c51974c72e82531f06fa31ab89b2329c91
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15642
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If a lot of qpairs are connected all at once, the
RDMA optimal_poll_group logic does not work correctly,
because it only accounts for qpairs that received
their CONNECT capsule. Now that we have a counter
for a poll group's unassociated qpairs, use that value
to supplement the current io qpair count.
We can just assume for now that all of these unassociated
qpairs are io qpairs. That won't always be true, but
for purposes of picking the optimal poll group it is
sufficient.
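Purely as an illustration of the selection logic described above (the structures and member names below are placeholders, not the real transport code):

    #include <stdint.h>

    struct example_pg {
            struct example_pg *next;
            uint32_t current_io_qpairs;            /* qpairs that completed CONNECT */
            uint32_t current_unassociated_qpairs;  /* accepted, CONNECT not yet seen */
    };

    static struct example_pg *
    example_pick_optimal_poll_group(struct example_pg *head)
    {
            struct example_pg *pg, *best = head;
            uint32_t load, min_load = UINT32_MAX;

            for (pg = head; pg != NULL; pg = pg->next) {
                    /* treat not-yet-associated qpairs as io qpairs for load purposes */
                    load = pg->current_io_qpairs + pg->current_unassociated_qpairs;
                    if (load < min_load) {
                            min_load = load;
                            best = pg;
                    }
            }

            return best;
    }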
Note that for RDMA, we could increment the counters
based on the RDMA qpair ID in the private data in the
rdmacm connect, but to keep the code simpler and common
across all transports, we defer the accounting until
after receiving the CONNECT command, so that it is
the same for all transports.
Fixes issue #2800.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5897d6ebac23d3b78b100e3fef5a7f9fb5304820
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15695
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Use a local variable to hold the qpair count.
While here, also use pg_current to get the min_value;
this is a bit simpler to read than expressions like
(*pg)->group.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I65771fb469f021e9e77b8a6c117841b8f4b66af5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15694
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
We make decisions on how to pick a poll group for a new
qpair by looking at each poll group's current_io_qpairs
count. But this count isn't always accurate since it
doesn't get updated until after the CONNECT has
been received.
This means that if we accept a bunch of connections
all at once, they may all get assigned the same poll
group, because the target poll group's counter doesn't
get immediately incremented.
So add a new counter, current_unassociated_qpairs,
to account for these qpairs. We protect this counter
with a lock, since the accept thread will increment
the counter, and the poll group thread will
decrement it when the qpair receives the CONNECT,
allowing us to associate it with a subsystem/controller.
If the qpair gets destroyed before the CONNECT is
received, we can use the qpair->connect_received
flag to decrement current_unassociated_qpairs.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I8bba8da2abfe225b3b9f981cd71b6f49e2b87391
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15693
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Currently we use qpair->ctrlr at qpair destroy
time to decide if we need to decrement the
qpair's poll group's qpair count. But this is
not correct - these counters get incremented
when the CONNECT is received, but qpair->ctrlr
doesn't get set until later.
So add a new connect_received bool to the spdk_nvmf_qpair.
Use this instead to determine when we should decrement
the poll group qpair counters.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I174a0fda36c4558171953bf58f2f5117bc074f76
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15692
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
At least recent Linux guest VMs send SPDK_NVME_IDENTIFY_CTRLR_IOCS as a
matter of course. While this isn't supported in lib/nvmf, as this
doesn't represent an error, reduce the log level of the error message so
we don't spam the logs.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I095de3e4331b3912cbc457da6d722b9883ec7884
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15646
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
The issue is found in virtio_pci_scsi_dev_create(), whose
error path sets vdev->ctx to NULL before the
destruct operation.
Change-Id: I4ab0fbe300f7413ad4503833088856aa3f4c0734
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15676
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Optimal I/O boundary causes I/O to be split in the nvme driver. This is
a problem for writes if write_unit_size > 1 because the split I/O may
not match the write_unit_size.
Fixes: #2791
Change-Id: I437e6cb6d8e2415658d5b46539feeacb5363fd46
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15627
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
NVMf target reports copy command support if all bdevs in the subsystem
support copy IO type. Maximum copy size is reported for each namespace
independently in namespace identify data. For now we support just one
source range.
Note that command support in the controller is initialized once on
controller create. If another namespace which doesn't support copy
command is added to the subsystem later, it will not be reflected in
the controller data structure and will not be communicated to the
initiator. An attempt to execute a copy command on such a namespace will
fail. This issue is not specific to copy command and applies also to
write zeroes and unmap (dataset management) commands.
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I5f06564eb43d66d2852bf7eeda8b17830c53c9bc
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14350
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The correct SPDK thread is already contained in the poll group.
Signed-off-by: Thanos Makatos <thanos.makatos@nutanix.com>
Change-Id: I4eefe2ba60c77c01a866a693bccbb8affc8262ed
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15546
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Fixes #2781
This patch fixes two issues causing a segfault on R2T:
1. The pdu buffer is allocated from immediate_data_pool, but data_buf_len is set as for data_out_pool.
2. task->desired_data_transfer_length is rewritten by iscsi_send_r2t, which causes a wrongly calculated pdu->data_buf_len.
Signed-off-by: melon.masou <melon.masou@outlook.com>
Change-Id: I151859afff7104f29ad7f0ec57a8479d88b742bd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15542
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Added a new API 'spdk_bdev_histogram_get_channel' to get the histogram of
a specified channel for a bdev. A callback function is passed to it
to process the histogram.
Change-Id: If5d56cbb5fe6c39cda7882f887dcc9c6afa769ac
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15539
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Print out the specific values in this SPDK_ERRLOG;
this can help find where the error is.
Change-Id: I2a38aa2d4270e0bbf554ddb348a73d40967d1b16
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15618
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Community-CI: Mellanox Build Bot
If the calloc failed, the fd was left in the fd_group.
Change-Id: Ie68426a13d342756c20315656f0309440fda6e02
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15475
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Thanos Makatos <thanos.makatos@nutanix.com>
Reviewed-by: John Levon <levon@movementarian.org>
SPDK threads generally run on dedicated cores and locks should be rarely
contended. Thus, putting a thread to sleep while waiting on a mutex does
not free up CPU cycles for other pthreads or processes. Even when
running in interrupt mode, lock contention should be low enough that
spinlocks are a net win by avoiding context switches.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I6e2e78b2835bbadb56bbec34918d998d75280dfd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15438
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Also add unit tests that explicitly test this
condition. They fail without the nvme driver changes
in this patch.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iaa369be341eb4eba394f248990e56dce001d3940
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15579
Reviewed-by: Mariusz Barczak <mariusz.barczak@intel.com>
Reviewed-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
For now, just print a loud warning when this case is
violated. We will add a hard assertion and cause the app
to exit with error status in a later release.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic9226f76a4729820f13a2728bea977b6a54f48ee
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15513
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This function can be useful to query if a thread
has had spdk_thread_exit() called on it yet. Internally
we have both EXITING and EXITED state - so
!spdk_thread_is_running() can be used to detect a
thread that is in either of those states.
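A small sketch of one way a caller could use it (the wrapper and its arguments are illustrative; spdk_thread_is_running() is assumed to return a bool):

    #include <errno.h>
    #include "spdk/thread.h"

    static int
    example_send_if_running(struct spdk_thread *thread, spdk_msg_fn fn, void *ctx)
    {
            if (!spdk_thread_is_running(thread)) {
                    /* thread is EXITING or EXITED; don't hand it new work */
                    return -ENODEV;
            }

            return spdk_thread_send_msg(thread, fn, ctx);
    }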
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I2f6fb024a6b1bc895fdc5132c722abc10f5d30f9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15512
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This was an accidental remnant from the original
check-in, when we did not have a clear differentiation
between the event and thread libraries.
The rocksdb plugin code will send events to an
lcore - not an SPDK thread. But originally the two
were combined through an API called spdk_allocate_thread.
Once the differentiation was clearly made, we moved to
using spdk_event_allocate() to send events to a specific
lcore, but never removed the spdk_thread.
So now let's just remove the spdk_thread_create() call since
it is not needed.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5c6a3c304b7b4183eee90038367fdea7ebd7280f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15504
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This requires creating and setting SPDK threads in the
subsystem unit tests as well.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I31acfb1d7e418f011acc9b48933032d8bf8a1c53
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15511
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Currently when a vhost-scsi controller is removed, it
calls spdk_vhost_scsi_dev_remove_tgt on all remaining
targets, and then immediately calls vhost_dev_unregister.
But this path goes into vhost_user_dev_unregister which
immediately returns with an error if there are any pending
async operations - and there are since scsi_dev_remove_tgt
is asynchronous.
So instead add the vhost_dev_unregister call to
remove_scsi_tgt, so that the unregister only happens
after the last ref goes away.
This requires changing vhost_fini() to no longer
assume that spdk_vhost_dev_remove() will immediately
unregister the device, since it now happens
asynchronously. Previously vhost_fini() was making
this assumption erroneously - it would call g_fini_cb
without actually checking that the devices had been
unregistered. Because of that incorrect assumption,
we need to do both the vhost and vhost-scsi changes
in the same patch.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I9577901266975447f9acfe53475221113f02fea3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15510
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
At end of spdk_thread_poll(), if thread is in EXITING state,
we call thread_exit() to see if the thread can move to
EXITED state. If there are any pollers, io_channels
or pending device unregistrations in progress, thread_exit()
will keep the thread in EXITING mode for this iteration.
But a thread may post messages to itself during this cleanup
process, so thread_exit() should also check if there are
any messages on its queue.
Found during testing of spdk_thread lifetime patch set.
rbd bdev module will send messages to itself like this
during cleanup. Without this change, rbd module testing
with bdevperf would cause an spdk_thread to move to
EXITED state prematurely.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ie611026a67b7fa48640ae83be03e29a9c64883a2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15533
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If a connection is established and we receive a bad
PDU before successful login, the login_timer would not
get unregistered. So ensure the login_timer is always
unregistered in _iscsi_conn_destruct().
Found with Calsoft tests during new spdk_thread_exit()
assertion testing. Lack of unregistration would result
in its associated spdk_thread being unable to exit
cleanly due to the unexpired timer.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I79d427512f7829ad76bf89155e0e14c7bce3a7d7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15499
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The "app thread" will always be the first
thread created using spdk_thread_create(). There
are many operations throughout SPDK that implicitly
expect to happen in the context of this app thread,
so by formalizing it we can start to make assertions
on this to help clarify and simplify locking and
synchronization through the code base.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I7133b58c311710f1d132ee5f09500ffeb4168b15
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15497
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Next patch will add a new caller to this function.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I54374c0af3a4a0fdcc5ac9ca25e2c7ef03e99829
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15576
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
`BDEV_IO_NUM_CHILD_IOV` and `BDEV_RESET_IO_DRAIN_RECOMMENDED_VALUE`
are public macro definitions without `SPDK_` prefix, so we add the
`SPDK_` prefix to them.
Change-Id: I4be86459f0b6ba3a4636a2c8130b2f12757ea2da
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15425
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The bdev hot remove might be an async process. bdev_open will
return an error during the hot remove process. If someone invokes the
bdev_get_bdevs API when a bdev is in the middle of a hot remove
process, the spdk_for_each_bdev function will stop its loop when a
bdev_open returns an error. Thus bdev_get_bdevs will only return
partial bdevs, or even return an empty list if the hot removed bdev is
the first bdev in the loop. When spdk_for_each_bdev and
spdk_for_each_bdev_leaf loop over the bdevs, if a bdev returns an
error, skip that bdev instead of stopping the whole loop.
Signed-off-by: Peng Yu <yupeng0921@gmail.com>
Change-Id: Ib35b817e23e47569fc5762a883b4ff8e322ae173
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15322
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If the guest performs a hard shutdown we're not deleting the CQs:
nvmf_vfio_user_close_qpair calls delete_sq_done, which won't delete
the CQ because vu_ctrlr->reset_shn is false.
Signed-off-by: Thanos Makatos <thanos.makatos@nutanix.com>
Change-Id: I383fb985340a0d9d0eb7fea7403372cbdc55a089
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15387
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This eventfd may be passed by libvfio-user to the remote process which
might remove the EFD_NONBLOCK flag, in which case we would block
indefinitely.
Signed-off-by: Thanos Makatos <thanos.makatos@nutanix.com>
Change-Id: If9826cd700b4a7b3458a0a8278a96322d99ac08e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15385
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This patch introduces the function spdk_fd_group_get_epoll_event, which
returns the epoll(7) event that caused the file descriptor group
callback function to execute. Rather than changing the signature of
spdk_fd_fn in order to pass the struct epoll_event, which would result
in a gigantic patch where the vast majority of users would simply have
to ignore the new argument, we introduce this new API that allows
returning the epoll_event only when really needed.
Signed-off-by: Thanos Makatos <thanos.makatos@nutanix.com>
Suggested-by: John Levon <john.levon@nutanix.com>
Change-Id: I3debe1382d1c2bfec6ae4fea274ee38ed0b135fe
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14935
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
The ftl_md_get_buffer_size returns the buffer size in bytes, so we
should divide by the block size, instead of this smaller value. This risks
touching bad memory during dirty shutdown recovery, especially in >16TiB
drives.
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
Signed-off-by: Mariusz Barczak <mariusz.barczak@intel.com>
Change-Id: I4095b00a79a1bdbce5046dc46349a9670e41b18e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15259
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
A metadata region without mirror should have the INVALID enum set,
otherwise it risks touching invalid parts of the array.
The sb_shm_md not being set to NULL could cause the code to touch this
freed pointer in the error path in ftl_md_create -> ftl_md_create_shm ->
ftl_md_invalidate_shm calls.
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
Signed-off-by: Mariusz Barczak <mariusz.barczak@intel.com>
Change-Id: I7fe9694dad535de5f6b2a4af27400fa125480605
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15258
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Bumping the IO activity statistics during relocation, compaction, L2P
cache processing and user IO handling. This makes sure poller busy
counter is more accurate.
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
Change-Id: Iabf8ec7ca41c01d7a00d3a70825b8d5283ab2bf1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15257
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Community-CI: Mellanox Build Bot
NVMe controllers can be marked as removed even if we cannot receive
uevents (e.g. by the VMD driver), so we should process them regardless
of hotplug_fd.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Iaaf13a136929200e824f7a6dd3b5584998801630
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15547
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tom Nabarro <tom.nabarro@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I2d3825ffcce098909745ba949cdde3eb7f71c703
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15545
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tom Nabarro <tom.nabarro@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The previous 14B buffer was too small for VMD devices.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ib3984f7104fadbb2fbf7ec56932675d73eda1456
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15532
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tom Nabarro <tom.nabarro@intel.com>
We should always build all functions that are part of the API, even if
some of the libraries they depend on are missing. In that case, they
can return an error instead.
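A sketch of the resulting pattern; the function and config macro names below are placeholders:

    #include <errno.h>
    #include "spdk/log.h"

    int
    spdk_example_call(void)
    {
    #ifdef SPDK_CONFIG_EXAMPLE_LIB          /* placeholder config flag */
            return example_lib_call();      /* real work when the library is present */
    #else
            SPDK_ERRLOG("example library support is not compiled in\n");
            return -ENOTSUP;
    #endif
    }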
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I72b450b3a1d62e222bd843e45be547d926414775
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15414
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Remove automatic generation of UUIDs for bdevs
that do not provide this value themselves.
This is to clarify whether this field can be
depended upon.
Modified match files to reflect change in UUID
generation.
Disabled the nullglob shell option, as it deletes
empty arrays during word splitting. Bdevs with no
aliases would have a null pointer printed instead of "[]",
which makes the resulting JSON invalid.
Part of enhancement proposed in #2516.
Change-Id: Ic1d5f8f8d001ae1a219e876aef2a19b1ff0b2f2c
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15150
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Community-CI: Mellanox Build Bot
Add handling for spdk_accel_get_opc_module_name() returning 0,
and remove SPDK_RPC_STARTUP, because otherwise a core dump occurs
when running nvmf_tgt with --wait-for-rpc and without the
framework_start_init RPC.
Fixes issue #2770
Change-Id: I1c53ccb8caa52f2eaa0b8b560a021bded49d8fed
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15377
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Add helper functions, bdev_iostat_ctx_alloc() and bdev_iostat_ctx_free()
for the bdev_get_iostat RPC.
The following patches will allocate spdk_bdev_io_stat dynamically for
bdev_get_iostat_ctx.
This is a preparation for that.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ib71d6fb92d8134d2282507e62874f19045b630b7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15442
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The bdev_get_iostat RPC uses two types of contexts, one to manage the
progress of the bdev_get_iostat RPC and another to call
spdk_bdev_get_device_stat().
However, this was hard to see from the source code.
To make it easier to find, rename the former to rpc_ctx and the
latter to bdev_ctx, and rename related functions and variables accordingly.
Furthermore, relocate the request and decoder declarations to improve
readability.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I3472c87fe4ec1f5981a49ef79148534fbb1d46c4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15349
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
RPC parameters and decoders for the bdev_get_iostat RPC are used only
by rpc_bdev_get_iostat(). Locating them close to rpc_bdev_get_iostat()
makes this clearer. Furthermore, it will simplify code review for the
next patch.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I1b1b428e3eb3bb4422e490c5f4324f0e40f9710f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15416
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
For I/Os controlled by QoS, TRACE_BDEV_IO_DONE is collected after
redirecting to the original thread. Hence, TRACE_BDEV_IO_START should
be collected on the original thread too.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I15411be823450ee5ddaa7582509a7aa068476fc5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14824
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The 2211 implementation only gets used when SPDK detects at
runtime that the DPDK version is 22.11. But we still
compile this file even when building against an
older DPDK.
This is typically fine, except that some interrupt
APIs changed in DPDK 21.11, so older DPDKs don't
have some of the functions used in this file. We need
to use ifdefs to allow this to compile.
We will need some more work to handle this case properly,
but this patch at least fixes the 2211.c case for now.
We will probably need a 2108.c file that exactly matches
the 2207.c file except for these interrupt API changes.
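A minimal sketch of the kind of guard this implies (assuming the usual
DPDK version macros; the exact code in 2211.c may differ):
#include <rte_version.h>

#if RTE_VERSION >= RTE_VERSION_NUM(21, 11, 0, 0)
/* Use the interrupt accessor APIs introduced in DPDK 21.11. */
#else
/* Older DPDK: fall back to the pre-21.11 interrupt handling. */
#endif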
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I6055694ccbb79845798e750ebb7127ec6c160e2e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15236
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Update the code that reads the virtual address width to use glob to locate
the Intel and AMD iommu capability registers. This code should work for all
AMD NUMA configurations.
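A hedged sketch of the glob-based lookup (the exact sysfs patterns here
are assumptions for illustration and may differ from the actual change):
#include <glob.h>
#include <stddef.h>

static void
for_each_iommu_cap(void (*cb)(const char *path))
{
	static const char *patterns[] = {
		"/sys/class/iommu/dmar*/intel-iommu/cap",
		"/sys/class/iommu/ivhd*/amd-iommu/cap",
	};
	glob_t g;
	size_t i, j;

	for (i = 0; i < sizeof(patterns) / sizeof(patterns[0]); i++) {
		if (glob(patterns[i], 0, NULL, &g) != 0) {
			continue;
		}
		for (j = 0; j < g.gl_pathc; j++) {
			/* Each matched file holds an iommu capability register. */
			cb(g.gl_pathv[j]);
		}
		globfree(&g);
	}
}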
Fixes issue 2730
Signed-off-by: Michael Piszczek <mpiszczek@ddn.com>
Change-Id: Ibf5789087b7e372d892b53101e4c0231809053f0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14961
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Community-CI: Mellanox Build Bot
Add an optional allowlist for RPC methods: if a method is not listed,
it cannot be called and is not visible. This can be used to prevent
accidental misconfiguration, and generally helps lock down the
configuration surface.
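A generic sketch of the allowlist check (not the actual SPDK code; the
names and the empty-list-means-unrestricted semantics are illustrative
assumptions):
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static bool
rpc_method_allowed(const char *method, const char **allowlist, size_t count)
{
	size_t i;

	if (count == 0) {
		/* No allowlist configured: everything is permitted. */
		return true;
	}
	for (i = 0; i < count; i++) {
		if (strcmp(method, allowlist[i]) == 0) {
			return true;
		}
	}
	return false;
}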
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Ied78fc4b14b60cb94ed0852b92deb6df545cbec4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15275
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
per Intel policy to include file commit date using git cmd
below. The policy does not apply to non-Intel (C) notices.
git log --follow -C90% --format=%ad --date default <file> | tail -1
and then pull just the 4 digit year from the result.
Intel copyrights were not added to files where Intel either had
no contribution or the contribution lacked substance (i.e. license
header updates, formatting changes, etc). The contribution date used
"--follow -C95%" to get the most accurate date.
Note that several files in this patch didn't end the license/(c)
block with a blank comment line so these were added as the vast
majority of files do have this last blank line. Simply there for
consistency.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: Id5b7ce4f658fe87132f14139ead58d6e285c04d4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15192
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Allow CPU core locks to be enabled and disabled
during runtime. This feature will be useful
in cases like SPDK hot upgrade, where
locking should be disabled temporarily.
Change-Id: I9bc7292fd964abffc7214d074d191f38b13583c3
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15031
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
When running an SPDK application on a given set of
CPU cores, create a lock file for each of them.
This will prevent user misconfiguration and
assigning a core to more than one SPDK instance.
The introduced mechanism is based on the device locks
implemented in the spdk_pci_device_claim() function.
Add a command line option to disable lock files.
This will be useful in cases where using distinct
CPU cores is impossible (e.g. a setup with only one core
available).
The patch also fixes all existing cases of overlapping
core masks.
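A generic sketch of the per-core lock-file idea (not the actual
implementation; the path and the use of flock() here are assumptions,
the real code follows spdk_pci_device_claim()):
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

static int
claim_cpu_core(unsigned int core)
{
	char path[64];
	int fd;

	snprintf(path, sizeof(path), "/var/tmp/spdk_cpu_lock_%u", core);
	fd = open(path, O_RDWR | O_CREAT, 0600);
	if (fd < 0) {
		return -1;
	}
	/* A second instance claiming the same core fails with EWOULDBLOCK. */
	if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
		close(fd);
		return -1;
	}
	return fd; /* keep the fd open for the lifetime of the process */
}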
Change-Id: Ie9aacb7523a3597b9aa20f2c3fa9efe4db92c44c
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14919
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Additionally, print the string representation of the ctrlr state, as it
makes debugging init failures much easier.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I572ef3d6f7d5bbd52039a8872733578c92be4c4a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15305
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Add an API, spdk_bdev_io_get_submit_tsc(), to get the submit tsc of a bdev
I/O, which can be used in bdev modules to avoid calling the expensive
spdk_get_ticks().
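For example, a bdev module could compute per-I/O latency at completion
without its own submit-time timestamp, roughly like this (a sketch; the
getter is assumed to take the bdev I/O and return the tsc as a uint64_t):
#include "spdk/bdev_module.h"
#include "spdk/env.h"

static uint64_t
module_io_latency_ticks(struct spdk_bdev_io *bdev_io)
{
	/* Reuse the tsc sampled by the bdev layer at submission time. */
	return spdk_get_ticks() - spdk_bdev_io_get_submit_tsc(bdev_io);
}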
Change-Id: Ifbcecb1bc663344997c5e73b72a1dfb5d0422946
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14989
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This fix allows the relaxed ordering feature to be used where it is
supported. libibverbs checks with the driver whether the relaxed ordering
access flag is supported and ignores it if not.
Experiments show that enabling it by default doesn't hurt performance but
allows reaching the desired performance on AMD EPYC systems. For example,
a fio read test (ConnectX-6, AMD EPYC 7763, two jobs, queue depth 32,
block size 32K) can starve down to 6-7 GiB/s without it; enabling this
option allows bandwidth above 21 GiB/s.
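A minimal sketch of what opting in looks like at the verbs level
(assuming a recent rdma-core where ibv_reg_mr() accepts optional access
flags; the actual SPDK change sets this in its own MR registration path):
#include <stddef.h>
#include <infiniband/verbs.h>

static struct ibv_mr *
register_buffer(struct ibv_pd *pd, void *buf, size_t len)
{
	/* The driver ignores IBV_ACCESS_RELAXED_ORDERING if unsupported. */
	int access = IBV_ACCESS_LOCAL_WRITE | IBV_ACCESS_REMOTE_READ |
		     IBV_ACCESS_REMOTE_WRITE | IBV_ACCESS_RELAXED_ORDERING;

	return ibv_reg_mr(pd, buf, len, access);
}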
Change-Id: I5983aed5d1f38ee7bec9c310597731c9a6a329da
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14885
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
It doesn't make sense to have the size of the doorbells fixed and then
calculate the maximum number of queue pairs based on it; do it the other
way round. Also, add some sanity checks based on the spec.
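For reference, with a doorbell stride of 0 each queue pair consumes two
4-byte doorbells (SQ tail and CQ head), so the relationship being
inverted here is roughly:
#include <stddef.h>
#include <stdint.h>

/* Sketch: queue pairs (including the admin queue) that fit in a doorbell
 * region, assuming CAP.DSTRD == 0, i.e. a 4-byte doorbell stride. */
static uint32_t
max_queue_pairs_from_doorbells(size_t doorbells_size)
{
	return doorbells_size / (2 * sizeof(uint32_t));
}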
Signed-off-by: Thanos Makatos <thanos.makatos@nutanix.com>
Change-Id: I17e3509fb0a011128ca089ce78b7a296262e6f8e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14932
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The bdev*_with_md APIs now allow passing a NULL md
pointer, so calling this function without checking
for metadata simplifies the code.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I364a646630bd36120231ea87a41fea05df51befb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15090
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
In the following patches, we will add a feature to inject data
corruption into the error bdev module. For read I/O, we will have
to inject data corruption at completion. However, if we use
spdk_bdev_part_submit_request(), this will not be possible because we
cannot add any custom operation into the completion callback.
To fix the issue, modify spdk_bdev_part_submit_request() and
rename it to spdk_bdev_part_submit_request_ext().
Fortunately, we can use stored_user_cb in struct spdk_bdev_io.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I46d3c40ea88a3fedd8a8fef6b68ee417c814a7a1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15002
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
A session in vhost means an active socket connection from a
client (e.g. QEMU or the SPDK vhost initiator), but the device
state could be `started` or `stopped` because users may
remove the device's driver in the VM. So in
`foreach_session` we can always call the callback function
without checking the session state, and the callback function
may check the device state if necessary.
Change-Id: Id0fc8c7f6f0915a55a738f0c87ebe6539f7fb2db
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15038
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Now we will start the device (virtio-blk and virtio-scsi) when
there is a valid I/O queue (VRING_KICK message); the backend
device's `start_session` callback will ensure this check. So
when processing VRING_KICK messages for each vring, we can
just call `new_device` if `started` is false. If `started`
is true, the device is already started and it's safe
for us to add one more vring.
With this change, we don't need to wait for the return value
of `start_session` in synchronous mode; just returning is OK.
Fix #2518.
Change-Id: I92ba3d4e5c38422d7697c1d13180a4a48f0dd4cd
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14981
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Currently we stop/start the device multiple times when a new
vring is added, and we also stop/start the device when
setting a vring's callfd. Actually we only need to start
the device after an I/O queue is enabled; DPDK rte_vhost
will not help us start the device in some scenarios,
so this is controlled in SPDK.
Now we improve the workaround to make it consistent with
the vhost-user specification.
For each SET_VRING_KICK message, we set up the newly
added vring and then try to start the device.
For each SET_VRING_CALL message, we add one more to the
interrupt count; previously this was done when enabling
the vring, which is not accurate.
We stop the device before processing the first
GET_VRING_BASE message.
With the above changes, we start/stop the device once, and
any vrings added after starting the device will be
polled in the next `vdev_worker` poller.
Change-Id: I5a87c73d34ce7c5f96db7502a68c5fa2cb2e4f74
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14928
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
`vdev_worker` in vhost-scsi is used to process the request queues,
and `vdev_mgmt_worker` is used to process the event and control
queues, so we don't need to call `vhost_session_used_signal` in
`vdev_worker`; just remove it.
Change-Id: I86f3e90890e6defba69b01fec131afe1adad3a49
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14927
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Currently we allocate all VQs' tasks when starting
the device, which does not allow us to add a new VQ after
starting the device, so we move the task allocation to the
VQ setup function.
Change-Id: I59cfc393d66779ab8a0eb704bc73bcede3f0a2a0
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14926
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
With this change, we can apply vq settings after the
VRING_KICK message; currently we stop/start the device
multiple times when a new vq is added.
Change-Id: Icba3132f269b5b073eaafaa276ceb405f6f17f2a
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14925
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Feature negotiation is done after the SET_FEATURES message; here we
move it into this message's context so that we can use the negotiated
features before starting the device.
Change-Id: Ic6388dbcebd72bc5ef182e65798d34c07f6fc35c
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14924
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Before starting a device, the memory table is already
there, so we can check it earlier.
Change-Id: I4996705501577cfa78c89621f7081eb0c3d4dd78
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14923
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
DPDK has merged changes which hide or remove some DPDK
objects such as rte_device and rte_driver from the
public API.
So we add copies of the necessary header files into
our tree, along with a 22.11-specific pci_dpdk
implementation.
These files are copied over exactly, except for one
#include which needs to change from <> to "" so that
it picks up the header in our tree instead of looking
for it in system headers.
Longer-term we may want to look at ways to automate checking
and updating of these header files. DPDK 22.11
isn't officially released yet, so the header files could
change, but we want to get this in now since without
it SPDK cannot build against DPDK tip at all.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I89ffd0abab52c404cfff911c1c9b0cd9e889241d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14570
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
A copy operation is defined by source and destination LBAs and an LBA
count to copy. For the destination LBA and LBA count we reuse the existing
fields `offset_blocks` and `num_blocks` in `struct spdk_bdev_io`. For the
source LBA a new field, `src_offset_blocks`, was added.
The `spdk_bdev_get_max_copy()` function can be used to retrieve the maximum
possible unsplit copy size; a zero value means unlimited. It is allowed
to submit a larger copy, but it will be split into several bdev IOs.
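For example, a caller can check up front whether the bdev layer would
split a copy (a sketch; only spdk_bdev_get_max_copy() from this change is
used, with its exact prototype assumed):
#include <stdbool.h>
#include "spdk/bdev.h"

static bool
copy_will_be_split(struct spdk_bdev *bdev, uint64_t num_blocks)
{
	uint32_t max_copy = spdk_bdev_get_max_copy(bdev);

	/* max_copy == 0 means the copy size is unlimited. */
	return max_copy != 0 && num_blocks > max_copy;
}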
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I2ad56294b6c062595c026ffcf9b435f0100d3d7e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14344
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Add a new parameter "-c" to display the per-channel IO statistics
for the requested bdev:
./scripts/rpc.py bdev_get_iostat -b Malloc0 -h
usage: rpc.py [options] bdev_get_iostat [-h] [-b NAME] [-c]
optional arguments:
-h, --help show this help message and exit
-b NAME, --name NAME Name of the Blockdev. Example: Nvme0n1
-c, --per-channel Display per channel IO stats for specified device
This could give more intuitive information on each channel's processing
of the IOs, with the associated thread, on the same bdev.
Please also be aware that the IO statistics are collected from each SPDK
thread's channel, so they relate primarily to the SPDK thread. In the
dynamic scheduling case, different SPDK threads could be running on the
same core. In that case, each channel's IO statistics are returned to
the RPC call separately, and further parsing of the data is needed to
get per-core information, although usually there is one thread per core.
Alternatively, the user could run the framework_get_reactors RPC method
to get the relationship between threads and CPU cores, so as to get
precise information on the IOs running on each thread and each core for
the same bdev.
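For example, per-channel stats for a single bdev can then be requested with:
./scripts/rpc.py bdev_get_iostat -b Malloc0 -c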
Change-Id: I39d6a2c9faa868e3c1d7fd0fb6e7c020df982585
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13011
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
And also related function pointers and APIs:
spdk_bdev_for_each_channel_msg;
spdk_bdev_for_each_channel_done;
spdk_bdev_for_each_channel_continue;
Change-Id: I52f0f6f27717d53c238faf2f998810c9c5ee45d4
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14614
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
The following patches will allow the caller to specify a custom
completion callback to spdk_bdev_part_submit_request(). To do it
easily, consolidate completions of all I/O types into
bdev_part_complete_io().
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I083695189daa7e5271787c50947e428d01a83677
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15001
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
We do not support Soft RoCE anymore. Remove a workaround for Soft RoCE's
bug where we may receive a completion without an error status after the
qpair is disconnected/destroyed. Then add an assert to check that
rdma_req->req is not NULL. This will simplify the code and the following
patches.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I80c349053adc0f79679eaf8a5d7265d555d3c2b0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14909
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The following patches will support SRQ, and the SRQ will be per poller.
We will need the SRQ in nvme_rdma_cq_process_completions().
It is not possible to identify the poller if the poll_group is passed to
nvme_rdma_cq_process_completions().
Based on these thoughts, add a poll_group pointer to the poller and pass
the poller to nvme_rdma_cq_process_completions() instead of the poll_group.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I322a7a0cc08bdcc8e87e720ad65dd8f0b6ae9112
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14282
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
NVMe-RDMA target has a helper function get_rdma_qpair_from_wc() and
uses it to identify a qpair from a WC.
NVMe-RDMA initiator has a similar function
nvme_rdma_poll_group_get_qpair_by_id().
NVMe-RDMA initiator will support SRQ in the following patches, and
it will want to identify a qpair from a WC.
get_rdma_qpair_from_wc() of NVMe-RDMA target uses wc->qp_num internally
anyway.
However, the upcoming custom transport for RDMA will have to use other
variables of WC.
Hence, it will be convenient to pass WC instead of qp_num if we consider
future enhancements.
Based on these thoughts, for the NVMe-RDMA initiator, rename
nvme_rdma_poll_group_get_qpair_by_id() to get_rdma_qpair_from_wc(),
remove the unnecessary declaration, and pass the WC instead of qp_num.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I01ead4730207e2c6ac53b83f151bd5f977a11465
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14279
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Poller will have more shared resources when SRQ is supported.
This is a preparation.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Ic3d1cb93dde3f53653a9536a103e5518cebd58e1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14173
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
nvme_rdma_ctrlr_disconnect_qpair() does not poll the qpair until it is
actually disconnected if it is in a poll group even if its async mode
is disabled. Hence, spdk_nvme_ctrlr_free_io_qpair() removes the qpair
from a poll group when it is being disconnected.
On the other hand, I/O qpair is destroyed after it is actually
disconnected.
When SRQ is enabled and used, an SRQ is destroyed if the corresponding
poller does not have any I/O qpair after an I/O qpair is removed from
the poller.
In particular, if we use spdk_nvme_ctrlr_free_io_qpair(), an SRQ is
destroyed before the corresponding I/O qpairs are destroyed.
Destroying the SRQ failed because it was still referenced by I/O qpairs.
This bug was found when running the SPDK NVMe perf tool with SRQ.
The reason was that we relied on nvme_rdma_poll_group_process_completions()
to call disconnected_qpair_cb after the qpair was actually disconnected.
However, it is ensured that nvme_rdma_poll_group_process_completions()
calls disconnected_qpair_cb for any disconnected qpair.
Hence, remove the check that qpair->poll_group is not NULL from
nvme_rdma_ctrlr_disconnect_qpair() and update the comment.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I0fde0d827eec3280e1cc5a0fce34d163a6069bc4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14908
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
With RDMA, the admin poller can experience a remote disconnect when
processing completions. The admin qpair will be disconnected to handle
this. The disconnect code path will manually complete queued aborts.
However, the completion callback for the abort will attempt to resubmit
other queued aborts from the queue, which can result in a very deep call
stack and eventually cause a segfault.
The fix is to not resubmit queued aborts if the admin qpair is in any
kind of failed state.
Change-Id: I4a6f959232c8a1bd30c87ca50459014e556cbaa0
Signed-off-by: Vasuki Manikarnike <vasuki.manikarnike@hpe.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15114
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
A loop inside 'nvme_tcp_qpair_process_completions' makes
'max_completions' actually behave like a minimum:
do {
	rc = nvme_tcp_read_pdu(tqpair, &reaped);
	[...]
} while (reaped < max_completions);
Before this change, the 'max_completions' constraint, in its true sense,
was not respected, and the loop inside 'nvme_tcp_read_pdu' could be
executed indefinitely as long as a recv state changed.
To prevent this behavior, max_completions must be passed to
'nvme_tcp_read_pdu' and used as an additional exit condition.
Signed-off-by: Szulik, Maciej <maciej.szulik@intel.com>
Change-Id: I28da962f4a62f08ddb51915b5d0dae9611a82dee
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15136
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Some reset/disable paths are freeing the shadow doorbells without
switching the SQs back to BAR0. Fix this up, and add a small cleanup
when initializing the shadow doorbells.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Ia5e5b91b7dc696a558eb0ad59cc554abced47cca
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14901
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
To support SQs allocated to a poll group other than the controller's
main poll group, we need to make sure to poll those SQs when we wake up
and handle the controller interrupt. As they will be running in a
separate SPDK thread, we will arrange for all poll groups to wake up
when we receive an interrupt corresponding to a vfio-user message
arriving.
This can mean needless wakeups: we don't (yet) have a mechanism to only
wake up the poll groups that correspond to a particular SQ write.
Additionally, as we don't have any notion of a poll group per
controller, this ends up polling all SQs in the entire poll group, not
just the ones corresponding to the controller we were handling.
As this has potential performance issues in many cases, it defaults to
disabled.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I3d9f32625529455f8d55578ae9cd7b84265f67ab
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14120
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
When crc32c is invoked with a multi-entry input iov,
only the last op has crc_dst set, in order to write the final
crc value into the user-supplied location.
spdk_idxd_process_events() writes the value into *op->crc_dst
for every successfully completed CRC op,
UNLESS crc_dst is NULL.
The problem is that _idxd_prep_batch_cmd(), which allocates
new ops, left op->crc_dst uninitialized.
This results in a memory corruption (use after free)
in the following scenario:
1) op A is allocated and crc_dst is set to point to user memory X.
2) Op A is completed.
3) User memory X is freed.
4) Ops B and C are allocated (chained), C has crc_dst set.
=> B reuses op A's memory and crc_dst still points to the
now stale user location (1).
5) B is completed, spdk_idxd_process_events() writes into X
as B->crc_dst == X.
Fix: _idxd_prep_batch_cmd() should initialize crc_dst to NULL.
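A self-contained illustration of the hazard and the fix (generic code,
not the actual idxd implementation; names are simplified):
#include <stddef.h>
#include <stdint.h>

struct op {
	uint32_t *crc_dst; /* where to store the final CRC, or NULL */
};

static struct op op_pool[4];

/* The fix: reset crc_dst when handing out a (possibly recycled) op. */
static struct op *
op_alloc(size_t i)
{
	struct op *op = &op_pool[i];

	op->crc_dst = NULL;
	return op;
}

/* Completion path: only the op that owns a user location writes to it. */
static void
op_complete(struct op *op, uint32_t crc)
{
	if (op->crc_dst != NULL) {
		*op->crc_dst = crc;
	}
}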
Signed-off-by: Anton Eidelman <anton@lightbitslabs.com>
Change-Id: I9e7d57ec43a8fbcb3750906015a5cb7291278c35
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15115
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
We were missing a check for when ISAL uses the complete output buffer
on compression, to determine whether it was a perfect fit or
simply not enough buffer was provided.
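A hedged sketch of the missing distinction (assuming the ISA-L igzip API;
the real module's field and variable names may differ): when the output
buffer is fully consumed, it is only a perfect fit if ISA-L also reports
the stream as finished.
#include <stdbool.h>
#include <isa-l/igzip_lib.h>

static bool
compression_fit_output(struct isal_zstream *strm)
{
	if (strm->avail_out == 0) {
		/* Perfect fit only if the whole stream was consumed. */
		return strm->internal_state.state == ZSTATE_END;
	}
	return true;
}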
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I73532666f50cb9fbef3c42f6bfb25fc5c7de01c6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14874
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Prevent the user from switching back to the static scheduler after
a different scheduler has been selected. Currently we
do not have a way to save the initial thread distribution
configuration, so each time the user switches from the dynamic
scheduler back to static, the SPDK threads may end up on
different reactors. This would cause a discrepancy in the
performance statistics of SPDK managed by the static scheduler.
Change-Id: Ic17a6be55eaea0e1a748f92e01f7075540403637
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15055
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This helps generate slightly better code in this function,
which can have a noticeable impact for high trace
event workloads.
Tested with bdevperf, single malloc or null bdev,
qd=32, 512B randreads on a single Xeon core.
Specify "-e bdev" to enable bdev trace events.
Null:
Before: 8.09M/s (123ns per IO)
After: 8.68M/s (115ns per IO)
Malloc:
Before: 4.21M/s (237ns per IO)
After: 4.34M/s (230ns per IO)
Note that each bdev I/O generates two trace events (START
and END) - meaning this change removes 7-8ns of overhead
for every 2 trace events, at least on my system.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I7021b7f9e28b4a7cb16f8a97b4d4004ae165efd2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15096
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>