By default, failback to the preferred I/O path is done automatically
when it is restored. Some users may want to keep using the backup I/O
path even after the preferred I/O path is restored. In this case,
bdev_nvme_set_preferred_path can be used to do a manual failback.
We could clear/fill the I/O path cache more strictly, but that would
be complicated and error-prone. This patch makes the minimal change
and just skips the obvious case.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I78fe5faee6ff04e88ae3d7c6be6da1c20637c912
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12431
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The new blobstore ext API is used when the user
provides ext_io_opts in the bdev layer.
To store blobstore ext_io_opts, vbdev_lvol reports
a non-zero get_ctx_size in the bdev module interface.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I64076b5369533be0c1d69ca48aef9d70a9abe488
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11373
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
These functions accept an optional spdk_blob_ext_io_opts
structure. If this structure is provided by the user,
then the readv/writev_ext ops of the base dev will be used
in the data path.
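A minimal usage sketch (the spdk_blob_ext_io_opts field names -
size, memory_domain, memory_domain_ctx, user_ctx - are assumed here;
treat this as illustrative, not the exact patch code):

    #include "spdk/blob.h"
    #include "spdk/dma.h"

    /* Illustrative only: issue an extended read where the payload may live
     * in a caller-provided memory domain (NULL means regular host memory). */
    static void
    blob_readv_ext_example(struct spdk_blob *blob, struct spdk_io_channel *ch,
                           struct iovec *iov, int iovcnt,
                           uint64_t offset, uint64_t length,
                           struct spdk_memory_domain *domain, void *domain_ctx,
                           spdk_blob_op_complete cb_fn, void *cb_arg)
    {
            struct spdk_blob_ext_io_opts ext_opts = {
                    .size = sizeof(ext_opts),       /* assumed size field */
                    .memory_domain = domain,
                    .memory_domain_ctx = domain_ctx,
                    .user_ctx = NULL,
            };

            spdk_blob_io_readv_ext(blob, ch, iov, iovcnt, offset, length,
                                   cb_fn, cb_arg, &ext_opts);
    }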
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I370dd43f8c56f5752f7a52d0780bcfe3e3ae2d9e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11371
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Introduce the spdk_blob_ext_io_opts structure, which
is used in the new *_ext functions.
The zeroes dev is updated with an implementation of
readv_ext which uses the memory domain's memzero
or a regular memset().
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Id94542196eff999827bf00591fd43804256fccb4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11369
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
In the following patches, nvme_ctrlr_process_init() will be used to
disable the controller when disconnecting the admin qpair for the PCIe
transport. In this case, we will have to exit nvme_ctrlr_process_init()
after CSTS.RDY is 0. However, spdk_nvme_ctrlr_reset() and
spdk_nvme_ctrlr_reconnect_poll_async() have to continue
nvme_ctrlr_process_init() until the controller becomes ready.
To clearly differentiate stopping from continuing, add a new state,
NVME_CTRLR_STATE_DISABLED, to enum nvme_ctrlr_state.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ic0a5fb7114d4eeb1cefec28bc404184768fb0a96
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12613
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
We enable multiple engines by:
* getting rid of the globals that point to the one available HW
engine and the one available SW engine
* adding a submit_tasks() entry point for the SW engine so that
it is treated like any other engine, allowing us to just call
submit_tasks() on the engine assigned to the opcode instead of
checking what is supported
* changing the definition of engine capabilities from
"HW accelerated" to simply "supported"
* during init, using a global (g_engines_opc) that contains engines
and is indexed by opcode so we know what the best engine is for each
opcode (see the sketch after this list)
* future patches will add RPCs to override engine priorities or
specifically assign one or more opcodes to an engine.
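A conceptual sketch of the opcode-indexed dispatch (types and names are
simplified stand-ins, not the actual SPDK accel structures):

    enum example_opcode { EX_OPC_COPY, EX_OPC_FILL, EX_OPC_CRC32C, EX_OPC_LAST };

    struct example_engine {
            int (*submit_tasks)(void *ch, void *task);
    };

    /* Filled in at init time with the best engine for each opcode. */
    static struct example_engine *g_engines_opc[EX_OPC_LAST];

    static int
    example_submit(enum example_opcode opc, void *ch, void *task)
    {
            /* Every engine, including the SW fallback, implements
             * submit_tasks(), so dispatch is a plain table lookup. */
            return g_engines_opc[opc]->submit_tasks(ch, task);
    }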
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I9b9f3d5a2e499124aa7ccf71f0da83c8ee3dd9f9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11870
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The NVMe bdev module supported the active-passive policy for multipath mode
first. With this patch, the NVMe bdev module supports the active-active policy
for multipath mode as well. Following the Linux kernel native NVMe multipath,
the NVMe bdev module uses a round-robin algorithm for the active-active
policy.
The multipath policy, active-passive or active-active, is managed per
nvme_bdev. The multipath policy is copied to all corresponding
nvme_bdev_channels.
Unlike active-passive, active-active caches even non-optimized
paths to provide load balancing across multiple paths.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ie18b24db60d3da1ce2f83725b6cd3079f628f95b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12001
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If we specify a preferred path manually for each NVMe bdev, we can
realize simple static load balancing and make failover more
controllable in multipath mode.
The idea is to move the I/O path to the specified NVMe-oF controller to
the head of the list and then clear the I/O path cache for each NVMe bdev
channel (see the sketch below). We could set the I/O path into the I/O path
cache directly, but that would have to be conditional and would make the
code very complex. Hence, let find_io_path() do that.
However, an NVMe bdev channel may be acquired after setting the preferred
path. To cover such a case, sort the nvme_ns list of the NVMe bdev too.
This feature supports only multipath mode. The NVMe bdev module supports
failover mode too, but to support the latter, the new RPC would need to
take a trid as parameters and the code and the usage would become very
complex. Add a note for this limitation.
To verify each case exactly, add unit tests.
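A conceptual sketch of the per-channel update (structures simplified,
not the actual bdev_nvme types):

    #include "spdk/queue.h"

    struct example_io_path {
            const void *ctrlr;
            STAILQ_ENTRY(example_io_path) link;
    };

    struct example_channel {
            STAILQ_HEAD(, example_io_path) io_path_list;
            struct example_io_path *current_io_path;    /* cached path */
    };

    static void
    set_preferred_path_on_channel(struct example_channel *ch,
                                  const void *preferred_ctrlr)
    {
            struct example_io_path *path;

            STAILQ_FOREACH(path, &ch->io_path_list, link) {
                    if (path->ctrlr == preferred_ctrlr) {
                            /* Move the preferred path to the head and drop the
                             * cache so find_io_path() re-evaluates on the next I/O. */
                            STAILQ_REMOVE(&ch->io_path_list, path, example_io_path, link);
                            STAILQ_INSERT_HEAD(&ch->io_path_list, path, link);
                            break;
                    }
            }
            ch->current_io_path = NULL;
    }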
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ia51c74f530d6d7dc1f73d5b65f854967363e76b0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12262
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: <tanl12@chinatelecom.cn>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
spdk_bdev_get_acwu() returns a 1-based value, so we need
to subtract 1 from it before assigning the value to
nsdata->nacwu.
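A hedged sketch of the conversion (simplified from the nvmf code that
builds the Identify Namespace data):

    #include "spdk/bdev.h"
    #include "spdk/nvme_spec.h"

    static void
    fill_nacwu(struct spdk_nvme_ns_data *nsdata, const struct spdk_bdev *bdev)
    {
            /* spdk_bdev_get_acwu() is 1-based, NACWU is 0-based. */
            nsdata->nacwu = spdk_bdev_get_acwu(bdev) - 1;
    }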
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I32708b28a35670cba6013a48b79389fa48226285
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12399
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
When sending a fused compare-and-write command, we pass a callback,
bdev_nvme_comparev_and_writev_done, that we expect to be called twice
before marking the I/O as completed. In order to detect whether a call to
bdev_nvme_comparev_and_writev_done is the first or the second one, we
currently rely on the opcode in cdw0. However, cdw0 may be set to 0,
especially when aborting the command. This may cause use-after-free
issues and may cause the user callbacks to be called twice instead of once.
Use a bit in the nvme_bdev_io instead to keep track of the number of
calls to bdev_nvme_comparev_and_writev_done.
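The idea, as a self-contained sketch (names here are illustrative, not
the exact nvme_bdev_io field):

    #include <stdbool.h>

    struct fused_io_ctx {
            bool first_fused_completed;
    };

    /* Returns true only on the second completion, independent of cdw0
     * (which may be 0, e.g. for aborted commands), so the bdev I/O is
     * completed exactly once. */
    static bool
    fused_second_completion(struct fused_io_ctx *ctx)
    {
            if (!ctx->first_fused_completed) {
                    ctx->first_fused_completed = true;
                    return false;
            }
            return true;
    }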
Signed-off-by: Alex Michon <amichon@kalrayinc.com>
Change-Id: I0474329e87648e44b08998d0552b2a9dd5d34ac2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12180
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This patch adds an extra spdk_thread_send_msg() call to destroy a qpair
to make sure that it isn't freed from the context of a socket write
callback. Otherwise, spdk_sock_close() won't abort pending requests,
causing their completions to be executed after the qpair is freed.
Fixes #2471
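The deferral pattern, as a hedged sketch (the destroy callback and its
context are placeholders, not the actual nvmf/tcp names):

    #include "spdk/thread.h"
    #include <stdlib.h>

    static void
    destroy_qpair_msg(void *ctx)
    {
            free(ctx);      /* stands in for the real qpair teardown */
    }

    static void
    schedule_qpair_destroy(void *qpair_ctx)
    {
            /* Runs destroy_qpair_msg() later on this thread, outside the
             * socket write callback's stack, so spdk_sock_close() can still
             * abort any pending requests first. */
            spdk_thread_send_msg(spdk_get_thread(), destroy_qpair_msg, qpair_ctx);
    }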
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia510d5d754baccca1e444afdb10696ab9b58e28b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12332
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
There is a race condition when a bdev is unregistered while resets are
submitted from the upper layer very frequently.
spdk_io_device_unregister() may fail because it is called while
spdk_for_each_channel() is being processed:
spdk_io_device_unregister io_device bdev_Nvme0n1 (0x7f4be8053aa1)
has 1 for_each calls outstanding
To avoid this failure, defer calling spdk_io_device_unregister() until
the reset completes if a reset is in progress when unregistration is
ready to run, and have the reset completion call
spdk_io_device_unregister() later.
A bdev cannot be opened if it is already being deleted, so we do not
need to hold a mutex.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ida1681ba9f3096670ff62274b35bb3e4fd69398a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12222
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Previously, if a namespace was in the ANA inaccessible state, I/O was
queued indefinitely. Fix this issue according to the NVMe spec.
Add a temporary poller anatt_timer and a flag ana_transition_timedout for
each nvme_ns.
Start anatt_timer when the nvme_ns enters ANA transition. If anatt_timer
expires, set ana_transition_timedout to true. Cancel anatt_timer or
clear ana_transition_timedout when the nvme_ns exits ANA transition.
nvme_io_path_become_available() returns false if ana_transition_timedout
is true.
Add unit test cases to verify these additions.
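A hedged sketch of the timer handling (struct and field names are
illustrative, not the exact nvme_ns members):

    #include "spdk/thread.h"
    #include "spdk/util.h"

    struct example_ns {
            struct spdk_poller *anatt_timer;
            bool ana_transition_timedout;
    };

    static int
    anatt_timer_expired(void *arg)
    {
            struct example_ns *ns = arg;

            ns->ana_transition_timedout = true;
            spdk_poller_unregister(&ns->anatt_timer);
            return SPDK_POLLER_BUSY;
    }

    /* Called when the namespace enters ANA transition; anatt_sec is the
     * controller's ANA Transition Time in seconds. */
    static void
    start_anatt_timer(struct example_ns *ns, uint32_t anatt_sec)
    {
            ns->anatt_timer = SPDK_POLLER_REGISTER(anatt_timer_expired, ns,
                                                   anatt_sec * SPDK_SEC_TO_USEC);
    }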
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ic76933242046b3e8e553de88221b943ad097c91c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12194
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Monica Kenguva <monica.kenguva@intel.com>
This reduces a lot of casting.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: Ibc04f422858642d0e20c9b020bb6c5d1b70256fe
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11534
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
The code to handle a lingering qpair when deleting it was really
complicated.
The RDMA transport can now connect or disconnect a qpair asynchronously,
so we can fold the code that handles the lingering qpair into the
code that disconnects the qpair.
If the disconnected qpair is still busy, defer completion of the
disconnection until the qpair becomes idle.
If a poll group is not used, we can complete the disconnection immediately
because the CQ is already destroyed.
The related data and unit test cases are not necessary anymore,
so delete them in this patch.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ic8f81143fcad0714ac9b7db862313aa8094eeefb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11778
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Add three states, INITIALIZING, EXITING, and EXITED, to the rqpair
state.
Add an async parameter to nvme_rdma_ctrlr_create_qpair() and set it
to opts->async_mode for I/O qpairs and true for the admin qpair.
Replace all nvme_rdma_process_event() calls with
nvme_rdma_process_event_start() calls.
nvme_rdma_ctrlr_connect_qpair() sets rqpair->state to INITIALIZING
when starting to process CM events.
nvme_rdma_ctrlr_connect_qpair_poll() calls
nvme_rdma_process_event_poll() with ctrlr->ctrlr_lock held if the qpair
is not the admin qpair.
nvme_rdma_ctrlr_disconnect_qpair() returns before polling CM events if
qpair->async is true or qpair->poll_group is not NULL, or polls
CM events until completion otherwise. Add comments to clarify why
we do this.
nvme_rdma_poll_group_process_completions() does not process submissions
for any qpair which is still connecting.
Change-Id: Ie04c3408785124f2919eaaba7b2bd68f8da452c9
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11442
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The spdk_vhost_dev structure should only contain generic fields
that are to be used by the vhost, vhost_blk, or vhost_scsi
layers.
The vhost_user backend can hold its properties in
spdk_vhost_user_dev, which is maintained within rte_vhost.
Both structures contain references back to each other.
The reference in spdk_vhost_dev is a void pointer to
allow future transports to keep the reference
to their own structures.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I68640c524426d885c20242146365ba242fa9df8e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11813
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
In support of upcoming patches, and to greatly simplify things,
the capabilities enum which held bit positions for each opcode
has been removed. Only the opcodes enum remains, and thus only
opcodes are used throughout. For the capabilities bitmap, a helper
function is added to convert from opcode to bit position. Right
now it is used in the I/O path, but in upcoming patches that goes away
and the conversion is only done at init time.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: Ic4ad15b9f24ad3675a7bba4831f4e81de9b7bc70
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11949
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Do not run tests if the ASAN and Valgrind options
are both enabled.
Fixes #2422
Signed-off-by: Kamil Godzwon <kamilx.godzwon@intel.com>
Change-Id: I50c91bede687f0aee571c1f2540530a7fafcb49c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11998
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Previously spdk_vhost_dev_backend held callbacks
for vhost_blk and vhost_scsi functionality, along
with ones that are called by the vhost_user backend.
This patch separates those callbacks into two
structures:
- spdk_vhost_dev_backend - to be implemented by vhost_blk
and vhost_scsi
- spdk_vhost_user_dev_backend - implemented only by the
vhost_user backend; callbacks for session management
specific to that transport
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I348090df5dddeb2b1945b082b85aec53d03c781b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11812
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
When there is no group, the real implementation handles this by
returning -ENOENT, so do the same in the test.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I405b6f60bf4dcdb22c57e48bbaf66d57522a49c5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11508
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Count disconnecting a queue pair as activity so that the unit test
poll_threads() calls don't bail out until the disconnected_qpair_cb is
called at least once.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: Idc437d6c589dbf133bfcbb5edba1087f928a718c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11507
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
This was neither set nor used.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I3119135843c5fc0b8724e593db40df46e6b5bdb0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12097
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The concat module can combine multiple underlying bdevs into a single
bdev. It is a special raid level. You can add a new bdev to the end of
the concat bdev; the concat bdev size then increases, and the layout of
the existing data does not change. This is the major difference
between concat and raid0. If you add a new underlying device to raid0,
the whole data layout will be changed. So the concat bdev is extendable.
Change-Id: Ibbeeaf0606ff79b595320c597a5605ab9e4e13c4
Signed-off-by: Peng Yu <yupeng0921@gmail.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11070
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
To execute a callback function for each registered bdev or unclaimed
bdev, add new public APIs, spdk_for_each_bdev() and
spdk_for_each_bdev_leaf().
These functions avoid races by opening each bdev before and
closing it after executing the provided callback function.
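A hedged usage sketch for the new iterator (illustrative, assuming the
callback signature used by this series):

    #include "spdk/bdev.h"

    static int
    count_bdev_cb(void *ctx, struct spdk_bdev *bdev)
    {
            (*(int *)ctx)++;
            return 0;       /* returning non-zero would stop the iteration */
    }

    static int
    count_registered_bdevs(int *count)
    {
            *count = 0;
            /* Each bdev is opened before and closed after the callback runs. */
            return spdk_for_each_bdev(count, count_bdev_cb);
    }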
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I59b702ffec7b4fc5e9779de5a3a75d44922b829b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12088
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Use spdk_bdev_readv/writev_blocks_ext even when
there are no ext opts passed by the bdev layer.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I0b9f17150cdba1a1023478bae745ab4438ea99bb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10070
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This is a preparation for supporting memory domains
in bdev_raid.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I3a6e01eccd4d7e4bc197dc5ffe268d42081d41de
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11429
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
If the bdev doesn't support any memory domains, then allocate an
internal bounce buffer, pull data for write operations before
I/O submission, and push data back to the memory domain once the I/O
completes for read operations.
Update the test tool and add a simple pull/push functions
implementation.
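A hedged sketch of the write-path pull step (assuming the generic
spdk_memory_domain_pull_data() entry point; the completion callback
fires asynchronously):

    #include "spdk/dma.h"

    static void
    pull_done(void *ctx, int rc)
    {
            /* rc == 0: the bounce buffer now holds the data; submit the I/O. */
    }

    static int
    pull_to_bounce(struct spdk_memory_domain *src_domain, void *src_domain_ctx,
                   struct iovec *src_iov, uint32_t src_iovcnt,
                   struct iovec *bounce_iov, uint32_t bounce_iovcnt, void *ctx)
    {
            return spdk_memory_domain_pull_data(src_domain, src_domain_ctx,
                                                src_iov, src_iovcnt,
                                                bounce_iov, bounce_iovcnt,
                                                pull_done, ctx);
    }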
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Ie9b94463e6a818bcd606fbb898fb0d6e0b5d5027
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10069
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Replace spdk_bdev_get_by_name() + spdk_bdev_unregister() with
spdk_bdev_unregister_by_name() wherever possible.
This simplifies the code and makes it more reliable.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I91388c9d0b2e244cb745720a480803b03c42a226
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12066
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
To unregister a bdev safely, we had to call
spdk_bdev_open_ext(), spdk_bdev_desc_get_bdev(), spdk_bdev_unregister(),
and then spdk_bdev_close(). This was correct but complicated.
Hence add a new public API, spdk_bdev_unregister_by_name(), which
performs the whole correct sequence of bdev unregistration.
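A hedged sketch of how callers can use the new API in place of the
open/get/unregister/close dance:

    #include "spdk/bdev.h"
    #include "spdk/bdev_module.h"

    static void
    unregister_done(void *cb_arg, int rc)
    {
            /* rc is 0 on success, a negative errno otherwise. */
    }

    static int
    remove_bdev(const char *name, struct spdk_bdev_module *module)
    {
            /* One call now performs open -> get bdev -> unregister -> close. */
            return spdk_bdev_unregister_by_name(name, module, unregister_done, NULL);
    }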
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I9068d4ac49dca944436e0ba587308fd356dfef75
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12065
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The process of matching a qpair to a poll group is split into
two distinct parts that occur on different threads.
See spdk_nvmf_tgt_new_qpair().
This results in a race condition for TCP between spdk_sock_map_lookup()
and spdk_sock_map_insert(), which are called in spdk_nvmf_get_optimal_poll_group()
and spdk_nvmf_poll_group_add() respectively.
Fixes #2113
This patch picks a hint from nvmf_tcp for the next poll group,
which is then passed down to spdk_sock_map_lookup().
When a matching placement_id exists but does not have
a poll group assigned, the hint will be used.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I4abde2bc9c39225c9f5dd7c3654fa2639bb0a27f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10271
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
An rdma buffer for stripping DIF metadata is added. The CPU strips the DIF
metadata and copies the data to the rdma buffer, improving the rdma write
bandwidth. In a 4KB random read test, the network bandwidth increased
from 79 Gbps to 99 Gbps and the IOPS increased from 2075K to 2637K.
Fixes issue #2418
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Change-Id: If1c31256f0390f31d396812fa33cd650bf52b336
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11861
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
When iovs are copied to or from the bounce buffer, the bounce buffer is
usually allocated from data_buf_pool for better performance and consists
of multiple iovs instead of a single buffer. Therefore, block-aligned
bounce buffers are supported.
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Change-Id: If56b21d9e46c73d4c956c227bec33ddd0ab9745b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11860
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
spdk_sock_map_insert() allows allocating a sock_map
entry without assigning any sock_group.
This is useful for cases where the placement_id is determined
by the component using spdk_sock_map_*. See the PLACEMENT_MARK mode:
placement_ids are allocated first, then an empty one is found
using spdk_sock_map_find_free().
Since the above is a valid use case, an entry in the sock_map
can exist without a group assigned. spdk_sock_map_lookup() has
to handle such cases rather than trigger an assert.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Ia717c38fef5e71fe44471ea12f61a5548463f0cf
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10725
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: wanghailiang <hailiangx.e.wang@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
The usage of the internal sock_map API was not unit tested
so far.
This patch adds the first set of unit tests for the sock_map;
they are expanded, and some issues are fixed, later in the series.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Idfce8e19668a87f1d45d73310edb17d71d9f8bd8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10724
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
As we now only support a single WQ, there's no need for a table of
them and no need to assert that the stride from WQ to WQ is the
same as the WQ struct size.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I205f36aae22070f532653726dd75249bbafbe3ef
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12081
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
This is the first in a series of patches that will enable multiple
engines to exist at once and choose the best one based on their
priorities and capabilities; the public API will no longer be needed.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: Ia87b83aa2263745a94a822a160b6e97bb2e0dc19
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11948
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
With recent changes, libreduce should provide correct buffers
if the driver doesn't support SGL in/out. This patch verifies
that we don't use SGLs when they are not supported.
Since even a single buffer can be split on a 2MB page
boundary, it is not enough just to check the iov count.
Added asserts that the first elements of the mbufs are
not NULL to avoid scan-build errors.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I620e43bf5b1abd25cab412fe08346a6d767c9be9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11973
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
rte_pktmbuf_free() frees the given mbuf and any chained mbufs.
It can cause a double free of some mbufs if we free every mbuf
in a loop. Instead, use rte_pktmbuf_free_bulk(), which correctly
releases chained mbufs.
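A small sketch of the change in freeing pattern described above:

    #include <rte_mbuf.h>

    /* Before: calling rte_pktmbuf_free() on each array element could free
     * chained mbufs twice. After: one bulk call releases the whole array,
     * with chained segments handled correctly. */
    static void
    free_crypto_mbufs(struct rte_mbuf **mbufs, unsigned int cnt)
    {
            rte_pktmbuf_free_bulk(mbufs, cnt);
    }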
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I55fd7832ff656f519a4ed2f02de8ef1a0f637a02
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11972
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
In the compression operation we may have SGL input
if the user's buffer is fragmented or smaller than chunk_size.
If the backing device doesn't support SGL input, then
we should copy the user's buffers into decomp_buffer
(including padding, if any).
In the decompression operation, if the backing device
doesn't support SGL output, we use a single output buffer
which points to decomp_buffer. Once the operation
completes, we should copy the result into the user's buffers.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Ic7fddd38374bb6898256633eacd192dbaf36541a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11970
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The driver always creates a single group containing all of the engines
and a single work queue.
Change-Id: I83f170f966abbd141304c49bd75ffe4608f5ad03
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11533
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
It is not used for anything.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I1d967b2d0e404756f7ceda98ddc4ee9017ec83f7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11489
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
It turns out that this can stay on the stack.
Change-Id: I961366307dae5ec7413a86271cd1dfb370b8f9f3
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11488
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
These aren't ever accessed in the main I/O path, so we can read them in
whenever we need them and make the code a lot simpler.
Change-Id: Icfdbfe9f2d9db13f4d0d28b2b4103cd0c443bcf4
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11485
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
This is no longer needed.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I08c788ca0451e739804b568d613c1e52e071c61f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11794
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Fill is sent in as a uint8; we need to populate the full uint64
input with the uint8 pattern or we'll get a miscompare. This is
how idxd was doing it; instead of adding the same code to ioat,
just move it up a layer.
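The pattern expansion itself, as a generic sketch:

    #include <stdint.h>

    /* Expand the single fill byte into the 8-byte pattern the engine
     * consumes; e.g. 0xAB becomes 0xABABABABABABABAB. */
    static uint64_t
    expand_fill_pattern(uint8_t fill)
    {
            uint64_t pattern = fill;

            pattern |= pattern << 8;
            pattern |= pattern << 16;
            pattern |= pattern << 32;
            return pattern;
    }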
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: Ia4aab1c6230f35ab88bb8a0e3b8e16dbd93007c7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11947
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
- Fixed duplication of the key, key2, drv_name, cipher, etc. fields in
struct bdev_names and struct vbdev_crypto. Moved all of them into
the new struct vbdev_crypto_opts, which is re-used by both structs.
This also removes duplication in the error handling and finalization
logic that checks the keys are zeroed out and properly freed.
- Moved unhexlify into the vbdev rpc code. All keys are passed to the
vbdev already in binary form.
- Provide meaningful error messages in the rpc response on key
validation issues during setup of the crypto vbdev.
- Updated unit tests.
Signed-off-by: Yuriy Umanets <yumanets@nvidia.com>
Change-Id: I1fab8771bbbc0cd2f359f0d105fec28fb86893b3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11631
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This is done to correctly handle the metadata pointer, which is part
of the ext_opts structure. It will also be used by the next patch to
remove the memory_domain pointer if a request which uses local buffers
is split.
Force the user to set the correct ext_opts size and update the API
function descriptions.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I77517d70df34a998d718cc6474fb4c538a42918f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11349
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Bdev modules must not access the internal bdev_io
structure, so add a new pointer in a public
section. The pointer in the internal section will be
used in the next patch.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Ib631563015b3e5fa9300d22b7ae59d8db43c8275
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10421
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This patch is a preparation for enabling the memory domains
pull/push functionality. Since the memory domains API is
asynchronous, this patch makes the operations with bounce
buffers asynchronous.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Ieb1f2a0c151149af58cfd7607dbde4c76c3c288d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10420
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
SPDK has settled on what the optimal DSA configuration is, so let's
always use it.
Change-Id: I24b9b717709d553789285198b1aa391f4d7f0445
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11532
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
The names on these were changed.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: Ib75a60342c08f72dad39635a9244421c1cca5485
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11793
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Add a new flag is_disconnecting to struct spdk_nvme_ctrlr.
Separate calling nvme_ctrlr_disconnect() and nvme_ctrlr_disconnect_done()
by using the flag is_disconnecting.
Additionally, change nvme_ctrlr_fail() to skip setting ctrlr->is_failed
to true if ctrlr->is_disconnecting is true.
Change-Id: Ie2c74ba41f120662a30f6198751d07005d23abcf
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11000
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This is a preparation to make nvme_transport_ctrlr_disconnect_qpair()
asynchronous.
For nvme_transport_ctrlr_disconnect_qpair(), factor out operations after
returning from transport's specific ctrlr_disconnect_qpair() into a helper
function nvme_transport_ctrlr_disconnect_qpair_done().
Then move nvme_transport_ctrlr_disconnect_qpair_done() into the end of
the transport specific ctrlr_disconnect_qpair().
Additionally, remove the operation that overwrites the qpair state to
DISCONNECTED from nvme_transport_connect_qpair_fail(), because
this is duplicated and nvme_transport_ctrlr_disconnect_qpair() is
responsible for making the qpair disconnected even when it completes
asynchronously.
Change-Id: I9c8faa7039d306d3e31a8f51826755ce8840a8aa
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10851
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
For the RDMA transport, the adminq will detect a transport error first
because usually only the adminq polls CM events.
Change-Id: I7b22cc8883bf02198f1a90d2654c1de6f2e736e6
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11331
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
If a qpair is disconnected asynchronously, it takes time from detecting
the transport error to being actually disconnected. We should avoid
using the path as soon as possible after detecting any transport error.
The poll group clears the I/O path cache if it finds a transport error
and avoids using the path which had the transport error.
These changes will reduce the failover time.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I00580159a84372a115ed5e62a6ce13eed4368999
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11329
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
spdk_nvme_ctrlr_disconnect() will be made asynchronous in the
following patches, and so we will need some changes.
spdk_nvme_ctrlr_disconnect() disconnects the adminq and the ctrlr
synchronously now.
If spdk_nvme_ctrlr_disconnect() is made asynchronous,
spdk_nvme_ctrlr_process_admin_completions() will complete disconnecting
the adminq and the ctrlr, and will return -ENXIO only if the adminq is
disconnected. However, even now spdk_nvme_ctrlr_process_admin_completions()
returns -ENXIO if the adminq is disconnected.
So as a preparation, set a callback before calling spdk_nvme_ctrlr_disconnect()
and call the callback if it is set and spdk_nvme_ctrlr_process_admin_completions()
returns -ENXIO.
Additionally, fix the return value of bdev_nvme_poll_adminq() in this patch.
Change-Id: I2559f86bb8cf9a92b5b386ed816c00b08c9832df
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10950
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
As we do when deleting a ctrlr_channel, disconnect and then free the I/O
qpair in a ctrlr reset sequence.
Deleting a ctrlr_channel and resetting a ctrlr_channel may cause conflicts.
This patch processes such conflicts correctly.
If destroy_ctrlr_channel_cb() is executed between queuing and executing
reset_destroy_qpair(), reset_destroy_qpair() is not executed because
the ctrlr_channel is not found. In this case, destroy_qpair_channel()
starts disconnecting the qpair and deletes the ctrlr_channel. Then
disconnected_qpair_cb() releases a reference to the poll group.
If destroy_ctrlr_channel_cb() is executed between executing reset_destroy_qpair()
and disconnected_qpair_cb(), destroy_ctrlr_channel_cb() skips the
ctrlr_channel for the reset sequence.
Change-Id: I1f49f74b94aefbea178680aa53ded3a12876c676
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10766
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
At the moment MLX5 uses a different number of qp descriptors than the
other pmd crypto drivers. Add it to vbdev_crypto at init and re-use it
everywhere we need it.
Signed-off-by: Yuriy Umanets <yumanets@nvidia.com>
Change-Id: Iea4d4787fc5fd91f27c4a70cf78c5660f09bc854
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11878
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Since IV length is the same for all pmd crypto drivers,
AES_CBC_IV_LENGTH is renamed to IV_LENGTH.
Signed-off-by: Yuriy Umanets <yumanets@nvidia.com>
Change-Id: If8769db119eb599a17c267e8950f18f5a0ea995b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11875
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
spdk_nvme_poll_group has followed spdk_nvme_qpair in how it
processes I/O qpair deletion inside of a completion context.
spdk_nvme_qpair_process_completions() accesses the qpair after
returning from nvme_transport_qpair_process_completions(),
so the deferral is reasonable there.
On the other hand, if spdk_nvme_poll_group_process_completions()
can execute spdk_nvme_ctrlr_free_io_qpair() inside of a completion
context, the target qpair is guaranteed to be deleted after returning
from spdk_nvme_ctrlr_free_io_qpair(), and the target qpair is
not accessed anymore in spdk_nvme_poll_group_process_completions().
Remove the two variables, in_completion_context and num_qpairs_to_delete,
of spdk_nvme_transport_poll_group and the related code.
This change is really necessary to support the following case.
In the NVMe bdev module, a nvme_qpair has a qpair and a poll_group
channel. disconnected_qpair_cb calls spdk_nvme_ctrlr_free_io_qpair()
for the qpair and spdk_put_io_channel() on the poll_group channel.
spdk_nvme_ctrlr_free_io_qpair() is executed after unwinding the stack,
but spdk_put_io_channel() is executed immediately. The callback to
spdk_put_io_channel() calls spdk_nvme_poll_group_destroy(). However,
spdk_nvme_ctrlr_free_io_qpair() has not been executed yet, so
spdk_nvme_poll_group_destroy() fails.
Update the corresponding stub in the unit test as well.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Icd1f1daf049c6c7ffb28790fe87989a1060f8952
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11496
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This is another preparation to disconnect qpairs asynchronously.
Add a nvme_qpair object and move the qpair and poll_group pointers and
the io_path_list list from nvme_ctrlr_channel to nvme_qpair. nvme_qpair
is allocated dynamically when creating a nvme_ctrlr_channel, and
nvme_ctrlr_channel points to nvme_qpair.
We want to keep the number of dereferences in the I/O path small. Change
nvme_io_path to point to nvme_qpair instead of nvme_ctrlr_channel, and add
a nvme_ctrlr_channel pointer to nvme_qpair.
nvme_ctrlr_channel may be freed earlier than nvme_qpair. nvme_poll_group
lists nvme_qpair instead of nvme_ctrlr_channel, and nvme_qpair has a
pointer to nvme_ctrlr.
By using the nvme_ctrlr pointer of the nvme_qpair, the helper function
nvme_ctrlr_channel_get_ctrlr() is not necessary any more. Remove it.
Change-Id: Ib3f579d3441f31b9db7d3844ec56c49e2bb53a5d
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11832
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
In the following patches, spdk_nvme_ctrlr_disconnect_io_qpair() will
be changed to be asynchronous: spdk_nvme_ctrlr_disconnect_io_qpair()
will be called first and then spdk_nvme_ctrlr_free_io_qpair() after
the qpair is actually disconnected.
We will not be able to keep the current bdev_nvme_destroy_qpair()
function.
As a preparation, inline bdev_nvme_destroy_qpair() and remove it.
Additionally, this patch has the following changes.
Previously the I/O qpair was freed and then the I/O path caches were
cleared. Both are SPDK-thread local, so there is no ordering dependency
between these two operations. However, it will reduce the size of the
following patches if we clear the I/O path caches before freeing the
I/O qpair when the qpair is disconnected. Hence we clear the I/O path
caches and then free the I/O qpair.
Remove the DTRACE probe for bdev_nvme_destroy_qpair() for now.
It will be restored in the following patches.
Furthermore, fix a potential NULL pointer access in
bdev_nvme_create_qpair().
Change-Id: I0ab78ccb0d240e56b95b53179341afcd909a31f6
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10746
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
In test_nvme_free_request, there is a valgrind error:
Conditional jump or move depends on uninitialised value(s)
Signed-off-by: Rui Chang <rui.chang@arm.com>
Change-Id: I80f741cac9316d86b060419e3b6fb651fa018aa4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11826
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
If 0 - UINT32_MAX or UINT32_MAX - 0 is assigned to an int variable,
we cannot get the expected result.
Fix the bug and add a unit test case to verify the fix.
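An illustrative sketch of the idea (not the exact code being fixed):

    #include <stdint.h>

    /* Storing UINT32_MAX - 0 (or 0 - UINT32_MAX) in an int overflows;
     * computing the difference in a 64-bit signed type gives the
     * expected value. */
    static int64_t
    u32_delta(uint32_t prev, uint32_t curr)
    {
            return (int64_t)curr - (int64_t)prev;
    }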
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Iad2ea681ad8ad234e70c7310b58785a999612156
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11837
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
The following patches will enable us to specify I/O error resiliency
options per nvme_ctrlr as global options. To make this easier, move the
per-controller options about I/O error resiliency into struct nvme_ctrlr_opts.
prchk_flags is not exactly for resiliency, but move it into struct
nvme_ctrlr_opts too.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I85fd1738bb6e293cd804b086ade82274485f213d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11829
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
If 0 - UINT32_MAX or UINT32_MAX - 0 is assigned to an int variable,
we cannot get the expected result.
Fix the bug and add a unit test case to verify the fix.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ib045273238753e16755328805b38569909c8b83a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11836
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
The spdk_msg_mempool has a fixed size, which is not flexible
enough. The size of this memory allocation can now be changed via the
options given to the spdk_app_start function.
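A hedged sketch of how an application would use this (assuming the new
option is exposed through a msg_mempool_size field in spdk_app_opts):

    #include "spdk/event.h"

    static void
    app_run(void *arg)
    {
            spdk_app_stop(0);
    }

    int
    main(int argc, char **argv)
    {
            struct spdk_app_opts opts = {};

            spdk_app_opts_init(&opts, sizeof(opts));
            opts.name = "example";
            opts.msg_mempool_size = 1048575; /* assumed field name; larger pool */

            return spdk_app_start(&opts, app_run, NULL);
    }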
Signed-off-by: Alexis Lescouet <alexis.lescouet@nutanix.com>
Change-Id: I1d6524ab8cf23f69f553aedb0f5b0cdc9dde374b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11635
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
We will use the nvmf library's CSTS.CFS instead, so that the client can
get this error status.
Change-Id: I42c248a7333d1f9c940bb29135c887a61c906bd4
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11676
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Thanos Makatos <thanos.makatos@nutanix.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Plumbing for flags was added in prior patches. This patch
introduces and respects the relevant flags for use with PMEM,
aka durable memory, through the accel_fw, IDXD, IOAT and SW
modules.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I792f31459e061d220965feced60e0c236d819a68
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9455
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This patch is just plumbing the flags param. Use of it for PMEM
will come in upcoming patches.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I620df072aaad3f8062a0312bbea3da1bc3f911b9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9281
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
- Continue init of the other crypto devices (mlx5) after a failure of
rte_vdev_init(AESNI_MB) in vbdev_crypto_init_crypto_drivers(). AESNI_MB
simply may not be enabled in DPDK because it requires IPSec_MB >= 1.0
installed in the system. This reproduces with the --with-dpdk=dpdk/install
option, when the target DPDK is built without control of the IPSec_MB
version from the SPDK side.
- Updated crypto_ut to test the new error handling behavior of
rte_vdev_init(AESNI_MB) in vbdev_crypto_init_crypto_drivers().
Signed-off-by: Yuriy Umanets <yumanets@nvidia.com>
Change-Id: Icd4db8877afe87db8166c40d6e7b414cd43c9c25
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11624
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
- Switched to using rte_mempool for mbufs instead of spdk_mempool. This
allows using the rte pkt_mbuf API, which properly handles the mbuf fields
we need for mlx5, so we don't have to set them manually when sending
crypto ops.
- Use rte_mempool *g_mbuf_mp in the vbdev crypto unit tests and add the
mocking API code.
- crypto_ut updated to follow the pkt_mbuf API rules.
Signed-off-by: Yuriy Umanets <yumanets@nvidia.com>
Change-Id: Ia5576c672ac2eebb260bfdbb528ddb9edcd8f036
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11623
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
This reverts commit 3bacd6653d.
Change-Id: I8dbaffc9f50cf9627720667644496cdaf4e81c3f
Signed-off-by: paul luse <paul.e.luse@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11723
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
According to the NVMe over Fabrics spec, the number of SGLs supported by
the controller is reported in MSDBD. But it is also implicitly limited by
the command capsule size (IOCCSZ), since SGLs are passed in the capsule.
This patch adjusts max_sges to the capsule size if required. The adjustment
to MSDBD is also moved to the transport layer because it is a fabrics-specific
parameter and is not valid for the PCIe transport.
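A hedged sketch of the adjustment (constants per the spec: IOCCSZ is in
16-byte units, the SQE occupies 64 bytes of the capsule, and each SGL
descriptor is 16 bytes; treating MSDBD == 0 as "no limit" is an
assumption here, and this is not the exact patch code):

    #include "spdk/nvme_spec.h"

    static uint32_t
    limit_max_sges(uint32_t max_sges, uint32_t ioccsz, uint32_t msdbd)
    {
            uint32_t in_capsule_sges = (ioccsz * 16 - sizeof(struct spdk_nvme_cmd)) /
                                       sizeof(struct spdk_nvme_sgl_descriptor);

            if (msdbd != 0 && msdbd < max_sges) {
                    max_sges = msdbd;
            }
            if (in_capsule_sges < max_sges) {
                    max_sges = in_capsule_sges;
            }
            return max_sges;
    }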
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I44918eb949345c61242ca50a524d21d04b6ac058
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11669
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
For the benefit of forthcoming vfio-user changes, register the poll
group poller prior to calling the transport create callback, and pass in
a pointer to the poll group itself.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Idbc24126c9d46f8162e4ded07c5a0ecf074fc7dd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10718
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
With async connect, we need to avoid the case
where the initiator is sending the icreq, and
meanwhile the application submits enough I/O
such that the request objects are exhausted, leaving
none for the FABRICS/CONNECT command that we need
to send after the icreq is done.
So allocate an extra request, and then use it
when sending the FABRICS/CONNECT command, rather
than trying to pull one from the qpair's STAILQ.
Fixes issue #2371.
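Roughly, with hypothetical field names (the actual transport code
differs in details):

    /* Allocate one request beyond the negotiated queue depth and keep
     * it off the free STAILQ, reserved for FABRICS/CONNECT. */
    tqpair->reqs = calloc(num_entries + 1, sizeof(*tqpair->reqs));
    tqpair->connect_req = &tqpair->reqs[num_entries];

    /* When the icreq completes, build CONNECT on the reserved request
     * instead of popping from the free list, which may be empty. */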
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: If42a3fbb3fd9d863ee48cf5cae75a9ba1754c349
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11515
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
This register now controls the number of read buffers allocated to the
group, but it is only valid to set it if the device has indicated that
it supports it. Further, the default value is what we want anyway, so we
can skip setting it altogether.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: Ic54672ea6cb16acc7613860e36d9f7033048bd98
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11484
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
All of the other structs and unions spell out register, so match the
style.
Change-Id: Ie502e80206305037d1518a1db590d89b7479abb4
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11433
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
SPDK can submit more commands to a remote NVMe-oF target than allowed by
the negotiated queue size. SPDK submits up to SQSIZE commands, but only
SQSIZE-1 are allowed.
Here is a relevant quote from NVMe over Fabrics rev.1.1a ch.2.4.1
“Submission Queue Flow Control Negotiation”:
If SQ flow control is disabled, then the host should limit the number
of outstanding commands for a queue pair to be less than the size of
the Submission Queue. If the controller detects that the number of
outstanding commands for a queue pair is greater than or equal to the
size of the Submission Queue, then the controller shall:
a) stop processing commands and set the Controller Fatal
Status (CSTS.CFS) bit to ‘1’ (refer to section 10.5 in the NVMe Base
specification); and
b) terminate the NVMe Transport connection and end the association
between the host and the controller.
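In other words (a sketch; the names are illustrative):

    /* With SQ flow control disabled, keep strictly fewer than SQSIZE
     * commands outstanding, i.e. at most SQSIZE - 1. */
    uint32_t max_outstanding = sqsize - 1;

    if (qpair->num_outstanding_reqs >= max_outstanding) {
            /* queue the request locally instead of posting it */
    }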
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Ifbcf5d51911fc4ddcea1f7cde3135571648606f3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11413
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
According to the NVMe over Fabrics specification (rev. 1.1a), the
HSQSIZE sent in the RDMA_CM_REQUEST private data (ch. 7.3.6.4) shall be
the same as the SQSIZE later sent in the Connect command (ch. 3.3).
The SPDK NVMe RDMA initiator adjusts SQSIZE to the CRQSIZE received from
the target in the RDMA_CM_ACCEPT private data. The target is allowed to
send CRQSIZE < HSQSIZE if RNR retries are used, so it is possible that
the SQSIZE sent by SPDK will be lower than the previously sent HSQSIZE.
There are targets that validate this match, and they reject the
connection from SPDK.
The Linux kernel NVMe initiator doesn't perform such adjustments and
connects to such targets without issue.
This patch aligns SPDK behavior with the specification and the Linux
kernel implementation.
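Conceptually (a sketch with illustrative names; both fields carry 0's
based values per the spec):

    /* Advertise HSQSIZE in the RDMA_CM_REQUEST private data... */
    request_priv.hsqsize = num_entries - 1;

    /* ...and send the same value as SQSIZE in the Connect command,
     * even if the target replied with CRQSIZE < HSQSIZE. */
    connect_cmd.sqsize = request_priv.hsqsize;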
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I01968d1c07d284396fa5939932d85841351d7a45
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11350
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Valgrind reports errors about uninitialised values created by a stack
allocation when running unittest_accel and unittest_nvme_rdma.
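The fix pattern is simply to initialize the stack objects before use (a
generic sketch; the exact structures flagged by valgrind live in the two
unit tests):

    /* Zero stack-allocated structures before handing them to the code
     * under test so no uninitialised bytes are read. */
    struct some_request req;

    memset(&req, 0, sizeof(req));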
Change-Id: I4b48b472cc7c189cbcaf8ca772830a23118e7e17
Signed-off-by: Jaylyn Ren <jaylyn.ren@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10559
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
It shall be executed on the ctrlr's thread, not the subsystem's.
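A minimal sketch of the intended dispatch, assuming the ctrlr keeps a
pointer to its spdk_thread and that handler_fn/ctx stand in for the real
message and context:

    /* Hop to the controller's thread before touching ctrlr state,
     * rather than running the handler on the subsystem's thread. */
    spdk_thread_send_msg(ctrlr->thread, handler_fn, ctx);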
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: I58c60525191085d3d6a583862ba5d71ea90940c7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11105
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
In many cases, addressing bdevs by their UUIDs is easier than using
their names, which can be somewhat arbitrary. For instance, the NVMe
bdev builds a name by adding the n{NSID} suffix to the controller's
name, while the UUID is filled with the NGUID (if available).
The UUID alias is stored in the form defined by RFC 4122, meaning five
hyphen-separated groups of lower-case hexadecimal characters. It's
important to note that the bdev layer uses case-sensitive name
comparison, so the user needs to use the same textual UUID
representation.
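For example (a sketch; the UUID string is only an illustration):

    /* The UUID alias is accepted anywhere a bdev name is, but lookup is
     * case-sensitive, so the lower-case RFC 4122 form must be used. */
    struct spdk_bdev *bdev;

    bdev = spdk_bdev_get_by_name("f81d4fae-7dec-11d0-a765-00a0c91e6bf6");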
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I8b112fb81f29e952459d5f81d97fdc7a591730f8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11395
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
bdev_multi_allocation tries to write four characters, an integer between
0 and INT_MAX, and a nul byte into a 10-character buffer. That requires
at least 15 characters (4 + up to 10 digits + 1).
This leads to build failures with "make CONFIG_DEBUG=n CONFIG_UBSAN=y".
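A sketch of the sizing; the prefix and variable names are hypothetical:

    /* 4-character prefix + up to 10 digits for an int in [0, INT_MAX]
     * + terminating nul byte = at least 15 bytes. */
    char name[16];   /* previously a 10-byte buffer */

    snprintf(name, sizeof(name), "Null%d", i);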
Change-Id: I8cb9fd4ede31ae24809e4a04fd60a67dae3a0ac4
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11261
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
An spdk_bs_opts structure is sometimes partially initialized due to
using sizeof(opts) (struct spdk_blob_opts, 64 bytes) rather than
sizeof(bs_opts) (struct spdk_bs_opts, 72 bytes).
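The pitfall, reduced to a sketch:

    struct spdk_blob_opts opts;
    struct spdk_bs_opts bs_opts;

    /* Buggy: clears only sizeof(struct spdk_blob_opts) bytes of
     * bs_opts, leaving its trailing members uninitialized. */
    memset(&bs_opts, 0, sizeof(opts));

    /* Fixed: size the operation by the object being initialized. */
    memset(&bs_opts, 0, sizeof(bs_opts));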
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Iaaa89bb419f66969d0888f49f8991c35b3dc5ea4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11268
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
While not documented as such, spdk_bdev_module_claim_bdev() has always
allowed a bdev that is opened read-only to remain read-only when
claimed. This occurs when NULL is passed in place of an spdk_bdev_desc.
This change updates the function's documentation to match the
implementation and adds a unit test to ensure the current behavior
remains.
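For reference, the documented behavior in sketch form (error handling
omitted; the bdev name, event callback, and module are placeholders):

    /* Open read-only, then claim without passing the descriptor: the
     * claim succeeds and the bdev is not forced writable. */
    spdk_bdev_open_ext("Malloc0", false, event_cb, NULL, &desc);
    bdev = spdk_bdev_desc_get_bdev(desc);
    rc = spdk_bdev_module_claim_bdev(bdev, NULL, &my_bdev_module);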
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ief26de60e4408bfe1aa60b7a4e1d8adf273470b6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11267
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>