Without this, it does not think there is a driver available.
Change-Id: I0b8b42374e0ed82abb22bf27e0b8907bb03c61f6
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12641
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Currently, when running the vhost-user target, spdk_top always shows the
reactor as 100% busy regardless of whether it is actually busy. Fix this
by reporting the real load status.
Signed-off-by: Rui Chang <rui.chang@arm.com>
Change-Id: I610a8c2f4e74f46bd56955d31284372c775507ed
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12647
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
These functions accept an optional spdk_blob_ext_io_opts structure.
If this structure is provided by the user, then the readv/writev_ext
ops of the base dev will be used in the data path.
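A rough usage sketch of the new read call (blob, channel, iov, callback
and ext_opts are illustrative locals; passing NULL instead of &ext_opts
keeps the regular, non-ext path):

    spdk_blob_io_readv_ext(blob, channel, iov, iovcnt, offset, length,
                           read_done_cb, cb_arg, &ext_opts);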
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I370dd43f8c56f5752f7a52d0780bcfe3e3ae2d9e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11371
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
When a snapshot is created, the new blob is loaded and
examined for the BLOB_SNAPSHOT xattr in the blob_load_backing_dev
function. At this step there is no such xattr, so a zeroes
back_bs_dev is created. Later the snapshot inherits back_bs_dev
from the original blob, so the previously created back_bs_dev can
be lost.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I90cc9b02f56598d8c5c7fe00409f571fba0aa91a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11384
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Introduce the spdk_blob_ext_io_opts structure, which
is used in the new *_ext functions.
The zeroes dev is updated with an implementation of
readv_ext which uses the memory domain's memzero
or a regular memset().
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Id94542196eff999827bf00591fd43804256fccb4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11369
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
In the following patches, nvme_ctrlr_process_init() will be used to
disable the controller when disconnecting the admin qpair for PCIe
transport. In this case, we will have to exit nvme_ctrlr_process_init()
after CSTS.RDY is 0. However, spdk_nvme_ctrlr_reset() and
spdk_nvme_ctrlr_reconnect_poll_async() have to continue
nvme_ctrlr_process_init() until the controller becomes ready.
To differentiate stop and continue clearly, add a new state
NVME_CTRLR_STATE_DISABLED to enum nvme_ctrlr_state.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ic0a5fb7114d4eeb1cefec28bc404184768fb0a96
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12613
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: I3b75eea83bd7d700d20a6189e8fb6d1f066dc9b4
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12603
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: I32dd9960bc397244d8e3d0a384fc8b67e907bf68
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12601
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Move the check of whether opts needs a copy into its own function.
Move the check of whether opts is valid into its own function.
Signed-off-by: Jonas Pfefferle <pepperjo@japf.ch>
Change-Id: Ie006ddd0b642eb97aaa3ab13890800322dee7a42
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12571
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
vfio_user/host is the PCI abstraction over the vfio-user transport, i.e.
its client library. We will add a target library to emulate PCI devices
in the next patch, so the new lib/vfio_user contains two libraries: one
for the host, the other for the target.
Change-Id: I9bb40043105525654360691d6db62e4958384e7f
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12314
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Clarify via a variable name that we're dealing with the admin CQ
specifically.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I032f6b27e2d75bffb9d95481f177ce0c3655550c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12556
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Thanos Makatos <thanos.makatos@nutanix.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: I6931e80c836b568dec8989dad2a7be4e112c42b4
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12577
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Fix the ocf test script that was still using the
deprecated get_bdevs RPC name - change it to
bdev_get_bdevs.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I7f8caedc250b80503671a0236694181613f63860
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12553
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I2c9918ed0296f644b0728c5106c47d93e3c7ec30
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12552
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Constantly polling the socket degrades performance significantly.
Polling the socket at a much lower frequency, every 1ms, is good enough
for now.
Fixes #2494
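A minimal sketch of the throttling described above (the endpoint struct
and its last_poll field are illustrative; spdk_get_ticks() and
spdk_get_ticks_hz() are the SPDK tick helpers, poll_sock() is a
placeholder for the actual socket poll):

    uint64_t now = spdk_get_ticks();

    if (now - endpoint->last_poll >= spdk_get_ticks_hz() / 1000) {
            endpoint->last_poll = now;
            poll_sock(endpoint);
    }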
Signed-off-by: Thanos Makatos <thanos.makatos@nutanix.com>
Co-authored-by: John Levon <john.levon@nutanix.com>
Change-Id: I4a7d35c45ece863b9df756324c23f41736df49f8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12494
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
When using the SPDK nvme driver in multi-process mode,
and multiple processes detach from their devices
(i.e. at application exit) very close to each other,
there are cases where a process can have started its
own detach, and then get two remove notifications - one
from its own detach, and another from another process'
detach.
So we need to remove the assertion in pci_device_fini() that the
removed flag has not already been set.
Fixes #2456.
Tested using a modified version of the
nvme_multi_secondary() function in test/nvme/nvme.sh
that starts several additional perf applications.
The test consistently failed without this patch,
and passes every time with this patch.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I91e16985cdc4a463aaac2c45096bb967aab85560
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12454
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
We enable multiple engines by:
* getting rid of the globals that point to the one available HW
and one available SW engine
* adding a submit_tasks() entry point for the SW engine so that
it is treated like any other engine allowing us to just call
submit_tasks() to the assigned engine for the opcode instead of
checking what is supported
* changing the definition of engine capabilities from
"HW accelerated" to simply "supported"
* during init, use a global (g_engines_opc) that contains engines
and is indexed by opcode so we know what the best engine is for each
opcode
* future patches will add RPCs to override engine priorities or
specifically assign opcode(s) to an engine.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I9b9f3d5a2e499124aa7ccf71f0da83c8ee3dd9f9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11870
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If we don't set the cid before failing the misordered
command, we use some other random cid, causing the
initiator to think the wrong command was completed.
Fixes #2481.
For this issue, the target was completing a
previously submitted AER, not the fuzzed fused
command. The initiator would then submit another
AER to replace the completed one, but the target
complained that the initiator sent too many AERs
since the target didn't really know it had completed
an AER so hadn't adjusted its num_aer count.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I4bd66f147086b262d0e48b8399d237e5ed3c2651
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12452
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
We have seen reports that Huawei SSDs can't handle hardware
SGLs properly; they require additional alignment. So add
a quirk here to force Huawei SSDs to use PRP instead.
Fixes #2489.
Change-Id: I20a57e754bc6ff8666d681191994818f2192decc
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12405
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: wanghailiang <hailiangx.e.wang@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
spdk_bdev_get_acwu() returns a 1-based number, so we need
to subtract 1 from it before assigning the value to
nsdata->nacwu.
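In code form (bdev and nsdata are assumed locals in the identify
namespace path), the fix is just the subtraction:

    nsdata->nacwu = spdk_bdev_get_acwu(bdev) - 1;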
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I32708b28a35670cba6013a48b79389fa48226285
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12399
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This does not actually need a doubly linked list; a singly linked list is
enough. That frees up 4 more bytes in the op for other uses.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I8c2a30de175b42815afd0a3ba3c694aef2f35882
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12258
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
If we don't find a completion, break out of the loop at the top. This
removes a level of indentation but most importantly makes it very clear
that we only remove elements from ops_outstanding from the front of the
list.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I5e8784a5af5449c14ff7015bc8e6062e6aee6b4e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12257
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
The partition bdev checks if the underlying device supports
the io type and sends the bdev_io directly down to the bdev.
This patch adds missing compare and compare&write io types
to the partition bdev.
Signed-off-by: Jonas Pfefferle <pepperjo@japf.ch>
Change-Id: Ice7e5c0332ce7e564bad2bb8d7f4bb1d535388c1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12390
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The bdevio test app has some test cases verifying
that write zeroes commands are handled correctly,
but using knowledge of the ZERO_BUFFER_SIZE that
the bdev library uses for splitting larger write
zeroes commands. Instead of hardcoding that 1MB
value in bdevio.c, have bdevio.c use ZERO_BUFFER_SIZE
directly instead. But this requires moving
ZERO_BUFFER_SIZE into bdev_internal.h and having
bdevio.c include that file.
We do this instead of putting ZERO_BUFFER_SIZE in
the public API because we don't want users to
make any kind of dependencies on this value.
While here, also rename the tests that are using this
value, so that the test names don't include any reference
to the specific size of this bdev-internal zero buffer.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ia29d92a706cb1f86b4c29374dc2a9beccf679208
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12383
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
This patch adds process_virtio_blk_request(), which will be called
by virtio_blk transports to process incoming requests.
Meanwhile vhost_user_blk_request_finish() will be replaced with
a callback to the virtio_blk transport to notify it of the result.
blk_request_finish() should only be called as a direct result of
process_virtio_blk_request(), usually from within it.
Some error paths now call vhost_user_blk_request_finish() directly,
if vhost_user_process_blk_request() was not called yet.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I0cce22f15b922fe45f30fb659c384b6e836def4c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9537
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
struct spdk_vhost_blk_task shall contain only fields
that are relevant to the generic virtio blk layer. It is moved to
the vhost_internal.h header to allow for broader use.
This is in contrast to spdk_vhost_user_blk_task, which is
specific to the vhost_user transport.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I8438d3e6dbca816855f55ee998a632f50acde045
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12282
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Later in the series, spdk_vhost_blk_task will not contain
vhost_user specific fields. As such the generic part of processing
I/O will not be able to refer to those.
References to req_idx are replaced with a pointer to the task, and the
bvsession reference when queueing I/O is removed.
Note that vhost_user fields are accessible in the final callback
for the I/O, and will be printed. See blk_request_finish().
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I100d60968146da778bd6bf4fbcf2a2694d3be6e6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12335
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Previously blk_request_queue_io() and blk_request_resubmit()
relied on the vdev and channel contained in the task or bvsession
structures.
In an effort to make the I/O processing for virtio blk not reliant
on vhost_user, this patch caches the vbdev and io channel submitted
in process_blk_request().
Later in the series, vhost_user structures will be separated out from
spdk_vhost_blk_task.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: If1ea38a77af8fcfee12054f5857a6db2db2093c6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12334
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
ACWU is a 0's based value, and our intent is to
report that our target's ACWU is 1 block. This means
we should report ACWU as 0, not 1.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I6ad0606be07fd38bc6c2e3a8e4bb78225b3dfadc
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12385
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
The spdk_rmb() in nvmf_vfio_user_poll_group_poll() is unnecessary: we
already have a read barrier for SQ tail updates at the per-SQ level, so
this doesn't add anything.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I88cddd968f4a949640754526e19cb869d9fb31af
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12381
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
There's no need to call spdk_rmb() in nvmf_vfio_user_sq_poll() unless we
actually found that the tail has advanced.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I778835c527409764c3db78459b2aa76420cc0105
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12378
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Thanos Makatos <thanos.makatos@nutanix.com>
When sending the first part of a fused command, we set the
first_fused_submitted flag so that we don't ring the doorbell
immediately. When the second part is sent, we ring the doorbell for
both commands.
However, this doesn't work well when we use the option to delay ringing
the doorbell. We send both parts, then later when we try to ring the
doorbell, we don't because of the first_fused_submitted flag from the
first command.
Replace this mechanism by keeping track of the last submitted fuse.
Change-Id: Ia4ac9b3ce9c319ee4c7e42f86eadda93dac85fca
Signed-off-by: Alex Michon <amichon@kalrayinc.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12182
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
We need to keep track of the shadow doorbell buffer locations, and make
sure to re-initialize on resume.
Co-authored-by: Thanos Makatos <thanos.makatos@nutanix.com>
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: If3ba456fb35f6f6199e4ff14cec1aad96775f71a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12237
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Thanos Makatos <thanos.makatos@nutanix.com>
process_blk_request() accepted an spdk_vhost_blk_session argument,
which is specific to vhost_user. In preparation for making
this function usable outside of vhost_user, replace this argument
with struct spdk_io_channel and spdk_vhost_dev.
For this purpose vhost_user_process_blk_request() was created to
translate from spdk_vhost_blk_task to the generic arguments.
to_blk_dev() was moved further up so it can be more commonly used
throughout the file.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Ie61a1ae2a615c4f1a95601e533b9eec51998cd07
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12333
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
In the qemu virtualization environment, when the virtio driver of the
Windows image is upgraded to virtio-win 0.1.185 and virtio uses legacy
mode, qemu's host_features will enable VIRTIO_F_ANY_LAYOUT during the
vhost negotiation stage, and guest_features will also enable this
feature. Because qemu's feature set is a superset, the feature will be
passed to the SPDK side through the set-features process, which leads
to "VHOST_CONFIG: Processing VHOST_USER_SET_FEATURES failed.
VHOST_CONFIG: vhost message handling failed.". So we enable
this bit for SPDK vhost.
Signed-off-by: suhua <suhua1@kingsoft.com>
Signed-off-by: lizhaoxin <lizhaoxin1@kingsoft.com>
Change-Id: I27323bf5a03dce774c8a74cfb070ddd43be05534
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12300
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
There were still a couple of places not using the standard formatting for
qid.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: If96c3f6d762128b0f274e2c4e9eebf4e80e35139
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12346
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This patch adds an extra spdk_thread_send_msg() call to destroy a qpair
to make sure that it isn't freed from the context of a socket write
callback. Otherwise, spdk_sock_close() won't abort pending requests,
causing their completions to be executed after the qpair is freed.
Fixes #2471
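A minimal sketch of the idea, not the exact patch (the qpair type and the
teardown call are placeholders; spdk_thread_send_msg() and
spdk_get_thread() are the real SPDK thread APIs):

    static void
    _qpair_destroy_msg(void *ctx)
    {
            struct spdk_nvmf_tcp_qpair *tqpair = ctx;

            tcp_qpair_destroy(tqpair);      /* placeholder for the real teardown */
    }

    /* from the socket write callback, defer instead of destroying inline */
    spdk_thread_send_msg(spdk_get_thread(), _qpair_destroy_msg, tqpair);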
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia510d5d754baccca1e444afdb10696ab9b58e28b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12332
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
There is a race condition when a bdev is unregistered while resets are
submitted from the upper layer very frequently.
spdk_io_device_unregister() may fail because it is called while
spdk_for_each_channel() is being processed:
  spdk_io_device_unregister io_device bdev_Nvme0n1 (0x7f4be8053aa1)
  has 1 for_each calls outstanding
To avoid this failure, defer calling spdk_io_device_unregister() until
the reset completes if a reset is in progress when unregistration is
ready to be done, and then have reset completion call
spdk_io_device_unregister() later.
A bdev cannot be opened if it is already deleting, so we do not need
to hold the mutex.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ida1681ba9f3096670ff62274b35bb3e4fd69398a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12222
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Now controller initialization with the RDMA
transport is fully async.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I26e857740d3137d0b0e987facc81fc5f6ef81f2b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10756
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Transport specific qpair_abort_reqs() sets SC to SC_ABORTED_SQ_DELETION.
However, nvme_qpair_abort_queued_reqs() sets SC to SC_ABORTED_BY_REQUEST
even if its call was not requested by the upper layer.
Change nvme_qpair_abort_queued_reqs() to set SC to SC_ABORTED_SQ_DELETION
for consistency.
nvme_qpair_abort_queued_reqs() is used to abort queued requests that
were sent while the adminq was connecting. SC_ABORTED_SQ_DELETION is
not a bad choice even for that case.
This change is required for the NVMe bdev module to be resilient to I/O
errors. The NVMe bdev module does not retry I/O if SC is
SC_ABORTED_BY_REQUEST.
SC is set to SC_INTERNAL_DEVICE_ERROR if a request fails to be submitted
to a qpair by the generic qpair layer. We could change it to
SC_ABORTED_SQ_DELETION as well but we keep this for now.
SC_INTERNAL_DEVICE_ERROR is also retriable for the NVMe bdev module.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I7d8d5e97b222fe9275afc4fed024c1654c9579a2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12121
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
As per the NVMe specification, a host can identify two areas of guest
memory: one of which is used for the host-written doorbells, and one of
which contains event indexes. The host writes to the shadow doorbell
area, but also writes to the controller's BAR0 doorbell area if the
corresponding event index is crossed by the update. This avoids many
mmio exits in interrupt mode, where BAR0 doorbells are not directly
mapped into the guest VM, and greatly improves performance.
This isn't a useful feature when BAR0 doorbells are mapped into the VM,
so we explicitly disable support in that case.
NB: the Windows NVMe driver doesn't yet support this feature.
Although the specification says that the admin queues should also engage
in this behaviour, in practice no VM does, so we have to include some
hacks to account for this.
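The "crossing" test itself is small; a self-contained sketch of the
standard check (the helper name is ours, not SPDK's):

    static inline bool
    doorbell_needs_mmio(uint16_t event_idx, uint16_t new_idx, uint16_t old_idx)
    {
            /* true when event_idx falls in (old_idx, new_idx], with 16-bit wrap */
            return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old_idx);
    }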
Co-authored-by: John Levon <john.levon@nutanix.com>
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I0646b234d31fbbf9a6b85572042c6cdaf8366659
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11492
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
probed
This can cause a mismatch of kernel vs user driver and isn't allowed.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I9c572ea1fa1da89d7b41e31ab4719eec719fb50a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10588
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: John Kariuki <John.K.Kariuki@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The only place a batch can be created is by assigning it to the channel
now, so this isn't a mistake that can be made and the checks can all be
removed.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I915edb4f212c0751396554655ffe95ae3bb20cd6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11538
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This effectively means there is only ever a single batch being built at
a time, which simplifies a lot of the APIs.
Change-Id: Ifd66cd1ce6f6f0abe2011528dd862c5324213658
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11223
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
This lets us use it more widely.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I9c67be19020677fab3eafe05c1e0f91c3d04611d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12307
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The value of ack_timeout is calculated according to
the formula 2^(transport_ack_timeout) msec.
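In code form, the conversion above is just a shift (variable names are
illustrative):

    uint64_t ack_timeout_ms = 1ULL << opts.transport_ack_timeout;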
Signed-off-by: zhangduan <zhangd28@chinatelecom.cn>
Change-Id: I5a938635d70693ddd405fa5907555bb745b4df0f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12215
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This reduces a lot of casting.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: Ibc04f422858642d0e20c9b020bb6c5d1b70256fe
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11534
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
When a qpair is disconnected, any outstanding zero-copy requests are
freed to release their buffers before the qpair gets destroyed.
However, if there is a PDU being sent to the host as part of this
request (e.g. C2HData/R2T), we need to wait until that write is done
before freeing the request to avoid freeing it twice.
Fixes #2445
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I2a6e82f26a4f011dfd18c55c821e9039de7e584a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12255
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This makes the flag indicate whether there's an outstanding PDU write
for a given request. Additionally, it reduces the number of places we
need to update this flag.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Id7e587f84955b096c46bfbf88d4dd222214d4a6a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12254
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
This will make it possible to have some common handling in the request's
PDU write completion.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Icaff38da0e47dd93327e3d8f09edd9fdba8f532e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12253
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
When a request using zcopy is completed, it might have an unreleased
zcopy_bdev_io attached in three cases:
1) the request was a read,
2) the request was a failed write,
3) the qpair is being disconnected.
The last case was missing from the assertion.
Fixes #2425
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I5cbeaa198a1fd878c98caf148a0bc47060e35bca
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12263
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Now that IO qpairs can be created asynchronously, we need to make sure
that all the create IO CQ/SQ commands can be executed simultaneously.
It is pretty common to create multiple IO qpairs at the same time, e.g.
adding an NVMe bdev to an nvmf subsystem will create an IO qpair on each
poll group. In that case, if the number of cores exceeds the size of the
admin queue (actually it can be even lower due to outstanding AERs), we
might run out of nvme_requests on the admin queue.
The chosen minimum value for the admin queue size, 256, should be enough
to cover most cases.
Fixes #2465
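A hedged sketch of enforcing that minimum (the exact constant and where
it is applied may differ in the patch; spdk_max() and
opts->admin_queue_size are real SPDK names):

    opts->admin_queue_size = spdk_max(opts->admin_queue_size, 256);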
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I55c59aef64f3fdb33f7b4824d3e9beb403602633
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12270
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The RDMA transport can disconnect a qpair asynchronously now.
Previously, we tried to release the resources of the qpair after it was
disconnected. However it did not work because it was done when deleting
the qpair. The admin qpair was not deleted in a ctrlr reset sequence.
This patch tries to satisfy the same aim again but in a different way.
Previously, we released the resources of the qpair before starting the
actual disconnection process. This patch releases the resources of the
qpair after the qpair is actually disconnected.
The related patches are:
b9518a5540eb09178a59
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Id6a814895a35b1589b781a91744ef872b42aaa69
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11783
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The code to handle the lingering qpair when deleting it was really
complicated.
The RDMA transport can connect or disconnect a qpair asynchronously.
So we can now fold the code that handles the lingering qpair into the
code that disconnects the qpair.
If the disconnected qpair is still busy, defer completion of the
disconnection until the qpair becomes idle.
If a poll group is not used, we can complete disconnection immediately
because the cq is already destroyed.
The related data and unit test cases are not necessary anymore, so
delete them in this patch.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ic8f81143fcad0714ac9b7db862313aa8094eeefb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11778
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Include delayed disconnect/connect retries, with a finite number of
attempts, in the state machine of asynchronous qpair connection.
We do not need to call back to the common transport layer, but we need
to do the following: clear rqpair->cq before starting disconnection if
the qpair uses a poll group, and clear qpair->transport_failure_reason
after the qpair is disconnected.
Additionally, locate the new state STALE_CONN before INITIALIZING
because the cq is not ready to use for the admin qpair when the state is
STALE_CONN.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ibc779a2b772be9506ffd8226d5f64d6d12102ff2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11690
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
The TCP transport already does this but it was not documented clearly.
The RDMA and PCIe transports now follow it and document it clearly.
Then we can check each qpair's state if
spdk_nvme_poll_group_process_completions() returns -ENXIO before
disconnected_qpair_cb() is called.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I2afe920cfd06c374251fccc1c205948fb498dd33
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11328
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Add three states, INITIALIZING, EXITING, and EXITED to the rqpair
state.
Add async parameter to nvme_rdma_ctrlr_create_qpair() and set it
to opts->async_mode for I/O qpair and true for admin qpair.
Replace all nvme_rdma_process_event() calls by
nvme_rdma_process_event_start() calls.
nvme_rdma_ctrlr_connect_qpair() sets rqpair->state to INITIALIZING
when starting to process CM events.
nvme_rdma_ctrlr_connect_qpair_poll() calls
nvme_rdma_process_event_poll() with ctrlr->ctrlr_lock if qpair is
not admin qpair.
nvme_rdma_ctrlr_disconnect_qpair() returns if qpair->async is true
or qpair->poll_group is not NULL before polling CM events, or polls
CM events until completion otherwise. Add comments to clarify why
we do it this way.
nvme_rdma_poll_group_process_completions() does not process submission
for any qpair which is still connecting.
Change-Id: Ie04c3408785124f2919eaaba7b2bd68f8da452c9
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11442
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This was added in patch 07526d85, back in March 2018.
This was before DPDK supported dynamic hugepage allocations.
Presumably this flag was added to reduce the amount of
memory lost due to mempool buffers that would otherwise
span an IOVA boundary (most typical with the IOMMU off, when
we are relying on physical addresses).
Removing it simplifies any code in SPDK that uses
mempool buffers for DMA operations, since it doesn't have
to worry about splitting buffers that span an IOVA
boundary - DPDK has already done it for us.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I49f6c1407fad02acae7e07c9dd00cb0449bd3554
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12277
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Both blk_request_finish() and invalid_blk_request()
accomplished the same thing, with variations in the handled
statuses and debug logs.
Consolidating those two into a single function will help
later on when replacing completion of request processing
with a single callback.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Iae7b93db01bfd98819b2bb8fad9e11afcdb3a459
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12196
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This patch adds vhost_blk_[get/put]_io_channel() to be used
by virtio_blk transports.
Functions related to vhost_user sessions were modified to
use them.
The dummy_io_channel reference is managed at the vhost_blk
layer and as such continues to use the spdk_[get/put]_io_channel()
APIs. The description is updated to reflect that it is not specific
to the vhost_user transport.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I6644198da83bfa0210c167e203d3875e96f1e7ea
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11101
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
The vhost_user_config_json() will be replaced with callback
to virtio_blk transport.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I6ea0ea38f505f0d354cd34ee5ab9cd3a858bd82e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9538
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The spdk_vhost_dev structure should only contain generic fields
that are to be used by the vhost, vhost_blk or vhost_scsi
layers.
The vhost_user backend can hold its properties in
spdk_vhost_user_dev, which is maintained within rte_vhost.
Both structures contain references back to each other.
The reference in spdk_vhost_dev is a void pointer to
allow future transports to keep the reference
to their own structures.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I68640c524426d885c20242146365ba242fa9df8e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11813
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
In a similar manner to what we do for other per-IO data structures
(cmds, cpls and bufs), use the conventional hugepage-based SPDK
allocation scheme for RDMA requests and receives.
Change-Id: I4c2e86e928106e78c053f24915e2a9ce1a200c78
Signed-off-by: Or Gerlitz <ogerlitz@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12273
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
To maximize cache locality, use LIFO and not FIFO when managing objects
which are used per IO, such as the RDMA receive element queue.
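As an illustration of the difference (the queue head and field names are
made up), freeing to the head and allocating from the head keeps the most
recently used, likely cache-hot, element in play:

    STAILQ_INSERT_HEAD(&rqpair->free_recvs, rdma_recv, link);  /* free: LIFO push */

    rdma_recv = STAILQ_FIRST(&rqpair->free_recvs);             /* alloc: pop head */
    STAILQ_REMOVE_HEAD(&rqpair->free_recvs, link);

whereas a FIFO (STAILQ_INSERT_TAIL on free) always hands back the coldest
element.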
Change-Id: Id8917558acc1bec29943fcbae6afe6b072bde6ac
Reported-by: Jim Harris <james.r.harris@intel.com>
Signed-off-by: Or Gerlitz <ogerlitz@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12272
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Due to the same reason as transport_ack_timeout for the
RDMA transport, the TCP transport also needs an ack timeout.
This timeout in msec will make the TCP socket wait for an
ack until it closes the connection.
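Purely as an assumption about how such a timeout can be expressed on
Linux (whether this is the exact mechanism used here is not confirmed),
TCP_USER_TIMEOUT bounds how long transmitted data may remain
unacknowledged before the connection is dropped:

    uint32_t timeout_ms = ack_timeout;
    setsockopt(fd, IPPROTO_TCP, TCP_USER_TIMEOUT, &timeout_ms, sizeof(timeout_ms));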
Signed-off-by: zhangduan <zhangd28@chinatelecom.cn>
Change-Id: I81c0089ac0d4afe4afdd2f2c7e5bff1790f59199
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12214
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
In support of upcoming patches and to greatly simplify things,
the capabilities enum which held bit positions for each opcode
has been removed. Only the opcodes enum remains and thus only
opcodes are used throughout. For the capabilities bitmap a helper
function is added to convert from an opcode to a bit position. Right
now it is used in the IO path but in upcoming patches that goes away
and the conversion is only done at init time.
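A hedged sketch of what such a helper can look like (the name is
illustrative, not necessarily the one added here):

    static inline uint64_t
    accel_opc_to_bit(int opcode)
    {
            /* the opcode's ordinal doubles as its bit position */
            return 1ULL << opcode;
    }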
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: Ic4ad15b9f24ad3675a7bba4831f4e81de9b7bc70
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11949
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
spdk_for_each_bdev() can be applied simply to rpc_bdev_get_bdevs()
because the callback function rpc_dump_bdev_info() is synchronous.
spdk_for_each_bdev() cannot be applied simply to rpc_bdev_get_iostat().
The factored-out callback function _bdev_get_device_stat() is
asynchronous. Add a desc pointer to struct spdk_bdev_io_stat and open
the bdev before and close it after executing spdk_bdev_get_device_stat().
Replace spdk_bdev_get_by_name() by spdk_bdev_open_ext() when a bdev name
is specified.
spdk_bdev_register() checks if the name of each bdev is set and each bdev
is opened while collecting its stats now. Hence it is not possible that
spdk_bdev_get_name() returns NULL. Simplify rpc_bdev_get_iostat_cb()
based on this fact.
Furthermore, we want to fail the RPC for all failures. The callback
function passed to spdk_bdev_get_device_stat() is executed after stack
unwinding if successful. Defer starting the RPC response until it is
ensured that all spdk_bdev_get_device_stat() calls will succeed or there
is no bdev.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I7b036d6d707c49d19c8922a159b12b5b5ce7ca41
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12089
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
OpenSSL 3.0 deprecated the MD5_xxx APIs, so switch
the md5 code in the iscsi library to use the EVP
APIs recommended by OpenSSL instead.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic5e3cd6e30ebc8b027f0715434cc3be045f1b770
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12240
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Some SeaBIOS versions do not align virtio_blk_outhdr
on an 8-byte boundary, causing ubsan failures. To be safe,
let's just make sure on our end that we only access a
properly aligned structure by copying this small (16-byte)
data structure to a local structure variable.
Fixes issue #2452.
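The approach boils down to a small copy (the source pointer name is
illustrative):

    struct virtio_blk_outhdr req_hdr;

    memcpy(&req_hdr, desc_payload, sizeof(req_hdr));
    /* use req_hdr.type / req_hdr.sector from the aligned local copy */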
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iacad72c3a1759fb8dc5ba411272a34d93ef2a6fc
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12238
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Only the FDs are used for passing them to another process,
so we can unlink the files after creation.
Here we only unlink the files created in vfio-user;
there is still one file created via libvfio-user,
which will be fixed via
https://github.com/nutanix/libvfio-user/issues/660.
Partly fixes issue #2449.
Change-Id: Ie27640e0cb85f44596e9d0ad5a2b67adf0419f5c
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12195
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
We can use the lib/vfio-user API to set up the BAR0 doorbells;
the existing implementation is redundant.
Change-Id: Ib880d167c84c6b8482bf1a35559a34c939f6a02d
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12211
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The config_bus_number is an offset within the config space reserved for
the devices behind the VMD, while bus_number refers to the actual bus
number assigned by VMD, which depends on the VMCAP and VMCONFIG
registers. So, to access the mapped config space we have to use
config_bus_number. We didn't do that when resetting the root ports,
which could lead to segfaults if these values were different, as we'd
access unmapped memory.
Fixes #2451
Change-Id: I4e7bbb81400462284014565099bec98f6171c8c9
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12208
Reviewed-by: Tom Nabarro <tom.nabarro@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Previously spdk_vhost_dev_backend held callbacks
for vhost_blk and vhost_scsi functionality, along
with ones that are called by the vhost_user backend.
This patch separates out those callbacks into two
structures:
- spdk_vhost_dev_backend - to be implemented by vhost_blk
and vhost_scsi
- spdk_vhost_user_dev_backend - is only implemented by
vhost_user backend; callbacks for session management
specific to that transport
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I348090df5dddeb2b1945b082b85aec53d03c781b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11812
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Previously g_packed_ring_recovery was set globally.
Setting that during controller creation would affect
all previously created controllers.
This is now set on a per-controller basis and only
enabled if the packed ring feature is used.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Idcc7231471446c805154648ab835a6af78f6543c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12040
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Separate parsing of generic vhost RPC params from device specific ones;
this solution allows creating various devices which share
common parameters.
Change-Id: I50b1a89a8260fb1394880a750591e95539995288
Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12026
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
None of the current DEBUGLOGs are in any kind of
performance critical code path. Making them INFOLOG
means that we can enable them even in release builds
to get additional information if needed.
(Note: planning to do this with other libraries and
modules as well.)
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I61765fab843a06c36ac1979151589e8f57fea76e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12209
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
No functional change; this just makes the poll code a little easier to
read.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: If6d1dcd940ed5b461856b535b1bf01c4efa8612a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12076
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
RDMA READs can cause cmds to be submitted to the target
layer in a different order than they were received
from the host. Normally this is fine, but not for
fused commands.
So track fused commands as they reach
nvmf_rdma_request_process(). If we find a pair of
sequential commands that don't have valid FUSED settings
(i.e. NONE/SECOND, FIRST/NONE, FIRST/FIRST), we mark
the requests as "fused_failed" and will later fail them
just before they would be normally sent to the target
layer.
When we do find a pair of valid fused commands (FIRST
followed by SECOND), we will wait until both are
READY_TO_EXECUTE, and then submit them to the target
layer consecutively.
This fixes issue #2428 for RDMA transport.
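A simplified sketch of the pairing rule (not the actual
nvmf_rdma_request_process() code; fused_failed is named as in the
description above):

    if (prev_fuse == SPDK_NVME_CMD_FUSE_FIRST &&
        cur_fuse != SPDK_NVME_CMD_FUSE_SECOND) {
            prev_req->fused_failed = true;   /* FIRST not followed by SECOND */
    }
    if (cur_fuse == SPDK_NVME_CMD_FUSE_SECOND &&
        prev_fuse != SPDK_NVME_CMD_FUSE_FIRST) {
            req->fused_failed = true;        /* SECOND without a preceding FIRST */
    }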
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I01ebf90e17761499fb6601456811f442dc2a2950
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12018
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
To execute a callback function for each registered bdev or each
unclaimed bdev, add new public APIs, spdk_for_each_bdev() and
spdk_for_each_bdev_leaf().
These functions guard against races by opening each bdev before
and closing it after executing the provided callback function.
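A small usage sketch (assuming the callback signature is int (*)(void *ctx, struct spdk_bdev *bdev) and that a non-zero return value stops the iteration):

    #include <stdio.h>
    #include "spdk/bdev.h"

    static int
    example_print_bdev(void *ctx, struct spdk_bdev *bdev)
    {
        printf("bdev: %s\n", spdk_bdev_get_name(bdev));
        return 0;   /* returning non-zero would stop the iteration */
    }

    static void
    example_list_bdevs(void)
    {
        /* Each bdev is opened before and closed after the callback,
         * so it cannot disappear while the callback runs. */
        spdk_for_each_bdev(NULL, example_print_bdev);

        /* Same, but only for bdevs not claimed by another module. */
        spdk_for_each_bdev_leaf(NULL, example_print_bdev);
    }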
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I59b702ffec7b4fc5e9779de5a3a75d44922b829b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12088
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Bdev open/close will be done for each bdev when traversing the bdev
list. This patch is a preparation.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I2486bd823953fe020ed6106844877e1cf49d8a0d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12126
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
spdk_io_device_unregister() sends a message to invoke its callback.
So, to make the following patches easier, consolidate the
g_bdev_mgr.mutex unlocks at the end of spdk_bdev_close().
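The shape of the change, heavily simplified (this is not the actual spdk_bdev_close() body): every exit path funnels through a single unlock at the end, which makes it easy to add work before the unlock in later patches.

    #include <pthread.h>

    static pthread_mutex_t example_mutex = PTHREAD_MUTEX_INITIALIZER;

    static void
    example_close(int needs_extra_work)
    {
        pthread_mutex_lock(&example_mutex);

        if (needs_extra_work) {
            /* paths that previously unlocked and returned early ... */
        }

        /* ... now all fall through to one unlock at the end. */
        pthread_mutex_unlock(&example_mutex);
    }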
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ib3b5c72be06e764918da30d7aa9fbc2ccd33956e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12125
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Bdev open/close will be done for each bdev when traversing the bdev
list. This patch is a preparation.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I4e4fe6f1248176631a74c09585c931b21eb49d2b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12124
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This will allow passing ext opts to the bdev layer.
In the ext API, metadata is passed as part of the ext IO opts
structure, and ext opts can be NULL (e.g. when the upper layer
used the regular API); in that case we use the *blocks_with_md API.
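A hedged sketch of that dispatch (a simplified wrapper, not the actual blobstore bdev code): when ext opts are provided they carry the metadata, so the _ext API is used; otherwise the *_blocks_with_md variant is called.

    #include "spdk/bdev.h"

    /* "opts" may be NULL when the caller used the regular (non-ext) API. */
    static int
    example_readv(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
                  struct iovec *iov, int iovcnt, void *md_buf,
                  uint64_t offset_blocks, uint64_t num_blocks,
                  spdk_bdev_io_completion_cb cb, void *cb_arg,
                  struct spdk_bdev_ext_io_opts *opts)
    {
        if (opts != NULL) {
            /* Metadata travels inside the ext IO opts structure. */
            return spdk_bdev_readv_blocks_ext(desc, ch, iov, iovcnt,
                                              offset_blocks, num_blocks,
                                              cb, cb_arg, opts);
        }

        return spdk_bdev_readv_blocks_with_md(desc, ch, iov, iovcnt, md_buf,
                                              offset_blocks, num_blocks,
                                              cb, cb_arg);
    }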
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I1bfb3fcb11bf42e100ecc7e4058087f12086db3a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11048
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Handle IO buffers that reside in memory domains: if the bdev
doesn't support any memory domain, allocate an internal bounce
buffer, pull data for a write operation before IO submission, and
push data back to the memory domain once IO completes for a read
operation.
Update the test tool, add simple pull/push function
implementations.
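A rough sketch of the write-path decision (helper layout simplified; it assumes the spdk_memory_domain_pull_data() signature from spdk/dma.h of source domain/iov, destination iov, and a completion callback):

    #include "spdk/bdev.h"
    #include "spdk/dma.h"

    static void
    example_pull_done(void *ctx, int rc)
    {
        /* Submit the write using the bounce iov here (omitted). */
        (void)ctx;
        (void)rc;
    }

    /* Before a write: if the bdev has no memory domain support, pull the data
     * out of the source domain into a locally allocated bounce iov. */
    static int
    example_prepare_write(struct spdk_bdev *bdev, struct spdk_bdev_ext_io_opts *opts,
                          struct iovec *src_iov, uint32_t src_iovcnt,
                          struct iovec *bounce_iov, uint32_t bounce_iovcnt)
    {
        if (opts->memory_domain == NULL ||
            spdk_bdev_get_memory_domains(bdev, NULL, 0) > 0) {
            return 0;   /* the bdev can handle the domain itself; submit as-is */
        }

        return spdk_memory_domain_pull_data(opts->memory_domain, opts->memory_domain_ctx,
                                            src_iov, src_iovcnt,
                                            bounce_iov, bounce_iovcnt,
                                            example_pull_done, NULL);
    }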
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Ie9b94463e6a818bcd606fbb898fb0d6e0b5d5027
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10069
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
To unregister a bdev, we previously had to call
spdk_bdev_open_ext(), spdk_bdev_desc_get_bdev(), spdk_bdev_unregister(),
and then spdk_bdev_close(). This was correct but complicated.
Hence add a new public API, spdk_bdev_unregister_by_name(), which
performs the whole correct sequence of bdev unregistration.
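For comparison, a hedged sketch of the old sequence versus the new single call (callbacks abbreviated; see spdk/bdev.h for the exact prototypes):

    #include "spdk/bdev.h"
    #include "spdk/bdev_module.h"

    static void example_unregister_done(void *cb_arg, int rc) { (void)cb_arg; (void)rc; }
    static void example_event_cb(enum spdk_bdev_event_type type,
                                 struct spdk_bdev *bdev, void *ctx) { (void)type; (void)bdev; (void)ctx; }

    /* Old, correct-but-complicated sequence. */
    static int
    example_unregister_old(const char *name)
    {
        struct spdk_bdev_desc *desc;
        int rc;

        rc = spdk_bdev_open_ext(name, false, example_event_cb, NULL, &desc);
        if (rc != 0) {
            return rc;
        }
        spdk_bdev_unregister(spdk_bdev_desc_get_bdev(desc), example_unregister_done, NULL);
        spdk_bdev_close(desc);
        return 0;
    }

    /* New single call performing the same sequence internally. */
    static int
    example_unregister_new(const char *name, struct spdk_bdev_module *module)
    {
        return spdk_bdev_unregister_by_name(name, module, example_unregister_done, NULL);
    }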
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I9068d4ac49dca944436e0ba587308fd356dfef75
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12065
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The process of matching qpair to poll group is split into
two distinct parts that occur on different threads.
See spdk_nvmf_tgt_new_qpair().
This results in a race condition for TCP between spdk_sock_map_lookup()
and spdk_sock_map_insert(), which are called in spdk_nvmf_get_optimal_poll_group()
and spdk_nvmf_poll_group_add() respectively.
Fixes #2113
This patch picks a hint from nvmf_tcp for the next poll group,
which is then passed down to spdk_sock_map_lookup().
When a matching placement_id exists but does not yet have
a poll group assigned, the hint will be used.
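A simplified illustration of the hint behavior (hypothetical types and lookup, not the actual spdk_sock_map_lookup() implementation):

    #include <stddef.h>

    struct example_placement_entry {
        int placement_id;
        void *group;    /* sock/poll group, NULL if not assigned yet */
    };

    /* Return the group for placement_id: the stored one when present,
     * otherwise the hint supplied by the transport (nvmf_tcp here). */
    static void *
    example_map_lookup(struct example_placement_entry *entries, size_t cnt,
                       int placement_id, void *hint)
    {
        for (size_t i = 0; i < cnt; i++) {
            if (entries[i].placement_id == placement_id) {
                return entries[i].group != NULL ? entries[i].group : hint;
            }
        }
        return NULL;
    }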
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I4abde2bc9c39225c9f5dd7c3654fa2639bb0a27f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10271
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
An rdma buffer for stripping DIF metadata is added. The CPU strips
the DIF metadata and copies the data into the rdma buffer, improving
the rdma write bandwidth. In a 4KB random read test the network
bandwidth increased from 79 Gbps to 99 Gbps and IOPS increased from
2075K to 2637K.
Fixes issue #2418
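As a rough illustration of the copy the CPU performs (generic code, not the actual nvmf/rdma implementation), each block's data portion is copied into a contiguous buffer while the trailing DIF metadata is skipped:

    #include <stdint.h>
    #include <string.h>

    /* Copy num_blocks blocks from src (data with interleaved DIF metadata)
     * into dst, dropping the metadata so only data is sent over the wire. */
    static void
    example_strip_dif(uint8_t *dst, const uint8_t *src,
                      uint32_t num_blocks, uint32_t data_size, uint32_t md_size)
    {
        for (uint32_t i = 0; i < num_blocks; i++) {
            memcpy(dst + (size_t)i * data_size,
                   src + (size_t)i * (data_size + md_size),
                   data_size);
        }
    }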
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Change-Id: If1c31256f0390f31d396812fa33cd650bf52b336
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11861
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Currently the idxd driver requires VFIO, so avoid unexpected errors
if someone tries to use it without VFIO (i.e. with UIO).
Temp workaround for issue #2316
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I430cd2193bc8dbd6939af7d0ca799832e7a73213
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11816
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
In the current behavior, the iovcnt is lowered before sending the
request to the next bdev in the stack. However, if the returned value
is -ENOMEM (due to e.g. not enough bdev requests in the pool), the
request needs to be restored to its original state; otherwise it
would be resubmitted with iov entries skipped.
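A minimal sketch of the shape of the fix (generic, not the actual bdev code): remember the original iovcnt before trimming it, and restore it when the lower layer returns -ENOMEM so the retry submits the full vector again.

    #include <errno.h>
    #include <sys/uio.h>

    /* Hypothetical lower-layer submit, used only for illustration. */
    int example_submit_lower(struct iovec *iov, int iovcnt);

    static int
    example_submit(struct iovec *iov, int *iovcnt, int trimmed_iovcnt)
    {
        int orig_iovcnt = *iovcnt;
        int rc;

        *iovcnt = trimmed_iovcnt;   /* pass fewer entries down the stack */
        rc = example_submit_lower(iov, *iovcnt);
        if (rc == -ENOMEM) {
            /* Restore the original state so the queued retry does not
             * skip iov entries when it is resubmitted. */
            *iovcnt = orig_iovcnt;
        }
        return rc;
    }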
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
Change-Id: I7240510a2ec04594b248f7347e86ac11ecfd26a0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11976
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
When iovs are copied from or to the bounce buffer, the bounce is
usually allocated from data_buf_pool for better performance and
consists of multiple iovs instead of a single buffer. Therefore,
block-aligned bounce buffers are supported.
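A hedged usage sketch, assuming spdk_iovcpy() from spdk/util.h copies as many bytes as fit between two iovec arrays and returns the number of bytes copied:

    #include "spdk/util.h"

    /* Copy payload from the host iovs into a multi-iov bounce buffer
     * allocated from the data buffer pool (and back on completion). */
    static size_t
    example_copy_to_bounce(struct iovec *src, size_t src_cnt,
                           struct iovec *bounce, size_t bounce_cnt)
    {
        return spdk_iovcpy(src, src_cnt, bounce, bounce_cnt);
    }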
Signed-off-by: Chunsong Feng <fengchunsong@huawei.com>
Change-Id: If56b21d9e46c73d4c956c227bec33ddd0ab9745b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11860
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>