This has been implicitly required before, and all
in-tree apps (except accel_perf) already set it, so make
the requirement explicit. The name is used for things
like the shm name for the SPDK trace event file.
While here, add the name for accel_perf.
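For reference, a minimal sketch of what the in-tree apps already do (the name
and start callback below are illustrative, not lifted from accel_perf):

    #include "spdk/event.h"

    static void
    app_start_cb(void *ctx)
    {
            /* Application work begins here. */
    }

    int
    main(int argc, char **argv)
    {
            struct spdk_app_opts opts = {};

            spdk_app_opts_init(&opts, sizeof(opts));
            /* Without a name there is nothing sensible to derive e.g. the
             * trace shm file name from, hence the explicit requirement. */
            opts.name = "accel_perf";

            return spdk_app_start(&opts, app_start_cb, NULL);
    }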
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I47a22466550d4b31bacafee58d30339b4f22f4b4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13876
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
When decoupling a snapshot from its parent, we need to clear its parent,
so we should remove the BLOB_SNAPSHOT xattr. Modifying the xattrs of a blob
only works if its metadata is not in read-only mode.
By default, a snapshot is in read-only mode, so this operation fails. When we
later want to delete the snapshot, we will see that it still has a parent, so we
will try to remove the snapshot from its parent's clones list. This causes a
crash.
The fix is to remove the BLOB_SNAPSHOT xattr only after setting the snapshot's
metadata to rw mode.
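A rough sketch of the intended ordering; the field and helper names below
approximate the blobstore internals rather than quote them exactly:

    /* Flip the snapshot's metadata to rw first, then drop the parent
     * reference; in the reverse order the xattr removal is rejected
     * because md_ro is still set. */
    _blob->md_ro = false;
    blob_remove_xattr(_blob, BLOB_SNAPSHOT, true /* internal xattr */);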
Signed-off-by: Alex Michon <amichon@kalrayinc.com>
Change-Id: I80efa6dd3dcb38b4c738ce2e97aa2ffc281cefa5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13723
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
DPDK may dereference this NULL pointer to access its members
and then hit a segmentation fault, whereas all we need is to
exit or report a normal error.
To minimize the impact and prevent this from going further,
check the error return in spdk_reactors_init when
spdk_thread_lib_init_ext fails. That call includes thread_lib_init,
which reports an error when creating the mempool fails, so the
code returns early instead of causing a segmentation fault later.
Fixes issue #2620.
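A minimal sketch of the added check; the exact arguments passed by
spdk_reactors_init are abbreviated here:

    rc = spdk_thread_lib_init_ext(reactor_thread_op, reactor_thread_op_supported,
                                  sizeof(struct spdk_lw_thread), msg_mempool_size);
    if (rc != 0) {
            /* thread_lib_init already logged the mempool failure; returning
             * here avoids handing DPDK a NULL mempool to dereference later. */
            SPDK_ERRLOG("Initialize spdk thread lib failed\n");
            return rc;
    }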
Signed-off-by: yidong0635 <dongx.yi@intel.com>
Change-Id: I63369fdaeb231196e8f8daa826eb5b057ed829b8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13842
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
This path should return -ENOMEM; other places are
changed accordingly.
Signed-off-by: yidong0635 <dongx.yi@intel.com>
Change-Id: Id81cd7485733e66d996b1501061a45f774f2b51a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13863
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
The `num_queues` parameter of a virtio_scsi PCI device means
the maximum number of queues and SHOULD include the `eventq`
and `controlq`, while for the `vhost_user` RPC call it means
the number of IO queues. So use it as `max_queues`
in lib/virtio and add the fixed number of queues for the
`vhost_user` SCSI device.
Also fix `vhost_fuzz` to get `num_queues` before negotiating
the feature bits.
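A sketch of the accounting, using the fixed-queue constants from the
virtio-scsi code (num_io_queues stands in for the RPC's num_queues):

    /* The vhost_user RPC counts IO queues only; the virtio-scsi notion of
     * "max queues" also covers the controlq and eventq, so add the two
     * fixed queues before handing the value to lib/virtio. */
    #define VIRTIO_SCSI_CONTROLQ    0
    #define VIRTIO_SCSI_EVENTQ      1
    #define VIRTIO_SCSI_REQUESTQ    2

    uint16_t max_queues = num_io_queues + VIRTIO_SCSI_REQUESTQ;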
Change-Id: I41b3da5e4b4dc37127befd414226ea6eafcd9ad0
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13791
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The `vhost_user` socket transport APIs are already in the
same source file, so just call the function directly.
No code logic changes in this commit.
Change-Id: If471b9b0166d43591fb8614e95a17473c964e87c
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13789
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Similar to the NVMe device driver, `virtio` here is a specification
abstraction library while `pci` and `vhost_user` are transport layers,
so merge vhost_user.c and virtio_user.c into one new source file,
`virtio_vhost_user.c`, to make the code clearer.
No logic change, just code movement in this commit.
Change-Id: I8e3e5c477e7c45e6eeebad240b8cc3c9476b86d1
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13788
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
When running into this function, it is better to continue execution
even if the subsystem can't be destroyed due to an error subsystem
state.
This continues the fix for #2590: QEMU gets stuck in the failure case,
and the NVMf target should handle such an error because it may be
serving other healthy subsystems at the same time.
Change-Id: Ib05e24996378b52070d2b760519f476f9b2d7e76
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13839
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The RPC provides a list of initialized engine names along with
each engine's supported operations.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I59f9e5cb7aa51a6193f0bd2ec31e543a56c12f17
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13745
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
In prep for an upcoming patch that will provide an RPC to override
the automatic assignment of an op code to an engine.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I17d4b962fb376a77f97ce051a513679d0fba698e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12829
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
fix: if the first six characters of two SCSI LUN names are the same,
such as aaaaaa0 and aaaaaa1, their NAA identifiers end up identical as well.
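An illustrative (not the in-tree) way to avoid the collision is to derive the
vendor-specific part of the NAA identifier from the whole name rather than a
fixed-length prefix, e.g.:

    #include <string.h>
    #include "spdk/crc32.h"

    /* Hash the full LUN name so "aaaaaa0" and "aaaaaa1" yield different
     * identifiers even though they share their first six characters. */
    static uint32_t
    lun_name_to_naa_local_id(const char *name)
    {
            return spdk_crc32c_update(name, strlen(name), ~0U) ^ ~0U;
    }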
Signed-off-by: Bin Yang <bin.yang@jaguarmicro.com>
Change-Id: I4e0541b372a0e20e95e0a24d62dd3d85b7abe230
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13824
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This is another preparation to create and use ibv_context and pd.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Id594fa1ccb2daf535b1aaaef0a397bda2ec98578
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13710
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The following patches will create and use ibv_context and pd
explicitly instead of using the default ibv_context and pd created
by rdmacm.
As a preparation, pass pd instead of cm_id to nvme_rdma_reg_mr().
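A minimal sketch of the direction of the change, with a simplified signature
(the real nvme_rdma_reg_mr() takes different arguments):

    #include <infiniband/verbs.h>

    /* Before, the pd was pulled out of the cm_id (cm_id->pd); passing the
     * pd itself lets the caller use a pd it created explicitly rather than
     * the one rdmacm set up. */
    static struct ibv_mr *
    nvme_rdma_reg_mr(struct ibv_pd *pd, void *buf, size_t len, int access)
    {
            return ibv_reg_mr(pd, buf, len, access);
    }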
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Ifdcd18ed363b8ba4a23a920bf3559237e38821c6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13599
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
In interrupt mode, the reactor doesn't correctly handle exited threads,
leaving vhost threads in the reactor's lw_threads list. The fix cleans
up a thread when its state becomes EXITED. Though the problem was
exposed in v22.05.x, the master branch has it as well.
We do this as follows (sketched below):
(1) When a thread's state becomes SPDK_THREAD_STATE_EXITED, the reactor
processes the thread exit first.
(2) Then the reactor removes the lw_thread and destroys it.
Fixes issue #2574.
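A rough sketch of the cleanup path, with field and list names approximating
lib/event/reactor.c:

    /* In the reactor loop: once a thread reports EXITED, drop its lw_thread
     * from the reactor's list and destroy it instead of leaving a stale
     * entry behind (what vhost threads hit in interrupt mode). */
    if (spdk_thread_is_exited(thread)) {
            TAILQ_REMOVE(&reactor->threads, lw_thread, link);
            assert(reactor->thread_count > 0);
            reactor->thread_count--;
            spdk_thread_destroy(thread);
    }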
Signed-off-by: Apokleos <oliverliyn@gmail.com>
Change-Id: I3ac2681d70480563db3a0aee4aff61c2f272b140
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13706
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
If the Controller Fatal Status (CFS) bit is set, there's no point in waiting
for CSTS.RDY and the only way to move forward with the initialization is
to perform a controller reset.
This fixes issues with test/nvme/sw_hotplug.sh when running under qemu.
It seems that during that test, qemu marks the emulated NVMe drives as
fatal, so if we didn't check CSTS.CFS, the initialization would time
out.
Fixes #2201.
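A sketch of the check, using the public CSTS register layout from
spdk/nvme_spec.h (the surrounding init state machine is omitted, and the
accessor name is an approximation):

    /* Return true when initialization should stop waiting for CSTS.RDY and
     * reset the controller instead. */
    static bool
    ctrlr_fatal_status_set(struct spdk_nvme_ctrlr *ctrlr)
    {
            union spdk_nvme_csts_register csts;

            if (nvme_ctrlr_get_csts(ctrlr, &csts) != 0) {
                    return false;
            }

            /* CSTS.CFS set means RDY will never come, e.g. qemu marking the
             * emulated drive as fatal during sw_hotplug.sh. */
            return csts.bits.cfs == 1;
    }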
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I97712debc80c3dd6199545d393c0f340f29d33b2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13820
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Sometimes the VM gets a kernel panic when starting, and SPDK CI kills
`nvmf_tgt` after 60 seconds. For this exception SPDK raises an
assertion when destroying the subsystem; here we remove the
assertion and print the error information instead.
CI will still mark this case as failed, and we can use the error
information to understand the error subsystem state in vfio-user.
Fixes issue #2590.
Change-Id: I20b16f9e96a566730eca2dd9ea165645bd9160bd
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13773
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This is prework for further changes: with a lock on the generic layer,
the lock on the specific transport (e.g. tcp, rdma) layer becomes optional.
Possibly it won't be required at all if some contract is introduced on the
public interfaces (to be considered):
- spdk_nvmf_poll_group_[create|destroy]
- spdk_nvmf_tgt_listen_ext, spdk_nvmf_tgt_stop_listen
- spdk_nvmf_get_optimal_poll_group
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: Ib132babf9e7022342129fe795991cdad834e7f53
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13665
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
When there are not enough transport buffers for a
multi-SGL request in the NEED_BUFFER state, WRs
received from the data_wr_pool are returned to
the pool. However, the rdma_req->data.wr.next pointer
still points to the first WR from the pool. Usually
this doesn't cause any problems, since rdma_req will
try to fill buffers again, but when a qpair is being
destroyed, all requests are completed forcefully.
When a request is completed while its data.wr.next
pointer is not NULL, we try to put the already
released WRs into the pool one more time.
That corrupts the pool and leads to undefined
behavior.
Fixes #2541
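A sketch of the fix; the helper that returns WRs to data_wr_pool is
abbreviated, the key point being the cleared chain pointer:

    /* After giving the extra WRs back to data_wr_pool, clear the chain
     * pointer so a later forced completion (e.g. on qpair destroy) cannot
     * put the same WRs into the pool a second time. */
    nvmf_rdma_request_free_data(rdma_req, rtransport);
    rdma_req->data.wr.next = NULL;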
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I238b92eec132d8d845330362af6f335421177454
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13760
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The function `nvme_ctrlr_init_ana_log_page` is exactly the
same as `nvme_ctrlr_update_ana_log_page`, so remove it.
Change-Id: I1ad51635f47cf95cfa6de217e3b9144885c3b74e
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13652
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Allow the DSA device to asynchronously offload crc32 calculation in NVMf-TCP.
This patch uses DSA to accelerate crc32 computation, bringing the
IO performance of TCP paths that use crc32 close to the IO
performance of TCP paths that do not use crc32.
An SLIST is used to minimize the performance drop, since SLIST
operations are cheaper than TAILQ operations.
To limit memory thrashing, we should reuse the same memory as much as
possible to receive new PDUs, so inserting a newly freed PDU at the head
is better (see the sketch below).
The performance drop is within 1% compared to the TCP path without
crc32.
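A sketch of the free-list handling described above; the struct and field
names are illustrative rather than the exact ones in the TCP transport:

    #include <sys/queue.h>
    #include <stddef.h>

    struct nvme_tcp_pdu {
            SLIST_ENTRY(nvme_tcp_pdu) slist;
            /* payload fields omitted */
    };

    SLIST_HEAD(pdu_free_list, nvme_tcp_pdu);

    /* Head insertion hands the most recently used (still cache-warm) PDU
     * out first, and SLIST head operations are cheaper than TAILQ ones. */
    static void
    pdu_put(struct pdu_free_list *list, struct nvme_tcp_pdu *pdu)
    {
            SLIST_INSERT_HEAD(list, pdu, slist);
    }

    static struct nvme_tcp_pdu *
    pdu_get(struct pdu_free_list *list)
    {
            struct nvme_tcp_pdu *pdu = SLIST_FIRST(list);

            if (pdu != NULL) {
                    SLIST_REMOVE_HEAD(list, slist);
            }
            return pdu;
    }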
Signed-off-by: MengjinWu <mengjin.wu@intel.com>
Change-Id: I480eb8db25f0e730cb198ca5ec19dbe3b4d38440
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11708
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
We have to process the whole QoS queue on each QoS poll. It may contain
IOs that still have quota left or are not affected by the QoS rules at all.
If we stop on the first queued IO, all IOs will be limited by the most
restrictive QoS rule even if they're not affected by that rule.
Here is an example and simple test. We have an NVMf target with a Null
bdev and QoS configured with read bandwidth limited to 10 MB/s and
write bandwidth limited to 100 MB/s. First we start nvme_perf with
only write IOs and see that the reported bandwidth is 100 MB/s. Then we
start another instance of nvme_perf with only read IOs. We see that
the reported read bandwidth is 10 MB/s, but the write bandwidth also
drops to 10 MB/s.
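A sketch of the iteration change; the queue field and the limit/submit
helpers are placeholders for the bdev QoS internals:

    /* Walk the entire queued-IO list on every QoS poll: submit whatever
     * still fits under its own limits and skip only the IOs whose specific
     * limit is exhausted, instead of stopping at the first blocked IO. */
    TAILQ_FOREACH_SAFE(bdev_io, &qos->queued, internal.link, tmp) {
            if (qos_io_over_limit(qos, bdev_io)) {
                    continue;   /* don't hold the rest of the queue hostage */
            }
            TAILQ_REMOVE(&qos->queued, bdev_io, internal.link);
            qos_charge_and_submit(qos, bdev_io);
    }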
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I1edf09d038e65f873deef19ecb0f4bf9725a5ca5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13767
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
The original implementation creates pollers and CQs for all discovered
devices at poll group creation. A device (ibv_context) that has no
references, i.e. no QPs, may be removed from the system and its
ibv_context may be closed by rdma_cm. In that case we are left with a CQ
that refers to a closed ibv_context, and it may crash in ibv_poll_cq.
With this patch, pollers are created on demand when we create the first
QP for a device. When there are no more QPs on the poller, we destroy
the poller. This also helps avoid polling CQs that don't have any
QPs attached.
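A sketch of the on-demand lifetime; the poller helpers and reference count
below are illustrative, not the actual lib/nvmf/rdma.c code:

    /* Create the per-device poller (and its CQ) when the first QP for that
     * ibv_context joins the poll group... */
    static struct rdma_poller *
    poller_get(struct rdma_poll_group *group, struct ibv_context *device)
    {
            struct rdma_poller *poller = poller_find(group, device);

            if (poller == NULL) {
                    poller = poller_create(group, device);  /* allocates the CQ */
            }
            if (poller != NULL) {
                    poller->num_qps++;
            }
            return poller;
    }

    /* ...and tear it down when the last QP goes away, so a CQ never
     * outlives the ibv_context it was created on. */
    static void
    poller_put(struct rdma_poller *poller)
    {
            if (--poller->num_qps == 0) {
                    poller_destroy(poller);                 /* destroys the CQ */
            }
    }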
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I46dd2c8b9b2902168dba24e139c904f51bd1b101
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13692
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Both PCIe and VFIO-USER can use the same APIs to get IO queue
pair statistics, so merge them here.
Change-Id: Iadf9ead2bd5abaf11d2ef5d1884acb67369f85bb
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13538
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
When a bdev is registered, it is examined by the bdev modules before the
bdev register event is notified.
Examination may be asynchronous, e.g. when the bdev module has to perform
I/O on the new bdev.
This causes a race condition where the bdev might be destroyed while
examination is not yet finished. Then, once all modules have signaled that
examination is done, `bdev_register_finished` makes an invalid access to
the freed bdev pointer.
To fix this, defer the unregistration until the examination is completed by
opening a descriptor on the bdev.
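A sketch of the approach using the public open/close API; the callback and
helper names here are made up for the example:

    #include "spdk/bdev.h"

    static void
    examine_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
    {
            /* Nothing to do for this sketch; a real handler would at least
             * react to SPDK_BDEV_EVENT_REMOVE. */
    }

    /* Hold a descriptor on the new bdev for the duration of module
     * examination so an unregister issued in the meantime cannot free the
     * bdev underneath bdev_register_finished(). */
    static int
    hold_bdev_during_examine(const char *bdev_name, struct spdk_bdev_desc **desc)
    {
            return spdk_bdev_open_ext(bdev_name, false, examine_event_cb, NULL, desc);
    }

    static void
    release_bdev_after_examine(struct spdk_bdev_desc *desc)
    {
            spdk_bdev_close(desc);
    }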
Change-Id: I79a2faa96c1c893fc1cee645fbe31f689b03ea4a
Signed-off-by: Nathan Claudel <nclaudel@kalray.eu>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13630
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>