Since SPDK holds local copies of the DPDK headers for the DPDK PCI API,
those headers will now be used as includes.
This was already the case for DPDK 22.11, but not for DPDK 22.07.
Change-Id: I5859a630d1fb20b4ebf8628adb962f5e46c23788
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15969
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Shortly after the DPDK 22.11 release, it was amended with a single
patch that bumped the minor version.
No changes have occurred to the DPDK PCI API.
Change-Id: I94dadb23b3ad79cfbb21e848d718d909493137d1
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15890
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
As spdk_lvs_init() validates arguments, it uses o->cluster_sz in a
comparison but misleadingly prints opts.cluster_sz in the error message.
This changes the error message to print cluster_sz from the proper
structure.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I810bf9ad4a24ed7cc844c2835e0edda988cb2cbe
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15970
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
The internal mempools were replaced with the newly added iobuf
interface.
To make sure we respect spdk_bdev_opts's (small|large)_buf_pool_size, we
call spdk_iobuf_set_opts() from spdk_bdev_set_opts(). These two options
are now deprecated and users should switch to spdk_iobuf_set_opts().
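For illustration, a minimal sketch of the replacement path; the spdk_iobuf_opts
field names used below (small_pool_count, large_pool_count) are assumptions
based on the iobuf interface and may differ from the actual header definition:
```
#include "spdk/thread.h"

/* Sketch: instead of setting spdk_bdev_opts.small_buf_pool_size and
 * large_buf_pool_size, size the shared iobuf pools directly. */
static int
configure_iobuf_pools(void)
{
	struct spdk_iobuf_opts opts;

	spdk_iobuf_get_opts(&opts);
	opts.small_pool_count = 8192;	/* number of small data buffers (assumed field name) */
	opts.large_pool_count = 1024;	/* number of large data buffers (assumed field name) */

	return spdk_iobuf_set_opts(&opts);
}
```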
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ib1424dc5446796230d103104e272100fac649b42
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15328
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
This is done in a couple of places, so it makes sense to extract it to a
separate function.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Id34b2545d9912c2b7b65b1277711e9683db92658
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15327
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Users can now specify a number of small/large buffers to be cached on
each iobuf channel. Previously, we relied on the cache of the
underlying spdk_mempool, which has per-core caches. However, since iobuf
channels are tied to a module and an SPDK thread, each module and each
thread is now guaranteed to have a number of buffers available, so it
won't be starved by other modules/threads.
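For example, a module's channel create callback might reserve a small
per-channel cache like this (a sketch; the last two arguments to
spdk_iobuf_channel_init() are the small and large cache sizes, matching the
call shown in the iobuf example further below):
```
#include "spdk/thread.h"
#include "spdk/log.h"

struct foo_channel {
	struct spdk_iobuf_channel iobuf;
	/* ... other per-channel state ... */
};

static int
foo_channel_create(struct foo_channel *ch)
{
	/* Keep 64 small and 8 large buffers cached on this channel so this
	 * module/thread cannot be starved by other iobuf users. */
	int rc = spdk_iobuf_channel_init(&ch->iobuf, "foo", 64, 8);

	if (rc != 0) {
		SPDK_ERRLOG("Failed to initialize iobuf channel\n");
	}
	return rc;
}
```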
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I1e29fe29f78a13de371ab21d3e40bf55fbc9c639
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15634
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
The idea behind "iobuf" is to have a single place for allocating data
buffers across different libraries. That way, each library won't need
to allocate its own mempools, therefore decreasing the memory footprint
of the whole application.
There are two reasons for putting this kind of functionality in the thread
library. Firstly, the code is pretty small, so it doesn't make sense to
create a new library. Secondly, it relies on the IO channel abstraction,
so users will need to pull in the thread library anyway.
It's very much inspired by the way the bdev layer handles data buffers (much
of the code was directly copied over). There are two global mempools,
one for small and one for large buffers, and per-thread queues that hold
requests waiting for a buffer. The main difference is that we also need
to track which module requested a buffer in order to allow users to
iterate over its pending requests.
The usage is fairly simple:
```
/* Embed spdk_iobuf_channel into an existing IO channel */
struct foo_channel {
	...
	struct spdk_iobuf_channel iobuf;
};

/* Embed spdk_iobuf_entry into objects that will request buffers */
struct foo_object {
	...
	struct spdk_iobuf_entry entry;
};

/* Register the module as an iobuf user */
spdk_iobuf_register_module("foo");

/* Initialize the iobuf channel in foo_channel's create cb */
spdk_iobuf_channel_init(&foo_channel->iobuf, "foo", 0, 0);

/* Finally, request a buffer... */
buf = spdk_iobuf_get(&foo_channel->iobuf, length,
		     &foo_object.entry, buf_get_cb);
...
/* ...and release it */
spdk_iobuf_put(&foo_channel->iobuf, buf, length);
```
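(A note on the flow: when no buffer of the requested size is immediately
available, spdk_iobuf_get() is expected to return NULL and queue the supplied
spdk_iobuf_entry on the per-thread wait queue described above, so buf_get_cb
fires once another consumer releases a buffer with spdk_iobuf_put().)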
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ifaa6934c03ed6587ddba972198e606921bd85008
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15326
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Recently, while updating the copyright notices
throughout the SPDK headers, the env_dpdk copies of
DPDK headers were modified too.
This patch brings them back to the exact versions
found in DPDK upstream.
Change-Id: If30b8556386a539d81d2fc1a5e42293522ed91f5
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15856
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Copies of the headers for the DPDK PCI API were created before the
actual DPDK 22.11 release. In the release, rte_bus_pci.h was
modified slightly with the addition of the rte_compat.h
include.
Please see the relevant DPDK patch:
(1094dd9) cleanup compat header inclusions
This patch only brings the two into alignment.
Change-Id: Ieb0397c6cf2d9027cf600bd0e064863b3782b846
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15855
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This patch adds a "--msg-mempool-size" option for the SPDK app to allow
the reactors' msg_mempool_size to be configured via the command line.
We tested the rbd_bdev performance for Ceph CTX sharing with a high RBD volume count via bdevperf.
When testing with 256 volumes and limited Ceph CTXs (e.g., 2 Ceph CTXs for 256 volumes,
which are created through bdev_rbd_register_cluster),
the error message "*ERROR*: msg could not be allocated"
keeps showing and the bdevperf program hangs.
We found the issue stems from the limited msg_mempool_size, which is hardcoded
as SPDK_DEFAULT_MSG_MEMPOOL_SIZE in thread.h.
Therefore, we add the "--msg-mempool-size" option to allow a configurable msg_mempool_size.
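A hedged sketch of the same knob from an application's point of view; the
spdk_app_opts field name msg_mempool_size used below is an assumption
mirroring the command-line option:
```
#include "spdk/event.h"

static void
app_start_cb(void *ctx)
{
	/* application setup (bdev creation, RPC handling, etc.) goes here */
}

int
main(int argc, char **argv)
{
	struct spdk_app_opts opts = {};
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "bdevperf_rbd";
	/* Assumed field behind --msg-mempool-size: enlarge the message mempool
	 * beyond SPDK_DEFAULT_MSG_MEMPOOL_SIZE for setups with many volumes
	 * sharing few Ceph contexts. */
	opts.msg_mempool_size = 512 * 1024;

	rc = spdk_app_start(&opts, app_start_cb, NULL);
	spdk_app_fini();
	return rc;
}
```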
Signed-off-by: Yue-Zhu <yue.zhu@ibm.com>
Change-Id: I54db7fd46247b2f18112bb994ecce6f4b7e5bf9c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15552
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
The description was swapped with the removal release, causing the logs to
look like this (the description and the removal release are transposed):
foo_bar: deprecation 'v23.05' scheduled for removal in foo.bar hit 1 times
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I422a35c5ec20c8a817bed0dd5d565dfc53ef6dc9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15874
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
The kernel vfio_pci driver module introduced a VF token checking
mechanism in kernel version 5.7, which is supported by
DPDK. So add support for it to handle the VF scenario.
Signed-off-by: Jun Zeng <jun1.zeng@intel.com>
Change-Id: Ie9700fa395327da4e847c6213167284c148a64e3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14424
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
In _free_ctrlr(), ->endpoint can never be NULL, and the code was
self-contradictory; assume it's not NULL.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I81a449123ca05f64460380dc3a8ad8af2143d166
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15831
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Use standard "sqid" naming for a log message.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Icca8415cd17272ca7bd82667721c4131dd1df7f1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15828
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This fixes the behavior of spdk_vhost_(set|get)_coalescing() on
non-vhost-user devices.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia17cd4c0ed4bad262090e05f83727c1516c21f92
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15772
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The current code for setting/getting coalescing settings only works with
vhost-user devices, while users can create virtio-blk devices with a
non-vhost-user transport. Calling spdk_vhost_(set|get)_coalescing() on
such a device results in a segfault.
So, the spdk_vhost_dev_backend interface is extended with methods to
set/get coalescing parameters. In the following patch, the virtio_blk
interface will also be extended with similar callbacks, allowing us to
pipe coalescing settings to the appropriate transport.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ide5d5f633b17dcdbedb4b7804d5e45bf41373eca
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15771
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Both the length of a request and the number of ranges to copy are
controlled by the user, so we should check them and return an error
instead of asserting that they're correct.
This fixes the `test/nvmf/target/fabrics_fuzz.sh` test.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I3481c4bb1f2c7676df81f41dfc95ef063924222e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15805
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Found with misspell-fixer.
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: If062df0189d92e4fb2da3f055fb981909780dc04
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15207
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Use the deprecation API to annotate and log the deprecation of
spdk_nvme_ctrlr_prepare_for_reset() using the tag
"nvme_ctrlr_prepare_for_reset".
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I98fd840aa9acc028a49bb47daf4ab7e88f1eb818
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15756
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
srandomdev is *only* used to emulate arc4random, so only
bother defining it on Linux when it's needed. This avoids
unused-function errors on newer distros packaging glibc versions
that now define arc4random.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I6e64a697d9633709cedd0198f75cf094d514562d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15814
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This patch fixes issue #2809 by changing the max
completions processed per poll. A new parameter called
IDXD_MAX_COMPLETIONS is used to set the maximum completions
processed per poll to 128, because we observed performance
degradation on a system with 16 NVMe SSDs at a queue depth
of 64 per SSD. When using DSA to compute the data digest,
the target application can issue up to 1024 (16x64) requests
to compute the data digest concurrently to DSA. Limiting the
maximum completions processed per poll to 32 using
DESC_PER_BATCH caused up to 43% IOPS degradation.
Use IDXD_MAX_COMPLETIONS to control the number of
completions processed per poll in spdk_idxd_process_event
based on your workload. For example, if your application
is issuing 1000s of concurrent requests to DSA you might
want to set IDXD_MAX_COMPLETIONS to a value higher than
128.
Change-Id: I2a1db993283a83a20266f40dac851728d63e6127
Signed-off-by: John Kariuki <John.K.Kariuki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15801
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This is in prep for adding a new compressDev accel_fw
module that will contain all of the DPDK compressDev specifics;
the vbdev will make calls to the accel_fw instead.
As the accel_fw has SW-based compression, we want the configure
option to apply to building the vbdev module but not the accel_sw
software implementation or the upcoming compressdev module.
The option is renamed to "compress" because "reduce" is a term
specific to the vbdev implementation of the compression to be
provided by the accel_fw. For the same reason we leave the test
flag called REDUCE, because it controls tests for the reduce
library as well as the vbdev module that uses reduce. The flag
does not apply to the SW implementation of compression.
This does not affect the upcoming accel_fw compressdev module,
which will have its own configure option.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: If8ed3e48e1e3dabcaad1cd161289e78122cd9d58
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15179
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Per specification.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: Ic93349c7d3ed50fa6e502e39db0347141804d4c4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15673
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
While lvs->destruct is set in a few places, it is never read. Since it
is not used, it is removed.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Iee21e92c9049d143fca13930b4b5f328f9ec38f0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15716
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Copy-on-write happens when a cluster is written for the first time on a
thin provisioned volume. Currently it is implemented as two separate
requests to the underlying bdev: a read of the whole cluster into a bounce
buffer and then a write of this buffer to the new location on the same
underlying bdev.
This patch improves the copy-on-write flow by utilizing the copy command of
the underlying bdev if it is supported. In this case we issue just one
request to the bdev and don't need the bounce buffer.
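A hedged sketch of the idea at the bdev API level (the blobstore itself goes
through its bs_dev abstraction; spdk_bdev_copy_blocks() is assumed here to be
the generic bdev copy entry point, and all other names are hypothetical):
```
#include "spdk/stdinc.h"
#include "spdk/bdev.h"

/* One copy request replaces the read-into-bounce-buffer + write pair when the
 * underlying bdev supports SPDK_BDEV_IO_TYPE_COPY. */
static int
cow_copy_cluster(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
		 uint64_t src_lba, uint64_t dst_lba, uint64_t cluster_blocks,
		 spdk_bdev_io_completion_cb copy_done, void *ctx)
{
	if (!spdk_bdev_io_type_supported(spdk_bdev_desc_get_bdev(desc),
					 SPDK_BDEV_IO_TYPE_COPY)) {
		return -ENOTSUP;	/* caller falls back to the read + write path */
	}

	return spdk_bdev_copy_blocks(desc, ch, dst_lba, src_lba, cluster_blocks,
				     copy_done, ctx);
}
```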
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I92552e0f18f7a41820d589e7bb1e86160c69183f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14351
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The new `translate_lba` operation allows translating a blob LBA to an LBA on
the underlying bdev. It recurses down the whole chain of bs_devs. The
operation may fail to do the translation when the blob LBA is not backed
by a real bdev, for example when we eventually hit the zeroes device in
the chain.
This operation is used in the next commit to get the source LBA for the copy
operation.
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I89c2d03d1982d66b9137a3a3653a98c361984fab
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14528
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
In the following patches, nvme_rdma_poll_group_set_cq() will
touch not only the CQ but also the SRQ and receive WR objects.
All these resources belong to a poller.
Hence, for clarity, rename nvme_rdma_poll_group_set_cq()
to nvme_rdma_qpair_set_poller().
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ic59ba5a45833e39b1b2647c000c8b953f1031d6b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14910
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Factor out the reset-failed-recvs operation into a helper function,
nvme_rdma_reset_failed_recvs(). This will make the following
patches simpler.
For the send operation this change is not required yet, but in the future
we may support something like a shared SQ. Hence, we make this change
for the send operation too.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Ib44acebe63e97e5a60ea6fa701b49278c7f44b45
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14171
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
In the following patches, the poll group will own the rsps objects, and to
share the code between the poll group and the qpair, an option for creation
will be used.
As a preparation, merge nvme_rdma_alloc_rsps() and
nvme_rdma_register_rsps() into nvme_rdma_create_rsps(). For consistency,
merge nvme_rdma_alloc_reqs() and nvme_rdma_register_reqs() into
nvme_rdma_create_reqs().
Update unit tests accordingly.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I92ec9e642043da601b38b890089eaa96c3ad870a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14170
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
When SRQ is supported, recv objects will be allocated by the poll group
and the qpair will be associated with them and use them. In this case, we
do not want the qpair to allocate and free recv objects. Whether SRQ is
used is decided when the connection is established. Hence, defer recv
object allocation until the connection is established.
Send objects are not affected directly by SRQ, but
nvme_rdma_register_reqs() no longer does any registration, and deferring
send object allocation makes the code more consistent. Hence, defer
send object allocation until the connection is established too.
Even after this patch, we rely on nvme_rdma_ctrlr_delete_io_qpair()
to free resources completely.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Ic151fad01009d92a7fc809a730e6e9dff1a365f3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14169
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Response objects will be owned by the poll group when SRQ is enabled, but we
want to share the code that allocates and registers response objects whether
SRQ is enabled or disabled. To do it cleanly, move
nvme_rdma_qpair_submit_recvs() from nvme_rdma_register_rsps() to
nvme_rdma_connect_established(). A few error-handling cleanups are
done in this patch. Unregistration will be done when the qpair is
disconnected.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I38dc5a6cb84a6bf56c01d5fb7f2cf3d3b63918e0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14168
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This will make the following patches simpler.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Id3d7c025525b35c1c2b96027430789a8d8f2697b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14422
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Inline nvme_rdma_post_recv() into the callers.
We do not have any similar helper function for posting send WRs,
so this is reasonable and will make the following patches simpler.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ia95a4b350942d20bdb65e84f7575c2dcf67c149b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14421
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Extract and inline the conditional nvme_rdma_qpair_submit_sends()
and nvme_rdma_qpair_submit_recvs() calls.
This will clarify the logic and make the following patches simpler.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ibe217c6f4fb2880af1add8c0429f92b4de107da8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14420
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
When SRQ is supported, the rsp array will be in either the qpair or the
poller. To make this difference transparent, rdma_req caches rdma_rsp and
rdma_rsp caches recv_wr directly instead of caching indices.
Additionally, a very small cleanup is done together:
spdk_rdma_get_translation() gets a translation for a single entry
of a rsps array, so it is more intuitive to use rsp.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I61c9d6981227dc69d3e306cf51e08ea1318fac4b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13602
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Factor out the processing of recv completions and send completions into
helper functions to make the following patches simpler.
Additionally, invert the if condition that checks whether both send and
recv are completed, also to make the following patches simpler.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Idcd951adc7b42594e33e195e82122f6fe55bc4aa
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14419
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Both max and min should be reset periodically. We could use the queue
depth sampling poller to reset them, but that poller
is optional. So we extend the bdev_reset_iostat RPC to support a mode
that resets either all fields or only the max/min fields.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I9ce54892f6e808f6a82754b6930092f3a16d51ff
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15444
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Add max/min_read/write/unmap_latency_ticks to the struct
spdk_bdev_io_stat.
When initializing or resetting an instance of the struct
spdk_bdev_io_stat, initialize max to 0 and min to UINT64_MAX.
Then update max if a new value is larger than the current max,
and update min if a new value is smaller than the current min.
The bdev_get_iostat RPC always prints max; it prints min if min is not
UINT64_MAX, and 0 otherwise.
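A minimal sketch of the update rule described above, using the read counters
as an example (the real accounting lives in bdev_io_update_io_stat();
tsc_diff is the per-I/O latency in ticks):
```
/* min starts at UINT64_MAX and max at 0, so the first completion sets both. */
io_stat->read_latency_ticks += tsc_diff;
if (tsc_diff > io_stat->max_read_latency_ticks) {
	io_stat->max_read_latency_ticks = tsc_diff;
}
if (tsc_diff < io_stat->min_read_latency_ticks) {
	io_stat->min_read_latency_ticks = tsc_diff;
}
```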
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I1b30b3825c15e37e9f0cf20104b866186de788a2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14825
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Add a helper function bdev_reset_device_stat() to reset I/O statistics.
This function is used by the bdev_reset_iostat RPC.
We do not have any plan to use bdev_reset_device_stat() outside
lib/bdev. Hence, we do not add this as a public API.
Then, add a new RPC bdev_reset_iostat to reset I/O statistics of a
single bdev or all bdevs.
Resetting I/O statistics affects all consumers. Add a note to CHANGELOG
and doc/jsonrpc.md.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I97af09107b5c3ad1f9c19bf3cbf027457c4fbae7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15350
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Hold not only the io_stat pointer but also num_blocks and blocklen in local
variables.
This shortens and simplifies bdev_io_update_io_stat(), and improves
readability and maintainability.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I527b72538a169a1faafd32863ff539306a8763a9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15732
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The following patches will add max/min latencies and more optional
counters. This factoring will improve readability.
In addition to the factoring, add spdk_likely() to the check of whether
the I/O completed successfully.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I57581ece2b73d486aa138f8d26a5afaf6953a322
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15480
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
For consistency, rename a JSON dump function to bdev_io_stat_dump_json()
and change the parameter order.
Other public APIs and function pointers in the generic bdev layer,
spdk_bdev_dump_info_json(), spdk_bdev_fn_table::dump_info_json, and
spdk_bdev_fn_table::write_config_json, have a json_write_ctx pointer
as the last parameter. For consistency, swap the statistics pointer and
the json_write_ctx pointer.
This is another preparation for extending I/O statistics to include error
counters and module-specific counters, and to output these via the
bdev_get_iostat RPC.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I6f3bb6f2752f7da856d4fe66c0f1f8a2eedc176b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15731
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Move a JSON dump function, bdev_get_iostat_dump(), for I/O statistics
into lib/bdev/bdev.c.
The next patch will rename the function and change the parameter order.
This is another preparation for extending I/O statistics to include error
counters and module-specific counters, and to output these via the
bdev_get_iostat RPC.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I6a90d15fcbaa2e2a250167754135623bc9e7f362
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14837
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Add helper functions bdev_io_stat_alloc(), bdev_io_stat_free(),
and bdev_io_stat_get() for struct spdk_bdev_io_stat.
Then replace a bdev_io_stat_add() call with bdev_io_stat_get() in
spdk_bdev_get_device_stat(), because the saved data is queried first.
This is another preparation for extending I/O statistics to include error
counters and module-specific counters, and to output these via the
bdev_get_iostat RPC.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I9547757421a1de1b8cb44e0f8ade4b5c2bcad4e6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15443
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The following patches will extend I/O statistics to include error
counters and module-specific counters, and to output these via the
bdev_get_iostat RPC.
In this case, the size of the struct spdk_bdev_io_stat will be variable.
As a preparation, allocate spdk_bdev_io_stat dynamically.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I1979a9d867859d5cb5d05717bfcc677f07fa03f8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15479
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
The following patches will extend I/O statistics to include error
counters and module-specific counters, and to output these via the
bdev_get_iostat RPC.
In this case, the size of the struct spdk_bdev_io_stat will be variable.
As a preparation, allocate spdk_bdev_io_stat dynamically.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I50b57f792b451cf748ea8eb0611fe65d693d5a14
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15478
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
The following patches will extend I/O statistics to include error
counters and module-specific counters, and to output these via the
bdev_get_iostat RPC.
In this case, the size of the struct spdk_bdev_io_stat will be variable.
As a preparation, allocate spdk_bdev_io_stat dynamically. For the
per_channel mode, we can share bdev_ctx->stat because
spdk_bdev_get_io_stat() always overwrites stat.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I51cd550f52dc3b7d0f3f825fd48bcbeb3ecdcff2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14836
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
When use of deprecated features is encountered, SPDK now calls
SPDK_LOG_DEPRECATED(). This logs the use of deprecated functionality in
a consistent way, making it easy to add further instrumentation to catch
code paths that trigger deprecated behavior.
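A hedged sketch of how this is typically wired up; SPDK_LOG_DEPRECATED(tag)
is named in this commit, while the registration macro name and its argument
order below are assumptions:
```
#include "spdk/log.h"

/* Register the deprecation once, keyed by a tag (registration macro name and
 * arguments are assumed): tag, human-readable description, removal release,
 * and a rate limit in seconds (0 = log every hit). */
SPDK_LOG_DEPRECATION_REGISTER(my_old_feature, "my_old_feature is deprecated",
			      "v23.05", 0);

static void
my_old_feature_handler(void)
{
	/* Logs "deprecation '...' scheduled for removal in ..." and bumps the
	 * hit counter each time this deprecated path is taken. */
	SPDK_LOG_DEPRECATED(my_old_feature);
	/* ... existing behavior ... */
}
```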
Change-Id: Idfd33ade171307e5e8235a7aa0d969dc5d93e33d
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15689
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
The mobj is allocated from pdu_data_out_pool.
If pdu_data_out_pool is exhausted, then when the PDU is
polled the next time, because data_buf_len has been modified,
iscsi_pdu_payload_read() returns -1 and the connection
is released.
Signed-off-by: lizengwu <786436671@qq.com>
Change-Id: I3ee65472f7ddaa357d7952a5b734540f0bc0b216
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15626
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Previously we used this flag to avoid calling `vhost_dev_unregister`
twice in `subsystem_fini`, but the DPDK vhost library already checks for
this, so we don't actually need the flag for that purpose. However, there
is a race condition between adding a new connection and unregistering the
socket file in different threads, so as a first patch we just move the
flag to the vhost-user device, and a coming patch will use it.
Change-Id: I658712dd20331a2e2eb5f4758bf76f748036a131
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15482
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
`vhost_user_dev_unregister` will check if the device is busy,
so we don't need to check `user_dev->pending_async_op_num` here.
For `vdev->registered`, with this check here, we can remove a
device even if it didn't have a valid QEMU connection, and since
vhost-scsi supports the hotplug feature, we don't need to check this
flag either when it has a valid QEMU connection.
Change-Id: I50cdeb5ca544e2ed93a1bc99ec3da8787a9e5df5
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15481
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Feng Li <lifeng1519@gmail.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Replace comments saying that particular locks must be held with
assertions that enforce that those locks are held. Remove the comments
so that there is no chance of comments and code getting out of sync in
the future.
This also fixes a caller of bdev_close() that did not hold a required
lock.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I3a540f1ad9b9826f925c523986334aa8fcd302f2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15440
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This exercises the parts of spdk_spin_*() that are difficult to test in
unit tests. In particular, it tests multiple SPDK threads running on
different pthreads contending for a lock and it tests pollers and
messages going off CPU with a lock held.
Change-Id: I5cd6ce29c92c44ba63f47332fe339e59eed81553
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15534
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
This introduces an enhanced spinlock that adds safeguards compared to
the default pthread_spinlock_t. In particular:
- A pthread_spinlock_t is still used, but additional error checking is
performed to ensure there is no undefined behavior on relock,
unlocking when not the owner, or destroying a locked lock.
- The SPDK concurrency model allows an SPDK thread to be migrated
between pthreads. Releasing a pthread spinlock on a different thread
from where it is taken is undefined behavior. If an SPDK spinlock is
held at the time when a poller or message returns control to
thread_poll(), the program will abort.
- SPDK spinlocks can only be obtained from an SPDK thread.
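A minimal usage sketch under those constraints (spdk_spin_init/lock/unlock on
a struct spdk_spinlock, called only from SPDK threads):
```
#include "spdk/thread.h"

struct shared_counter {
	struct spdk_spinlock lock;
	uint64_t value;
};

static void
shared_counter_init(struct shared_counter *c)
{
	spdk_spin_init(&c->lock);
	c->value = 0;
}

static void
shared_counter_bump(struct shared_counter *c)
{
	/* Must run on an SPDK thread, and the lock must be released before
	 * returning control to thread_poll(), or the program aborts. */
	spdk_spin_lock(&c->lock);
	c->value++;
	spdk_spin_unlock(&c->lock);
}
```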
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I6dd6493ab5f5532ae69e20654546405a507eb594
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15277
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
There is no need to call nvme_tcp_read_pdu in a loop:
nvme_tcp_read_pdu itself has a loop in it that runs until no more data
is available, so the extra loop does nothing.
Change-Id: I1471018e396c43187d1f06bd18ce8a6846a71c94
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15139
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This is unsafe, because we touch the need_buf_* queues, which aren't
thread-safe. Also, document this requirement in
spdk_bdev_io_get_buf()'s description.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Iabc141e051c543fdd51f079ae212f69e980d8148
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15668
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Add an RPC method, trace_get_info, to show the name of the shared memory file,
the list of available trace point groups, and the mask of available trace
points for each group.
Fixes #2747
Signed-off-by: Xinrui Mao <xinrui.mao@intel.com>
Change-Id: I2098283bed454dc46644fd2ca1b9568ab2aea81b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15426
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
All the other spdk_sock_* functions return -1 and set errno
appropriately, so we should do the same in flush().
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I51cda2c51974c72e82531f06fa31ab89b2329c91
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15642
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If a lot of qpairs are connected all at once, the
RDMA optimal_poll_group logic does not work correctly,
because it only accounts for qpairs that received
their CONNECT capsule. Now that we have a counter
for a poll group's unassociated qpairs, use that value
to supplement the current io qpair count.
We can just assume for now that all of these unassociated
qpairs are io qpairs. That won't always be true, but
for purposes of picking the optimal poll group it is
sufficient.
Note that for RDMA, we could increment the counters
based on the RDMA qpair ID in the private data in the
rdmacm connect, but to keep the code simpler and common
across all transports, we defer the accounting until
after receiving the CONNECT command, so that it is
the same for all transports.
Fixes issue #2800.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5897d6ebac23d3b78b100e3fef5a7f9fb5304820
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15695
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Use a local variable to hold the qpair count.
While here, also use pg_current to get the min_value;
this is a bit simpler to read than things like
(*pg)->group.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I65771fb469f021e9e77b8a6c117841b8f4b66af5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15694
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
We make decisions on how to pick a poll group for a new
qpair by looking at each poll group's current_io_qpairs
count. But this count isn't always accurate since it
doesn't get updated until after the CONNECT has
been received.
This means that if we accept a bunch of connections
all at once, they may all get assigned the same poll
group, because the target poll group's counter doesn't
get immediately incremented.
So add a new counter, current_unassociated_qpairs,
to account for these qpairs. We protect this counter
with a lock, since the accept thread will increment
the counter, and the poll group thread will
decrement it when the qpair receives the CONNECT,
allowing us to associate it with a subsystem/controller.
If the qpair gets destroyed before the CONNECT is
received, we can use the qpair->connect_received
flag to decrement current_unassociated_qpairs.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I8bba8da2abfe225b3b9f981cd71b6f49e2b87391
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15693
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Currently we use qpair->ctrlr at qpair destroy
time to decide if we need to decrement the
qpair's poll group's qpair count. But this is
not correct - these counters get incremented
when the CONNECT is received, but qpair->ctrlr
doesn't get set until later.
So add a new connect_received bool to the spdk_nvmf_qpair.
Use this instead to determine when we should decrement
the poll group qpair counters.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I174a0fda36c4558171953bf58f2f5117bc074f76
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15692
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
At least recent Linux guest VMs send SPDK_NVME_IDENTIFY_CTRLR_IOCS as a
matter of course. While this isn't supported in lib/nvmf, it doesn't
represent an error, so reduce the log level of the message so
we don't spam the logs.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I095de3e4331b3912cbc457da6d722b9883ec7884
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15646
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
The issue is found in virtio_pci_scsi_dev_create(), whose
error path sets vdev->ctx to NULL before the
destruct operation.
Change-Id: I4ab0fbe300f7413ad4503833088856aa3f4c0734
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15676
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Optimal I/O boundary causes I/O to be split in the nvme driver. This is
a problem for writes if write_unit_size > 1 because the split I/O may
not match the write_unit_size.
Fixes: #2791
Change-Id: I437e6cb6d8e2415658d5b46539feeacb5363fd46
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15627
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The NVMf target reports copy command support if all bdevs in the subsystem
support the copy I/O type. The maximum copy size is reported for each
namespace independently in the namespace identify data. For now we support
just one source range.
Note that command support in the controller is initialized once at
controller creation. If another namespace which doesn't support the copy
command is added to the subsystem later, this will not be reflected in
the controller data structure and will not be communicated to the
initiator. An attempt to execute the copy command on such a namespace will
fail. This issue is not specific to the copy command and also applies to
the write zeroes and unmap (dataset management) commands.
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I5f06564eb43d66d2852bf7eeda8b17830c53c9bc
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14350
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The correct SPDK thread is already contained in the poll group.
Signed-off-by: Thanos Makatos <thanos.makatos@nutanix.com>
Change-Id: I4eefe2ba60c77c01a866a693bccbb8affc8262ed
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15546
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Fixes #2781
This patch fixes two issues causing a segfault on R2T:
1. The PDU buffer is allocated from immediate_data_pool, but data_buf_len is set as for data_out_pool.
2. task->desired_data_transfer_length is rewritten by iscsi_send_r2t, which causes a wrongly calculated pdu->data_buf_len.
Signed-off-by: melon.masou <melon.masou@outlook.com>
Change-Id: I151859afff7104f29ad7f0ec57a8479d88b742bd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15542
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Added a new API, 'spdk_bdev_histogram_get_channel', to get the histogram of
a specified channel for a bdev. A callback function is passed to it
to process the histogram.
Change-Id: If5d56cbb5fe6c39cda7882f887dcc9c6afa769ac
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15539
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Print out the specific values in this SPDK_ERRLOG;
this can help to find where the error is.
Change-Id: I2a38aa2d4270e0bbf554ddb348a73d40967d1b16
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15618
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Community-CI: Mellanox Build Bot
If the calloc failed, the fd was left in the fd_group.
Change-Id: Ie68426a13d342756c20315656f0309440fda6e02
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15475
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Thanos Makatos <thanos.makatos@nutanix.com>
Reviewed-by: John Levon <levon@movementarian.org>
SPDK threads generally run on dedicated cores and locks should be rarely
contended. Thus, putting a thread to sleep while waiting on a mutex does
not free up CPU cycles for other pthreads or processes. Even when
running in interrupt mode, lock contention should be low enough that
spinlocks are a net win by avoiding context switches.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I6e2e78b2835bbadb56bbec34918d998d75280dfd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15438
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Also add unit tests that explicitly test this
condition. They fail without the nvme driver changes
in this patch.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iaa369be341eb4eba394f248990e56dce001d3940
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15579
Reviewed-by: Mariusz Barczak <mariusz.barczak@intel.com>
Reviewed-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
For now, just print a loud warning when this case is
violated. We will add a hard assertion and cause the app
to exit with error status in a later release.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic9226f76a4729820f13a2728bea977b6a54f48ee
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15513
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This function can be useful to query whether a thread
has had spdk_thread_exit() called on it yet. Internally
we have both EXITING and EXITED states, so
!spdk_thread_is_running() can be used to detect a
thread that is in either of those states.
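For illustration, a small sketch of the kind of check this enables (helper
names other than the thread APIs are hypothetical):
```
#include "spdk/thread.h"

/* Hypothetical helper: only queue work on threads that have not had
 * spdk_thread_exit() called on them (i.e. not EXITING or EXITED). */
static int
send_work_if_running(struct spdk_thread *thread, spdk_msg_fn do_work_fn, void *ctx)
{
	if (!spdk_thread_is_running(thread)) {
		return -ENODEV;
	}

	return spdk_thread_send_msg(thread, do_work_fn, ctx);
}
```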
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I2f6fb024a6b1bc895fdc5132c722abc10f5d30f9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15512
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This was an accidental remnant from the original
check-in, when we did not have a clear differentiation
between the event and thread libraries.
The rocksdb plugin code will send events to an
lcore - not an SPDK thread. But originally the two
were combined through an API called spdk_allocate_thread.
Once the differentiation was clearly made, we moved to
using spdk_event_allocate() to send events to a specific
lcore, but never removed the spdk_thread.
So now let's just remove the spdk_thread_create() call since
it is not needed.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5c6a3c304b7b4183eee90038367fdea7ebd7280f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15504
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This requires creating and setting SPDK threads in the
subsystem unit tests as well.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I31acfb1d7e418f011acc9b48933032d8bf8a1c53
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15511
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Currently when a vhost-scsi controller is removed, it
calls spdk_vhost_scsi_dev_remove_tgt on all remaining
targets, and then immediately calls vhost_dev_unregister.
But this path goes into vhost_user_dev_unregister which
immediately returns with error if there are any pending
async operations - and there are since scsi_dev_remove_tgt
is asynchronous.
So instead add the vhost_dev_unregister call to
remove_scsi_tgt, so that the unregister only happens
after the last ref goes away.
This requires changing vhost_fini() to no longer
assume that spdk_vhost_dev_remove() will immediately
unregister the device, since it now happens
asynchronously. Previously vhost_fini() was making
this assumption erroneously - it would call g_fini_cb
without actually checking that the devices had been
unregistered. Because of that incorrect assumption,
we need to do both the vhost and vhost-scsi changes
in the same patch.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I9577901266975447f9acfe53475221113f02fea3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15510
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
At the end of spdk_thread_poll(), if the thread is in the EXITING state,
we call thread_exit() to see if the thread can move to the
EXITED state. If there are any pollers, io_channels,
or pending device unregistrations in progress, thread_exit()
will keep the thread in the EXITING state for this iteration.
But a thread may post messages to itself during this cleanup
process, so thread_exit() should also check if there are
any messages on its queue.
Found during testing of spdk_thread lifetime patch set.
rbd bdev module will send messages to itself like this
during cleanup. Without this change, rbd module testing
with bdevperf would cause an spdk_thread to move to
EXITED state prematurely.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ie611026a67b7fa48640ae83be03e29a9c64883a2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15533
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If a connection is established and we receive a bad
PDU before successful login, the login_timer would not
get unregistered. So ensure the login_timer is always
unregistered in _iscsi_conn_destruct().
Found with Calsoft tests during new spdk_thread_exit()
assertion testing. Lack of unregistration would result
in its associated spdk_thread being unable to exit
cleanly due to the unexpired timer.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I79d427512f7829ad76bf89155e0e14c7bce3a7d7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15499
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The "app thread" will always be the first
thread created using spdk_thread_create(). There
are many operations throughout SPDK that implicitly
expect to happen in the context of this app thread,
so by formalizing it we can start to make assertions
on this to help clarify and simplify locking and
synchronization through the code base.
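For example, code that implicitly requires app-thread context can now make
that explicit; the accessor name spdk_thread_get_app_thread() used below is
an assumption:
```
#include <assert.h>
#include "spdk/thread.h"

static void
do_app_thread_only_setup(void)
{
	/* Accessor name assumed; asserts we are running on the first thread
	 * created with spdk_thread_create(), i.e. the app thread. */
	assert(spdk_get_thread() == spdk_thread_get_app_thread());

	/* ... global initialization that must happen on the app thread ... */
}
```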
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I7133b58c311710f1d132ee5f09500ffeb4168b15
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15497
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The next patch will add a new caller of this function.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I54374c0af3a4a0fdcc5ac9ca25e2c7ef03e99829
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15576
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
`BDEV_IO_NUM_CHILD_IOV` and `BDEV_RESET_IO_DRAIN_RECOMMENDED_VALUE`
are public macro definitions without the `SPDK_` prefix, so we add the
`SPDK_` prefix to them.
Change-Id: I4be86459f0b6ba3a4636a2c8130b2f12757ea2da
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15425
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The bdev hot remove might be an async process. bdev_open will
return an error during the hot remove process. If someone invokes the
bdev_get_bdevs API when a bdev is in the middle of a hot remove
process, the spdk_for_each_bdev function will stop its loop when
bdev_open returns an error. Thus bdev_get_bdevs will only return
partial bdevs, or even return an empty list if the hot-removed bdev is
the first bdev in the loop. Now, when spdk_for_each_bdev and
spdk_for_each_bdev_leaf loop over the bdevs, if a bdev returns an
error, we skip that bdev instead of stopping the whole loop.
Signed-off-by: Peng Yu <yupeng0921@gmail.com>
Change-Id: Ib35b817e23e47569fc5762a883b4ff8e322ae173
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15322
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If the guest performs a hard shutdown we're not deleting the CQs:
nvmf_vfio_user_close_qpair calls delete_sq_done, which won't delete
the CQ because vu_ctrlr->reset_shn is false.
Signed-off-by: Thanos Makatos <thanos.makatos@nutanix.com>
Change-Id: I383fb985340a0d9d0eb7fea7403372cbdc55a089
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15387
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>