For non-fabric controllers, the corresponding I/O qpairs are simply
re-enabled at controller reset.
This had an issue when I/O qpairs span multiple threads and a poll
group is used.
spdk_nvme_ctrlr_reconnect_poll_async() calls
nvme_transport_ctrlr_connect_qpair() with qpair->async being false.
Then nvme_transport_ctrlr_connect_qpair() calls
spdk_nvme_poll_group_process_completions() until the qpair is connected.
spdk_nvme_poll_group_process_completions() may poll other qpairs.
This may cause I/O to complete on the wrong thread.
For PCIe controllers, spdk_nvme_poll_group_process_completions() simply
calls spdk_nvme_qpair_process_completions() for each qpair.
Hence, change nvme_transport_ctrlr_connect_qpair() to call
spdk_nvme_qpair_process_completions() directly if the controller is
non-fabrics.
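As a rough sketch, using internal names from nvme_internal.h but with
the wrapper function and disconnected_qpair_cb being hypothetical, the
connect-time polling would look like:

    /* Hypothetical helper illustrating the change; not the actual code. */
    static int64_t
    connect_poll_until_done(struct spdk_nvme_ctrlr *ctrlr,
                            struct spdk_nvme_qpair *qpair)
    {
        int64_t rc = 0;

        while (nvme_qpair_get_state(qpair) == NVME_QPAIR_CONNECTING) {
            if (qpair->poll_group && spdk_nvme_ctrlr_is_fabrics(ctrlr)) {
                /* Fabrics: poll the whole group as before. */
                rc = spdk_nvme_poll_group_process_completions(
                         qpair->poll_group->group, 0, disconnected_qpair_cb);
            } else {
                /* Non-fabrics (e.g. PCIe): poll only this qpair so I/O
                 * never completes on another thread's qpair. */
                rc = spdk_nvme_qpair_process_completions(qpair, 0);
            }
            if (rc < 0) {
                break;
            }
        }
        return rc;
    }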
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ieb270c2fb154124021ef6d25577b817d05e5ca9e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14295
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
As of the previous patches, a qpair is destroyed after it is actually
disconnected.
But after the qpair is destroyed, whether it has drained is checked
using rqpair->current_num_sends and rqpair->current_num_recvs.
However, if the qpair is the last one on a poller of a poll group,
the CQ is destroyed before the drain check.
Once the CQ is destroyed, at least rqpair->current_num_recvs is no
longer updated, and we may hit the one-second timeout.
This should be avoided.
Hence, destroy the qpair after it is disconnected and drained.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ibd6c83e8a3e7b6e11e9b45cee42669da6d42a621
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14278
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
If the qpair being disconnected is the last one on a poller of a poll
group, the CQ is destroyed and the poller is released before the qpair
is actually disconnected.
This patch destroys the CQ and releases the poller only after the qpair
is actually disconnected.
One exception is when spdk_nvme_ctrlr_free_io_qpair() is called on a
connected qpair. In that case, the qpair is removed from its poll group
before it is actually disconnected, so destroy the CQ and release the
poller when the qpair is removed from the poll group.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Idf266bbb6dbb40f04ae6313db724fabf80865763
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14253
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
nvme_rdma_poll_group_put_poller() is called in two places.
For consistency, make both call sites follow the same sequence.
This will make the next patch easier: the next patch will, as much as
possible, release the poller from the poll group when the qpair is
actually disconnected.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I4178113d5277240e287e83a57e97cf32fd0f7457
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14252
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Hyper-V NVMe SSD controllers require the admin queue
size to be an even multiple of a page. Add a quirk to
adjust the admin queue size if the user overrides the
default value with something other than an even
multiple.
As part of this change, set the quirks earlier
when constructing a pcie controller, so that the
quirks value can be used in the generic
nvme_ctrlr_construct() function.
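A rough sketch of the kind of rounding involved, assuming 4 KiB pages
and 64-byte admin submission queue entries; the real quirk name and
exact alignment rule may differ:

    #include "spdk/util.h"

    /* 64 entries of 64 bytes fill one 4 KiB page. */
    #define ADMIN_SQ_ENTRIES_PER_PAGE (4096 / 64)

    /* Hypothetical adjustment: round the user-supplied admin queue size
     * up so that the queue occupies whole pages, as Hyper-V requires. */
    static uint32_t
    adjust_admin_queue_size(uint32_t requested)
    {
        return SPDK_ALIGN_CEIL(requested, ADMIN_SQ_ENTRIES_PER_PAGE);
    }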
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I417cd3cdc7e3ba512ec412f4876b0e0b7432341c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14220
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Better not to cache a value, especially when there is an error return path.
Signed-off-by: yidong0635 <dongx.yi@intel.com>
Change-Id: I3b243a66f4db9af34bc2ea01bafdac33004be128
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13650
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This was correct back when we only supported PCIe, but doesn't hold
in the newfangled world of fabrics and vfio-user.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I565edd2dab1eff862844585df8c25da508e4816d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14136
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Jacek Kalwas <jacek.kalwas@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
When we specify a source address for the admin and I/O qpairs,
rdma_resolve_addr() succeeded only for the admin qpair and failed for
all following I/O qpairs because it returned -EADDRINUSE.
To reuse the source address among multiple qpairs, set the REUSEADDR
option on each CM ID before calling rdma_resolve_addr() if a source
address is specified.
In case we miss something, still execute rdma_resolve_addr() even if
rdma_set_option() fails.
Fixes issue #2604
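For illustration, the librdmacm calls involved look roughly like this;
the helper name and error handling are simplified, and as noted above a
failed rdma_set_option() does not stop the resolve:

    #include <rdma/rdma_cma.h>

    static int
    resolve_addr_with_reuse(struct rdma_cm_id *cm_id, struct sockaddr *src,
                            struct sockaddr *dst, int timeout_ms)
    {
        int reuse = 1;

        if (src != NULL) {
            /* Let multiple qpairs share the same source address. */
            if (rdma_set_option(cm_id, RDMA_OPTION_ID, RDMA_OPTION_ID_REUSEADDR,
                                &reuse, sizeof(reuse)) != 0) {
                /* Log and fall through: still attempt the resolve. */
            }
        }

        return rdma_resolve_addr(cm_id, src, dst, timeout_ms);
    }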
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: If03f82d4499cf83c0e428a62e91c9d9e6aad28e0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14229
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Commit a119799b ("test/nvme/aer: remove duplicated changed NS list log")
changed the nvme driver to read the CHANGED_NS_LIST log page before
calling the application's AER callback (previously it would read it
after).
Commit b801af090 ("nvme: add disable_read_changed_ns_list_log_page")
added a new ctrlr_opts member that allows the application to tell the
driver not to read this log page, so that the application can read the
log page itself to clear the AEN. But we cannot add this option to the
22.01 LTS branch since it breaks the ABI. So add this API here, which
can then be backported manually to the 22.01 branch for LTS users that
require it.
Restoring the old behavior is not correct for applications that want
to consume the CHANGED_NS_LIST log page contents themselves to know
which namespaces have changed. Even if the driver reads the log page
after the application, that read could happen during a small window
between when a namespace change event has occurred and the AEN has
been sent to the host. The only safe way for the application to
consume the CHANGED_NS_LIST log page contents itself is to make sure
the driver never issues such a log page request itself.
Fixes issue #2647.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iaeffe23dc7817c0c94441a36ed4d6f64a1f15a4e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14134
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This allows mapping an nvme_request back to the
nvme_bdev_io.
This requires bumping up the max number of arguments per
tracepoint. 5 was previously chosen as max since it
exactly fit in 64 bytes (1 cacheline) when all
arguments were stored as uint64_t, but now that we
support uint32_t arguments we can afford extra
arguments when some of them are uint32_t. I've
bumped it to 8 so we can avoid having to touch
this value multiple times if we find some cases
where we need 7 or 8 args.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ie2ef5e59d10549860b47542e68c1c34efa63047f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13995
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jacek Kalwas <jacek.kalwas@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
In multi-process, we need to make sure we don't
complete a register_operation in the wrong process. So
save the pid in the nvme_register_completion structure
when it is inserted into the STAILQ, then only complete
operations where the pid matches.
Fixes issue #2630.
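A simplified sketch of the idea; the pid field is the one described
above, while the list and member names here are placeholders for the
real ones in nvme_internal.h:

    #include <sys/queue.h>
    #include <unistd.h>

    struct nvme_register_completion {
        /* command, completion and callback fields omitted in this sketch */
        pid_t pid;                                    /* owning process */
        STAILQ_ENTRY(nvme_register_completion) stailq;
    };
    STAILQ_HEAD(register_completion_list, nvme_register_completion);

    /* Producer: record the caller's pid when queuing the operation, e.g.
     *   ctx->pid = getpid();
     *   STAILQ_INSERT_TAIL(list, ctx, stailq);
     */

    /* Consumer: only complete operations that belong to this process;
     * removal and freeing are omitted here. */
    static void
    complete_register_operations(struct register_completion_list *list)
    {
        struct nvme_register_completion *ctx;
        pid_t pid = getpid();

        STAILQ_FOREACH(ctx, list, stailq) {
            if (ctx->pid != pid) {
                continue;    /* queued by another process */
            }
            /* invoke the stored callback for ctx here */
        }
    }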
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I58c995237db486fecdd89d95e9e7a64379d0b0e5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13940
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Similar to the disable_read_ana_log_page ctrlr_opt, this enables the
application to tell the NVMe driver to *not* read the CHANGED_NS_LIST
log page in response to a NS_ATTR_CHANGED AEN, so that the application
can do the read itself.
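From the application side, using the option could look roughly like
this; the connect wrapper is just for illustration:

    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *
    connect_without_driver_ns_log_read(const struct spdk_nvme_transport_id *trid)
    {
        struct spdk_nvme_ctrlr_opts opts;

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
        /* The driver will not read CHANGED_NS_LIST on a NS_ATTR_CHANGED AEN;
         * the application issues that log page read itself to clear the AEN. */
        opts.disable_read_changed_ns_list_log_page = true;

        return spdk_nvme_connect(trid, &opts, sizeof(opts));
    }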
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ie447734187d4a4cb95ceef6e0131b640b8ba5984
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14088
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Various opts structures in SPDK have a size member, to enable
ABI compatibility should fields be added in the future.
But this requires the structures to be packed, otherwise for example a
structure may be padded at the end, and a newly added field may just
consume some of that padding.
So add STATIC_ASSERTS for the current sizes in this
patch. Upcoming patches will make the structures packed
and add in reserved fields to fill in holes.
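For example, such a check looks like the following; the byte count
shown here is illustrative, not the real size:

    #include "spdk/assert.h"
    #include "spdk/nvme.h"

    /* If a field is added or padding shifts, the build fails until the
     * expected size (and any reserved fields) are updated deliberately. */
    SPDK_STATIC_ASSERT(sizeof(struct spdk_nvme_ctrlr_opts) == 608,
                       "Incorrect size");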
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I9107d01d7b533f8542385a3538894bcd9f8c465d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14086
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Community-CI: Mellanox Build Bot
spdk_nvme_qpair_process_completions() had always called
_nvme_qpair_complete_abort_queued_reqs() at its end.
However, the call was accidentally removed by commit
59c8bb527b, which was fixing another issue.
Because of this removal, aborting requests were not completed in some
error cases.
Fix the regression by restoring the call.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I0099eb7a008f823e1282576504423cdc248911d7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14045
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Avoid putting a new req on the outstanding_reqs
TAILQ until we know it can be initialized
successfully. This avoids adding to the TAILQ
only to remove it just after.
This also simplifies the outstanding_reqs TAILQ
handling, since reqs are now inserted and
removed in only one place each.
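Conceptually, the submit path moves from "insert, then remove on
failure" to the following shape; the function and member names here
are placeholders, not the actual ones:

    /* Sketch only: init_request() and the TAILQ link member name are
     * stand-ins for the real per-request setup and list linkage. */
    static int
    submit_request_sketch(struct spdk_nvme_qpair *qpair, struct nvme_request *req)
    {
        int rc;

        /* Do all initialization that can fail before the req is visible. */
        rc = init_request(req);
        if (rc != 0) {
            /* outstanding_reqs was never touched, so nothing to unwind. */
            return rc;
        }

        /* Only now is the request added to the outstanding list. */
        TAILQ_INSERT_TAIL(&qpair->outstanding_reqs, req, link);
        return 0;
    }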
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5ccc41c14abd541ffcf2a602246e0671386840c7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13991
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
We were using "TR" for "tracker" previously, but
we are tracing the nvme_requests, not nvme_trackers,
so use the right names for the trace object to avoid
confusion.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ia3886d74b162138c2cdbe0017224d9494f74966c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13990
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Jacek Kalwas <jacek.kalwas@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
ioccsz is specific for fabrics. spdk_nvme_ctrlr_is_fabrics() returns
true for custom fabrics transport. Hence we can use
spdk_nvme_ctrlr_is_fabrics() safely in nvme_ctrlr_update_nvmf_ioccsz().
Before this change, in the unit tests, ctrlr->trid.trtype was set to
zero at initialization. After this change, spdk_nvme_ctrlr_is_fabrics()
should return false for most cases. SPDK_NVME_TRANSPORT_PCIE did not
work. Hence, initialize ctrlr->trid.trtype to
SPDK_NVME_TRANSPORT_CUSTOM_FABRICS instead.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I4bedcab4a9f2876c1c9463ff10ad0966754f1713
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13948
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
rdma_reg_msgs() was replaced by ibv_reg_mr() recently to support
persistent PD per RDMA device. The only difference between
rdma_dereg_mr() and ibv_dereg_mr() is the return value and errno.
For consistency, replace rdma_dereg_mr() with ibv_dereg_mr().
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I55e0743690e74f9510863bfa122a75d0632dce4e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13949
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Get a PD for the device from the PD pool managed by the RDMA provider
when creating a QP, and put the PD when destroying the QP.
With this change, the PD is managed completely by the RDMA provider or
the hooks.
nvme_rdma_ctrlr::pd was added a long time ago but is not referenced
anywhere. Remove nvme_rdma_ctrlr::pd for cleanup and clarification.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: If8dc8ad011eed70149012128bd1b33f1a8b7b90b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13770
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
An earlier commit added ctrlr_ready into struct
spdk_nvme_transport_ops. However, the major SO
version was not increased.
Fixes: 3dd0bc9e (nvme: Add transport controller ready step)
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Id903634f9aaf5bdaa62fd30e92a4fb39a985b86f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13981
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This is another preparation to create and use ibv_context and pd.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Id594fa1ccb2daf535b1aaaef0a397bda2ec98578
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13710
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The following patches will create and use ibv_context and pd
explicitly instead of using default ibv_context and pd created
by rdmacm.
As a preparation, pass pd instead of cm_id to nvme_rdma_reg_mr().
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Ifdcd18ed363b8ba4a23a920bf3559237e38821c6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13599
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
If Controller Fatal Status (CFS) bit is set, there's no point in waiting
for CSTS.RDY and the only way to move forward with the initialization is
to perform a controller reset.
This fixes issues with test/nvme/sw_hotplug.sh when running under qemu.
It seems that during that test, qemu marks the emulated NVMe drives as
fatal, so if we didn't check CSTS.CFS, the initialization would time
out.
Fixes #2201.
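The check amounts to reading CSTS during the ready-wait and bailing out
when CFS is set, roughly:

    #include "spdk/nvme_spec.h"

    /* Sketch: if the controller reports a fatal status while we wait for
     * CSTS.RDY, stop waiting and go perform a controller reset instead. */
    static bool
    ready_wait_should_abort(const union spdk_nvme_csts_register *csts)
    {
        return csts->bits.cfs != 0;
    }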
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I97712debc80c3dd6199545d393c0f340f29d33b2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13820
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
The function `nvme_ctrlr_init_ana_log_page` is exactly the same as
`nvme_ctrlr_update_ana_log_page`, so remove it.
Change-Id: I1ad51635f47cf95cfa6de217e3b9144885c3b74e
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13652
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The original implementation creates pollers and CQs for all discovered
devices at poll group creation. A device (ibv_context) that has no
references, i.e. no QPs, may be removed from the system, and its
ibv_context may be closed by rdma_cm. In this case we end up with a CQ
that refers to a closed ibv_context, and ibv_poll_cq may crash.
With this patch pollers are created on demand when we create the first
QP for a device. When there are no more QPs on the poller, we destroy
the poller. This also helps to avoid polling CQs that don't have any
QPs attached.
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I46dd2c8b9b2902168dba24e139c904f51bd1b101
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13692
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Both PCIE and VFIO-USER can use the same APIs to get I/O queue
pair statistics, so merge them here.
Change-Id: Iadf9ead2bd5abaf11d2ef5d1884acb67369f85bb
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13538
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
When a secondary process exits without deleting its allocated I/O
queue pairs, a new secondary process will clean up the previously
allocated queue pairs. A segmentation fault then happens because the
`stat` field inside the I/O queue pair data structure can't be
accessed in this cleanup process.
Fixes issue #2565.
Change-Id: I01a037642683901941b5268ac20d17b78b6c6350
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13537
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
It is possible to get/set registers from any thread, and during
register processing we poll the admin qpair to get a completion.
At the same time, another thread can also poll the admin qpair,
which can lead to undefined behavior.
This patch fixes an issue when bdev_nvme is configured
with io_timeout. If remote target becomes unresponsive
(e.g. due to link down), IO timeout occurs and bdev_nvme
tries to get csts registers in timeout_cb. At the same
time another thread can process adminq, so we may have
2 simultaneous adminq polls. If admin qpair is disconnecting
at that time (RDMA transport) we may destroy resources
twice from different threads.
We don't see a problem with the set_regs function, but it
won't hurt to lock the mutex in set_regs as well.
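The shape of the fix, sketched with the controller-level mutex and a
placeholder for the actual register-read-and-poll logic:

    #include <pthread.h>

    /* Sketch only: serialize the admin-queue polling done on behalf of a
     * register read with any other thread processing the admin qpair. */
    static int
    get_register_locked(struct spdk_nvme_ctrlr *ctrlr, uint32_t offset,
                        uint64_t *value)
    {
        int rc;

        pthread_mutex_lock(&ctrlr->ctrlr_lock);
        /* placeholder for issuing the register read and polling the adminq */
        rc = read_register_and_poll_adminq(ctrlr, offset, value);
        pthread_mutex_unlock(&ctrlr->ctrlr_lock);

        return rc;
    }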
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I7ec3984d25d0249061005533d13b22315b44ddf2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13687
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
In the multi-process case, a process may call `spdk_nvme_ctrlr_free_io_qpair` on
a foreign I/O qpair (i.e. one that this process did not create) when that
qpair's owning process exits unexpectedly.
The variable `qpair->poll_group` isn't multi-process safe, so we can't use it
in `spdk_nvme_ctrlr_free_io_qpair` and the related transport poll group APIs.
Change-Id: Ic13a6a2c7d760477be5be5a56a45caa2b5518717
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13573
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
All of the callers immediately put the req right
after the nvme_rdma_req_complete call, so just move
the put into that function instead.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic370cf689850924e0c902a6071af8b3a7ed58c0b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13527
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
This follows similar logic in the pcie and tcp
completion paths, including omitting error
messages when aborting aers by adding a print_on_error
parameter to the completion function.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Id558d0af2cdd705dfb60abb842bd567a0949ccce
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13525
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
By default, the SPDK nvmf target reports vid==INTEL,
which results in the SPDK nvme driver trying to enable
Intel vendor-specific log page. Fix this by trying to
enable those log pages only for PCIE transport
controllers.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I78ebf365d4fa6295d1f610697266c3ead765988d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13524
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
We can only get to this code path if the controller
has vid==INTEL, so make that more clear by changing
the check to an assert.
Remove unit test that calls
nvme_ctrlr_construct_intel_support_log_page_list()
for a controller that is not VID==INTEL - this is
no longer valid.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I3b58451bc95992bf641e7452f0ac4c2bac9fe31c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13523
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
This follows similar logic in the pcie completion
path, including omitting error messages when aborting
aers by adding a print_on_error parameter to the
completion function.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I96df72280bb8fcbee3847fdc27f38e14a1bf3251
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13522
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
nvme_tcp_req_complete_safe caches values on
the request, so that we can free the request *before*
completing it. This allows the recently completed
req to get reused in full queue depth workloads, if
the callback function submits a new I/O.
So do this in nvme_tcp_req_complete as well, to make
all of the completion paths identical. The paths
that were calling nvme_tcp_req_complete previously
are all non-fast-path, so the extra overhead is
not important.
This allows us to call nvme_tcp_req_complete from
nvme_tcp_req_complete_safe to reduce code duplication,
so do that in this patch as well.
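The caching pattern being unified looks roughly like this; request
bookkeeping such as nvme_free_request is omitted, and the member names
are the assumed ones:

    static void
    tcp_req_complete_sketch(struct nvme_tcp_qpair *tqpair,
                            struct nvme_tcp_req *tcp_req,
                            const struct spdk_nvme_cpl *rsp)
    {
        struct nvme_request *req = tcp_req->req;
        struct spdk_nvme_cpl cpl_copy = *rsp;        /* cache before freeing */
        spdk_nvme_cmd_cb cb_fn = req->cb_fn;
        void *cb_arg = req->cb_arg;

        nvme_tcp_req_put(tqpair, tcp_req);           /* slot is reusable now */

        /* At full queue depth, a callback that submits new I/O can pick up
         * the request we just returned to the free list. */
        cb_fn(cb_arg, &cpl_copy);
    }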
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I876cea5ea20aba8ccc57d179e63546a463a87b35
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13521
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
All callers of nvme_tcp_req_complete call
nvme_tcp_req_put immediately afterwards, so move
this call into nvme_tcp_req_complete.
This will help enable some improvements in later
patches.
Note that nvme_tcp_req_complete_safe has this same
functionality open coded right now, but that will
get changed in the next patch. It calls
nvme_tcp_req_put immediately after the TAILQ_REMOVE,
so do that in nvme_tcp_req_complete as well.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I368122bc49a7f0772e3011e5427e3c43618380eb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13520
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
nvme_qpair_abort_all_queued_reqs() aborts error injections, queued
requests, aborting queued requests, and outstanding requests (aborting
outstanding requests depends on the transport). However, it did not
abort queued aborts.
Include nvme_ctrlr_abort_queued_aborts() in
nvme_qpair_abort_all_queued_reqs() so that the function really does
what its name indicates.
nvme_ctrlr_abort_queued_aborts() has already been called in a few
places, but we do not care about the duplication.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I19102cc6603a72ce5c398a7947cb4d606b692991
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12849
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Vasuki Manikarnike <vasuki.manikarnike@hpe.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
In SPDK, declarations have the return type on the same line. Definitions
have the return type on a separate line. Astyle has an option for
enforcing this. Unfortunately, it seems to have two bugs:
1) It doesn't work correctly at all on C++ files.
2) It often fails on functions that return enums, or long type names
Deal with 1) by adjusting the check_format.sh script to only tell astyle
to fix return type line breaks for C files and not C++. Deal with 2) by
adding a few typedefs to work around the problem.
Change-Id: Idf28281466cab8411ce252d5f02ab384166790c6
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13437
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
We had not incremented ctrlr->outstanding_aborts when aborting a
request in ctrlr->queued_aborts, and so ctrlr->outstanding_aborts
became negative. Fix the bug in this patch. Additionally, add an assert
to check that ctrlr->outstanding_aborts does not go negative.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I58090286f070ba854bdea87f0f8ecb7810890338
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13452
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Fully asynchronous ctrlr detach (b6ecc3729) introduced a register
operation state machine that waits for the operation to complete. When
the controller failed to initialize, `nvme_ctrlr_fail` set the qpair
state to `DISCONNECTED` immediately, causing qpair completion processing
to never complete the register operations and therefore preventing the
async detach from finishing.
Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: I205c5157b8ea7b4535f98ff4052414310e421446
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12858
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
When running the perf test, sometimes after the CONNECT request's
response was received and processed, the qpair still failed to change
from state CONNECTING to CONNECTED. When nvme_fabric_qpair_connect_poll
-> nvme_wait_for_completion_robust_lock_timeout_poll processes the
CONNECT request's response, the request may not yet have been finished
in sock_check_zcopy even though its response has been received and
processed, which means tcp_req->ordering.bits.send_ack is still 0 and
status->done is still false. After the request is completed in
sock_check_zcopy, we need to poll this qpair again so that its state
can move to CONNECTED.
Similarly, if the icreq's response is received and processed before
nvme_tcp_send_icreq_complete is called by _sock_check_zcopy, the qpair
will be stuck in CONNECTING and never proceed to send the CONNECT
request. We also need to put it in pgroup->needs_poll to fix that.
I can reproduce this bug with the following configuration.
target: 16 NVMe SSDs, running on 20 cores;
initiator: randread test using nvme perf with 32 CPU cores and
zero-copy enabled.
The error doesn't always occur. The CONNECT failure happens about once
in ten runs with the following log, and the icreq failure is less
frequent, showing only the target side's "keep alive timeout" log.
Error reported on the initiator side:
Initialization complete. Launching workers.
[2022-05-23 14:51:07.286794] nvme_qpair.c: 760:spdk_nvme_qpair_process_completions:
*ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
ERROR: unable to connect I/O qpair.
ERROR: init_ns_worker_ctx() failed
And target side shows:
Disconnecting host from subsystem nqn.2016-06.io.spdk:cnode2 due to keep alive timeout
Change-Id: Id72c2ffd615ab73c5fc67d36c3ff8b730cebcef7
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12975
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Many open source projects have moved to using SPDX identifiers
to specify license information, reducing the amount of
boilerplate code in every source file. This patch replaces
the bulk of SPDK .c, .cpp and Makefiles with the BSD-3-Clause
identifier.
Almost all of these files share the exact same license text,
and this patch only modifies the files that contain the
most common license text. There can be slight variations
because the third clause contains company names - most say
"Intel Corporation", but there are instances for Nvidia,
Samsung, Eideticom and even "the copyright holder".
A bash script was used to automate replacement of the license text
with the SPDX identifier; the script is checked into scripts/spdx.sh.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iaa88ab5e92ea471691dc298cfe41ebfb5d169780
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12904
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: <qun.wan@intel.com>