It can be useful to pass additional information about the nvmf
target to a handler for new nvmf connections. The context can be
stored in globals, as the nvmf code currently does. However, with
multiple targets, or in languages where accessing global state is
challenging (e.g. Rust), this becomes inconvenient.
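As a rough illustration (struct and callback names here are hypothetical;
the only assumption is that the new-connection callback receives an opaque
context pointer alongside the qpair):

    #include "spdk/nvmf.h"

    /* Hypothetical per-target context handed to the new-connection
     * callback instead of being read from a global variable. */
    struct accept_ctx {
        struct spdk_nvmf_tgt *tgt;        /* target owning this connection */
        struct spdk_nvmf_poll_group *pg;  /* where new qpairs are placed */
    };

    static void
    new_qpair_cb(struct spdk_nvmf_qpair *qpair, void *cb_arg)
    {
        struct accept_ctx *ctx = cb_arg;  /* no global lookup required */

        if (spdk_nvmf_poll_group_add(ctx->pg, qpair) != 0) {
            /* handle failure, e.g. disconnect the new qpair */
        }
    }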
Change-Id: Ia6a2fdba4601531822b3e5fda7ac5ab89d46f6c5
Signed-off-by: Jan Kryl <jan.kryl@mayadata.io>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/469263
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Sasha Kotchubievsky <sashakot@mellanox.com>
The previous version of this function precluded one target name from
being a leading substring of another. For example, if "nvmf_tgt_1" was
already used as a name, "nvmf_tgt_11" could not be used subsequently.
That is just an odd quirk that shouldn't be the case.
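The quirk is what you get when only the first strlen(existing) characters
of the candidate name are compared. A minimal standalone sketch of the
difference (not the actual SPDK code):

    #include <stdbool.h>
    #include <string.h>

    /* Buggy check: "nvmf_tgt_11" collides with "nvmf_tgt_1" because only
     * strlen(existing) characters are compared. */
    static bool
    name_taken_prefix(const char *existing, const char *candidate)
    {
        return strncmp(existing, candidate, strlen(existing)) == 0;
    }

    /* Fixed check: names collide only when they are exactly equal. */
    static bool
    name_taken_exact(const char *existing, const char *candidate)
    {
        return strcmp(existing, candidate) == 0;
    }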
Change-Id: Iea59b6757512f01070e48074e35a11d942e399bb
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/468522
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Functions added in this patch:
spdk_nvmf_tgt_get_name - get the human-readable name of a target.
spdk_nvmf_get_first_tgt - start iterating over the global list of targets.
spdk_nvmf_get_next_tgt - get the next target in the iteration.
These functions will facilitate the following RPC:
nvmf_get_targets - get the names of all active NVMe-oF targets.
In this series, I will also add two more RPCs, nvmf_create_target, and
nvmf_destroy_target, as wrappers around the create and destroy
functions. Since all of these changes are pretty minor and closely
related, I will just do one big changelog entry at the end.
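For illustration, roughly how the iterators would be used (assuming they
follow the usual SPDK first/next pattern and that spdk_nvmf_tgt_get_name()
returns a const string):

    #include <stdio.h>

    #include "spdk/nvmf.h"

    /* Walk every active target and print its name - roughly what an
     * nvmf_get_targets RPC handler would do. */
    static void
    print_target_names(void)
    {
        struct spdk_nvmf_tgt *tgt;

        for (tgt = spdk_nvmf_get_first_tgt(); tgt != NULL;
             tgt = spdk_nvmf_get_next_tgt(tgt)) {
            printf("%s\n", spdk_nvmf_tgt_get_name(tgt));
        }
    }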
Change-Id: Ia9f1248fbf9726fa3889998a169211fb25e724f2
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/468386
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Keeping a global discovery log page was meant to be a time-saving
mechanism, but in the current implementation it doesn't work properly
and can cause undesirable behavior and potential crashes. There are two
main problems with keeping a global log page.
1. Admin qpairs can be assigned to any SPDK thread. This means that when
multiple initiators connect to the target and request the discovery log,
they can both be running through the spdk_nvmf_ctrlr_get_log_page
function at the same time. If the discovery generation counter is
incremented while these accesses are occurring, both threads may try to
update the log at the same time. This results in both threads trying to
free the old log page (double free) and to install their own log as the
new one (possible memory leak).
2. Each host is supposed to get a unique discovery log based on the
subsystems to which it has access. Currently the code relies on whether
the discovery log page offset in the request is equal to 0 to decide
whether to build a new discovery log page or use the cached one. This is
inherently faulty because it relies on an initiator-provided value to
determine what information to serve from the log page. An initiator
could deliberately send a discovery request with an offset greater than
0 to procure most of a log page that was built for another host.
Overall, it is safest to stop caching the log page entirely and to build
a fresh, thread-local log page for each request.
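A hypothetical sketch of that safer approach (the helper and its
parameters are illustrative, not the real SPDK code): the page lives only
for the duration of one request on the handling thread, so concurrent
admin queues cannot double-free it, and a non-zero offset can never
expose a page that was filtered for a different host.

    #include <errno.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical helper: returns a newly allocated discovery log
     * filtered for one host, standing in for the real generation code. */
    void *build_discovery_log_for_host(const char *hostnqn, size_t *len);

    static int
    read_discovery_log(const char *hostnqn, uint64_t offset,
                       void *buf, size_t buf_len)
    {
        size_t log_len;
        void *log = build_discovery_log_for_host(hostnqn, &log_len);

        if (log == NULL) {
            return -ENOMEM;
        }
        if (offset < log_len) {
            size_t n = log_len - offset;

            /* copy only the requested window of this host's own log */
            memcpy(buf, (uint8_t *)log + offset, n < buf_len ? n : buf_len);
        }
        free(log);
        return 0;
    }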
Reported-by: Curt Bruns <curt.e.bruns@intel.com>
Change-Id: Ib048e26f139927d888fed7019e0deec346359582
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466839
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This function will allow applications (and RPCs)
to obtain an spdk_nvmf_tgt pointer by name.
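A minimal sketch of how callers would use it (assuming the function in
question is spdk_nvmf_get_tgt() from the public nvmf header):

    #include "spdk/nvmf.h"

    /* Resolve a user-supplied name, e.g. an RPC parameter, to a target. */
    static struct spdk_nvmf_tgt *
    rpc_lookup_target(const char *name)
    {
        struct spdk_nvmf_tgt *tgt = spdk_nvmf_get_tgt(name);

        if (tgt == NULL) {
            /* no target with that name; let the RPC report the error */
        }
        return tgt;
    }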
Change-Id: I82792e06a819e06d9fddb5429830008653d92cd1
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465349
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This will provide a unique identifier that the get and set RPC methods
can use to refer to a specific target.
Change-Id: Idd144e99e49b8d26530f60530d2e908b18fa251b
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465330
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This is necessary to allow the spdk_nvmf_tgt structure to evolve over
time without having to further change the target API.
Change-Id: Ib0f0f9b1f190913feff0229c96df4e84b1bf35f7
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465363
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
As part of moving the nvmf RPC code into the library, we will need to
make it more inclusive of use cases beyond the example SPDK nvmf_tgt
application. That application only supports a single nvmf target
structure, so many of the RPCs have this assumption built into them.
In order to enable the multi-target use case, we need a way to translate
between user-supplied RPC parameters and actual target objects in the
library.
Change-Id: I5d3745afe9c2ca1c33f6e1a1bcc2b8bb3196ccd6
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465329
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
The NVMf statistics functions use spdk_get_io_channel() to get a poll
group. This increments the I/O channel's reference count and causes
problems on application exit. spdk_put_io_channel() calls were added to
release the channel.
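The rule, as a small sketch: every spdk_get_io_channel() needs a matching
spdk_put_io_channel(), otherwise the extra reference keeps the channel
(and the poll group behind it) alive at shutdown.

    #include "spdk/thread.h"

    static void
    collect_group_stats(void *io_device)
    {
        struct spdk_io_channel *ch = spdk_get_io_channel(io_device);

        if (ch == NULL) {
            return;
        }
        /* ... read per-poll-group statistics through the channel ... */
        spdk_put_io_channel(ch);  /* drop the reference taken above */
    }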
Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Change-Id: I832d1eae346c3bc3858ed0ed063ff7a7a897a2f5
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/463389
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This patch adds an nvmf_get_stats RPC method and basic infrastructure to
report NVMf global and per-poll-group statistics in JSON format.
Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Change-Id: I13b83e28b75a02bc1dcb7b95cbce52ae10ff0f7b
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/452298
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
This patch does the following work:
1. Optimize the NVMe/TCP transport: if qpairs' sockets share the same
NAPI_ID, those qpairs will be handled by the same polling group (see
the sketch after this list).
2. Add a new connection scheduling strategy, named ConnectionScheduler,
to the configuration file. It lets users select a different scheduler
according to their needs.
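For reference, the underlying kernel mechanism looks roughly like this
(only an illustration of SO_INCOMING_NAPI_ID, not the SPDK sock-layer
code; the group-selection logic is omitted):

    #include <stdint.h>
    #include <sys/socket.h>

    /* Fetch the NAPI ID of an accepted TCP socket so that qpairs whose
     * packets arrive on the same NIC receive queue can be steered to the
     * same polling group. */
    static uint32_t
    socket_napi_id(int fd)
    {
        uint32_t napi_id = 0;
        socklen_t len = sizeof(napi_id);

    #if defined(SO_INCOMING_NAPI_ID)
        if (getsockopt(fd, SOL_SOCKET, SO_INCOMING_NAPI_ID,
                       &napi_id, &len) != 0) {
            return 0;  /* unknown; caller falls back to round-robin */
        }
    #endif
        return napi_id;
    }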
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Ifc9246eece0da69bdd39fd63bfdefff18be64132
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/454550
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
By capturing this pointer onto the stack, we inform the compiler
that we don't expect it to change. That allows the compiler to
generate more efficient code.
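A generic illustration of the pattern (not SPDK code): hoist a pointer
reached through another pointer into a local, so the compiler can treat
it as invariant instead of re-loading it around every call.

    struct qpair;
    struct request { struct qpair *qpair; };

    void step_one(struct qpair *q);
    void step_two(struct qpair *q);

    static void
    handle_request(struct request *req)
    {
        struct qpair *qpair = req->qpair;  /* loaded once, register-friendly */

        step_one(qpair);
        step_two(qpair);  /* no re-load of req->qpair in between */
    }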
Change-Id: I0f3ff9373662198e915269c4498e4902a2cdb808
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/459754
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
This signals to the compiler and analysis programs that this
won't change during iteration, so it may produce better code.
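The same idea sketched for an iteration (again generic, not the SPDK
structures): a const-qualified local tells the compiler and analyzers
that the pointer stays fixed for the whole loop.

    #include <sys/queue.h>

    struct qpair { TAILQ_ENTRY(qpair) link; };
    struct poll_group { TAILQ_HEAD(, qpair) qpairs; };

    void poll_qpair(struct qpair *q);

    static void
    poll_group_poll(struct poll_group *pg)
    {
        struct poll_group *const group = pg;  /* fixed for the whole loop */
        struct qpair *q;

        TAILQ_FOREACH(q, &group->qpairs, link) {
            poll_qpair(q);
        }
    }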
Change-Id: I478c0c9445d4ddf8a69ab1b3deaf628b82a0eaea
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/459753
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
For an nvme device, I/Os complete asynchronously. So we need to check
for outstanding I/Os before putting the I/O channel when we hot-remove
the device. We must be sure that all the I/Os have completed before we
change the sgroup->state to PAUSED, so that we can then update the
subsystem.
Fixes #615, #755
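A hypothetical sketch of the ordering (all names illustrative): defer
putting the channel and pausing the subsystem group until the outstanding
count for that channel drops to zero.

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>

    #include "spdk/thread.h"

    struct ns_channel_ctx {
        struct spdk_io_channel *ch;
        uint64_t outstanding_ios;
        bool hot_remove_pending;
    };

    static void
    io_completed(struct ns_channel_ctx *ctx)
    {
        assert(ctx->outstanding_ios > 0);
        ctx->outstanding_ios--;

        if (ctx->hot_remove_pending && ctx->outstanding_ios == 0) {
            spdk_put_io_channel(ctx->ch);  /* safe: nothing left in flight */
            /* now the sgroup can move to PAUSED and the subsystem updated */
        }
    }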
Change-Id: I0f727a7bd0734fa9be1193e1f574892ab3e68b55
Signed-off-by: JinYu <jin.yu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/452038
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The existing condition for updating the subsystem poll group's
reservation information is wrong: when a RELEASE command is received,
the reservation type may change to none, but that change was not being
saved to the subsystem's poll group.
Change-Id: Idc177a0f03fb9611d6eda1e25a5b90caaa73d1be
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450727
Reviewed-by: Liang Yan <liang.z.yan@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Now that the spdk_nvmf_subsystem_pg_ns_info data structure holds all the
reservation information from the associated namespace, the I/O
processing routine no longer needs to send a message to the subsystem's
thread to check whether an I/O command is permitted.
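Conceptually (with illustrative, not actual, structure names), the hot
path can now do a purely local check on its own thread:

    #include <stdbool.h>
    #include <stdint.h>

    /* Per-poll-group copy of a namespace's reservation state. */
    struct pg_ns_info {
        uint8_t rtype;            /* current reservation type, 0 = none */
        bool    host_registered;  /* is the requesting host a registrant? */
    };

    static bool
    io_cmd_permitted(const struct pg_ns_info *info, uint8_t opc)
    {
        (void)opc;  /* the full check also depends on the opcode */

        if (info->rtype == 0) {
            return true;  /* no reservation held, nothing to enforce */
        }
        /* Simplified: the real check evaluates the opcode against the
         * reservation type and registration state per the NVMe spec. */
        return info->host_registered;
    }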
Change-Id: Ib6be6abf7bf5f24c230dff80c163a1eb963e20d0
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448256
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The channels array in the subsystem's poll group is indexed by
nsid - 1, so renaming the previous num_channels to num_ns makes more
sense. Also embed the channel into a per-namespace data structure
here, so that it can be reused in the following patch.
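An illustrative layout (not the exact SPDK structs) showing the nsid - 1
indexing and the per-namespace entry:

    #include <stdint.h>

    #include "spdk/thread.h"

    struct pg_ns_entry {
        struct spdk_io_channel *channel;
        /* room for more per-namespace state, e.g. reservation info */
    };

    struct subsystem_pg {
        struct pg_ns_entry *ns_info;
        uint32_t num_ns;  /* renamed from num_channels */
    };

    static struct spdk_io_channel *
    get_ns_channel(struct subsystem_pg *pg, uint32_t nsid)
    {
        if (nsid == 0 || nsid > pg->num_ns) {
            return NULL;  /* nsid is 1-based */
        }
        return pg->ns_info[nsid - 1].channel;
    }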
Change-Id: If5d9aab4b1d5bcf7a3c22f29fa58d84752f0d4cc
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/446211
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This intermediate state is unused and meaningless. The qpair transitions
into this state right before calling a synchronous operation and then
transitions to active as soon as that operation completes successfully.
If the operation did not complete successfully, we were leaving qpairs
in this odd intermediate state even though, for all intents and
purposes, they had reverted to an uninitialized state. Keeping qpairs in
the uninitialized state until they have been added to a poll group
creates a meaningful distinction between states that is actionable from
the transport level.
Change-Id: I6de9bc424b393b6fff221aa2f4212aaa91488629
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443471
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The typical rdma qpair disconnect path goes through the function
_nvmf_rdma_disconnect_retry. When this function was introduced, it was
discovered that we could receive a qpair disconnect event for a given
qpair before that qpair had been assigned to a poll group. In order to
ensure that the disconnect procedure completed properly, we waited on
the current thread in _nvmf_rdma_disconnect_retry for the qpair to be
assigned a poll group before we finally disconnected. See rdma.c:2250.
Since _nvmf_rdma_disconnect_retry was not necessarily called from the
poll group's thread, we relied on the assumption that the group
variable would never be set back to NULL. See the comment at
rdma.c:2243.
However, in _spdk_nvmf_qpair_destroy we were setting the group back to
NULL. This operation can result in the following set of operations
across multiple threads that prevent a qpair from ever being fully
destroyed.
1. thread 1: receive a disconnect event - call nvmf_rdma_disconnect
2. thread 1: from nvmf_rdma_disconnect call
spdk_nvmf_rdma_qpair_inc_refcnt - setting rqpair->refcnt to 1.
3. thread 2: call spdk_nvmf_rdma_poller_poll.
4. thread 2: in spdk_nvmf_rdma_poller_poll, reap a completion with an
error status, which causes us to call spdk_nvmf_qpair_disconnect. See
rdma.c:2846.
5. thread 2: spdk_nvmf_qpair_disconnect calls _spdk_nvmf_qpair_destroy which sets
qpair->group = NULL
6. thread 1: from nvmf_rdma_disconnect we call
_nvmf_rdma_disconnect_retry, which checks whether qpair->group == NULL.
If that is the case, we assume that the qpair has not been assigned a
group yet and send ourselves a message to call
_nvmf_rdma_disconnect_retry again. See rdma.c:2253.
7. thread 2: from _spdk_nvmf_qpair_destroy we call
spdk_nvmf_transport_qpair_fini, which results in a call to
spdk_nvmf_rdma_close_qpair, which posts dummy send and recv requests to
the qpair.
8. thread 2: we call poller_poll and get completions for both the send
and recv dummy requests. This results in a call to
spdk_nvmf_rdma_qpair_destroy.
9. thread 2: spdk_nvmf_rdma_qpair_destroy checks rqpair->refcnt, and
when it sees that it is not 0 (see step 2 above), it returns without
freeing the resources. See rdma.c:629.
10. thread 1: we keep churning in _nvmf_rdma_disconnect_retry, sending
ourselves messages, because rqpair->group is going to be NULL. Thread 1
never reaches line 2257, where it would send a message to call
_nvmf_rdma_qpair_disconnect, the function that decrements
rqpair->refcnt and allows us to make forward progress on destroying the
qpair.
I encountered this issue while trying to disconnect from our target
using the kernel initiator with an x722 NIC. I think the timing of this
bug shows up with that specific configuration because some of the calls
in the disconnect path on thread 1 fail, causing it to take longer and
giving the second thread a chance to delete the qpair.
There are really two issues at play here. We don't have a single point
of entry for disconnecting RDMA qpairs, and we rely on the qpair->group
variable never being set back to NULL. This patch addresses the second
issue, and the next patch in the series addresses the first.
Change-Id: I65395d0bbb67edfa7bad2ddc70906606c3d83781
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443304
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This change was provided by GitHub user vikasbrcm to fix issue 562.
I am uploading his change to facilitate testing of the issue and
possibly get it merged before the 19.01 window closes.
Change-Id: I58fb1058f68c6c02006ceed6e577be627e6dbc09
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441611
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reason: I checked the code in the different transports; the qpair is
already freed, so we don't need to set any state.
Change-Id: I3d78c259c3f79ea4426dc9408e5c3469bc171358
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/437493
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
The ctrlr may be NULL, so we need to add a check here to prevent a
segmentation fault.
Change-Id: I6c5361cc829af065082a95df0b8cc2f8d49a6002
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/436950
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
For the TCP/IP transport, we need to remove the socket from the polling
group, since we do not want to keep the tgroup info in the NVMe/TCP
qpair; it should remain generic.
Change-Id: I4b064d8378f66ea5d91ac554fe628d9ccebd07f4
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/434128
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This option is deprecated. Also, rename the RPC and configuration
options for setting the opts to reflect that they now only set the
maximum number of subsystems.
Change-Id: Iaabcbf33dd0a0dc489d81233fda74e9e7f3e0d2e
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/430161
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
In order to prepare for multiple transports, the nvmf tgt should never
implicitly create a transport when listen is called.
Change-Id: If1286e7e3f7bce422a4acd66390852736113df7a
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/430160
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
1. nvmf: return -ENOMEM when calloc fails, for consistency within this
file.
2. thread: revise the rc condition to (rc != 0) to handle all abnormal
returns.
Change-Id: I7cccb548f30448eaa1bac1a5904c3edcad9c1208
Signed-off-by: yidong0635 <dongx.yi@intel.com>
Reviewed-on: https://review.gerrithub.io/431459
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
When we thought we could do error recovery, we differentiated between
the inactive and error states. However, that's not possible, so collapse
them back into one.
Change-Id: I57622c400378f2d4c518efbc12fb52e665a9ba4c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/430627
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
A subsystem in a poll group may have a NULL I/O channel assigned due to
a problem such as running out of resources; for example, the NVMe SSD
hardware itself has a limited number of I/O qpairs, so the subsystems in
a particular poll group could end up with zero valid channels. In this
case, creation of the associated poll group will fail, so when adding a
new qpair to the specified poll group we need a check to pick an
available poll group instead.
Change-Id: Iedee2a6375e48eb7bf899cfb0542c565c7ebd231
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.gerrithub.io/423646
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Avoid using the deprecated construct_nvmf_subsystem
when dumping configuration.
Change-Id: I908d87bdd77a8b2a8e54baeb7b73e8b52c4912ee
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/425186
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
- Add independent functions to create transport with specific opts
and add to target while maintaining backward compatibility with
current apps and rpc configuration that still use the add listener
method to create a transport.
- Add new rpc function to create transport and add to target (see the
sketch after this list).
+ Update json reporting to include new rpc function.
+ Update python scripts to support new rpc function.
+ New nvmf test script (cr_trprt.sh) to test new rpc function.
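Roughly what the new rpc does under the hood; exact SPDK signatures
differ between releases, so treat the calls below as an approximation
rather than the actual implementation:

    #include "spdk/nvmf.h"

    static void
    add_transport_done(void *cb_arg, int status)
    {
        /* report status back to the RPC caller here */
        (void)cb_arg;
        (void)status;
    }

    static void
    create_and_add_tcp_transport(struct spdk_nvmf_tgt *tgt)
    {
        struct spdk_nvmf_transport_opts opts = {0};
        struct spdk_nvmf_transport *transport;

        /* fill opts from the rpc parameters (queue depth, IO unit size,
         * in-capsule data size, ...), starting from the defaults */
        transport = spdk_nvmf_transport_create("TCP", &opts);
        if (transport == NULL) {
            return;
        }

        /* attaching the transport to the target is asynchronous */
        spdk_nvmf_tgt_add_transport(tgt, transport, add_transport_done, NULL);
    }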
Change-Id: I12d0a42e34c9edff757755f18a78b722d5e1523e
Signed-off-by: John Barnard <john.barnard@broadcom.com>
Reviewed-on: https://review.gerrithub.io/423590
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The function returns the transport ID describing the
listen address on which the connection originated.
Change-Id: Ib11cddb8ff2ceb04a5f3ce236ba96c68b7226773
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/425023
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Now that it is required to be on the same thread, the
message isn't necessary.
Change-Id: I714b77b46467dbcfa51186c8404c5976eaeea08a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/424593
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
I observed that spdk_nvmf_qpair_disconnect is only ever called
from the thread that owns the qpair - i.e. the one associated
with the poll group - with only one exception where the qpair
wasn't fully initialized. Add a check that enforces this
condition, as it will allow some major simplifications.
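In sketch form, the enforced invariant (the group/thread fields are
assumptions about the internal nvmf structures; only spdk_get_thread()
comes from the public API):

    #include <assert.h>

    #include "spdk/thread.h"

    /* Requires the internal nvmf headers for qpair->group->thread. */
    static inline void
    assert_owning_thread(struct spdk_nvmf_qpair *qpair)
    {
        assert(qpair->group == NULL ||
               qpair->group->thread == spdk_get_thread());
    }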
Change-Id: Ied434c9ea63fd4f2a6f9eacdf8f3f26a7b6bcf3f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/424591
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
This is a string name used for debugging only.
Change-Id: I9827f0e6c83be7bc13951c7b5f0951ce6c2a1ece
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/424127
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
In debug mode this will verify that the state is being set
from the correct thread only.
Change-Id: I6234299d1fcdb63cd047417b6255c91e29991242
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/423411
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
- Move most of the target opts from nvmf_tgt to nvmf_transport.
- Update transport create functions to pass in transport opts.
- When transport opts are NULL in the transport create function, use
the target opts (for backward compatibility).
- Part 1 of 2 patches. Part 2 (to follow after part 1 accepted)
will allow independent creation of transport with specific opts
while maintaining backward compatibility with current apps and
rpc configuration that still use the add listener method to
create a transport.
Change-Id: I0e27447c4a98e0b6a6c590541404b4e4be879b47
Signed-off-by: John Barnard <john.barnard@broadcom.com>
Reviewed-on: https://review.gerrithub.io/423329
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
If the spdk_nvmf_poll_group_add_subsystem() operation fails, the
subsystem still needs to initialize the related queue so that requests
arriving later can be queued properly. It also needs to correctly handle
the expected state in this failure case, so that the subsystem can be
handled properly when it is destroyed.
Change-Id: I419f2ac7164c25258c3911952c38b9433fca762b
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.gerrithub.io/422799
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The admin queue pair may get disconnected before
the controller is entirely destroyed and can't
be relied on to obtain the correct thread.
Change-Id: I5e80ef286693d53a161134610dd8354c458f8390
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/422134
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: qun wan <qun.wan@intel.com>
In RDMA, qpairs can't be removed from poll groups because
the poll group defines the completion queue. So don't
allow this operation anymore, even if it were theoretically
possible on other transports.
Change-Id: I69a3d1b336decd2d25e43ddea94f8b2095ef662f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/421174
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
When an NVMe SSD itself has a limited number of hardware I/O qpairs,
the corresponding I/O channel abstraction that upper modules use to
send I/Os down may be NULL.
Add a check for this in the NVMe-oF module and return an error if the
related I/O channel is NULL.
Change-Id: I97b799c6ecb026a01b0a414f1b49b949aa2407fd
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.gerrithub.io/416689
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>