This patch dumps the request states if the tqpair's resources are not
freed.
Change-Id: Ic4780662558d73267d4f1ebabfc22780fafec4ec
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440846
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
This is shared between all currently valid transports. Just move it up
to the generic structure. This will make implementing more shared
features on top of this a lot easier.
Change-Id: Ia896edcb7555903ba97adf862bc8d44228df2d36
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440416
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This patch series is geared at solving github issue 555.
Ultimately the goal of this series is to add a per-poll-group buffer
cache to prevent starvation.
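As a rough sketch of the idea (hypothetical structure and field names; only
spdk_mempool_get and the sys/queue.h macros are real APIs), buffers are served
from a small per-poll-group cache first and only fall back to the shared pool
when the cache is empty:

    #include <stdint.h>
    #include <sys/queue.h>
    #include "spdk/env.h"

    /* Hypothetical cache entry; a real implementation would reuse the buffer
     * memory itself as the list node. */
    struct buf_cache_entry {
            STAILQ_ENTRY(buf_cache_entry) link;
    };

    struct poll_group_buf_cache {
            STAILQ_HEAD(, buf_cache_entry) cache;  /* buffers reserved for this group */
            uint32_t count;                        /* buffers currently cached */
            struct spdk_mempool *pool;             /* shared transport-wide pool */
    };

    static void *
    group_get_buffer(struct poll_group_buf_cache *grp)
    {
            struct buf_cache_entry *entry = STAILQ_FIRST(&grp->cache);

            if (entry != NULL) {
                    /* Serve from the reserved cache so this group cannot be
                     * starved by other poll groups draining the shared pool. */
                    STAILQ_REMOVE_HEAD(&grp->cache, link);
                    grp->count--;
                    return entry;
            }
            /* Fall back to the shared pool; may return NULL under pressure. */
            return spdk_mempool_get(grp->pool);
    }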
Change-Id: I8ddaa47487665c2f9adce2109eb71b8fa71a7927
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439415
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This is an attempt to clean up requests sitting in the
waiting_for_buffer state before destroying the qpair for good.
Change-Id: I8ae047e4d7fd01f30419ae346e4da49355dc033d
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440127
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This allows us to drain all of the pending requests from the qpairs
before we destroy them, preventing them from being picked up on
subsequent process_pending polls.
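A minimal sketch of such a drain loop, assuming hypothetical request and queue
types; the point is only that requests owned by the dying qpair are unlinked
before the qpair memory goes away:

    #include <stddef.h>
    #include <sys/queue.h>

    struct qpair;   /* opaque here; only used for pointer comparison */

    /* Hypothetical request layout used only to illustrate the drain loop. */
    struct pending_req {
            struct qpair *qpair;
            TAILQ_ENTRY(pending_req) link;
    };

    TAILQ_HEAD(pending_queue, pending_req);

    static void
    drain_pending_for_qpair(struct pending_queue *pending, struct qpair *qpair)
    {
            struct pending_req *req = TAILQ_FIRST(pending);
            struct pending_req *next;

            while (req != NULL) {
                    next = TAILQ_NEXT(req, link);
                    if (req->qpair == qpair) {
                            /* Remove before the qpair is destroyed so later
                             * process_pending polls never see this request. */
                            TAILQ_REMOVE(pending, req, link);
                            /* complete or free the request here */
                    }
                    req = next;
            }
    }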
Change-Id: I149deff437b4c1764fabf542cdd25dd067a8713a
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440428
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Due to the qpair timeout handling refactoring, the qpair destruction
code was removed, and this patch is submitted to address that issue.
With this patch, we can detect the socket close of the fd from the
initiator and correctly free the qpair-related resources (e.g., pid)
managed by the nvmf layer. Otherwise, the initiator thinks the
qpair-related resources are freed while they are not freed on the
target side.
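A generic POSIX sketch of the detection (not the SPDK sock API): a recv() of 0
on the qpair's fd means the initiator closed the connection and the target
should free the qpair:

    #include <errno.h>
    #include <sys/types.h>
    #include <sys/socket.h>

    /* Returns 1 if the initiator closed the connection, 0 otherwise, -1 on error. */
    static int
    peer_closed(int fd)
    {
            char byte;
            ssize_t rc = recv(fd, &byte, sizeof(byte), MSG_PEEK | MSG_DONTWAIT);

            if (rc == 0) {
                    return 1;   /* orderly shutdown: tear down the qpair now */
            }
            if (rc < 0 && (errno == EAGAIN || errno == EWOULDBLOCK)) {
                    return 0;   /* no data right now, connection still alive */
            }
            return rc > 0 ? 0 : -1;
    }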
Change-Id: Ia2de07bd849fa5d3bc0e0e0d4941464dfd16d266
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440242
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The purpose of this patch is to remove the duplicated code
used in spdk_nvmf_rdma_request_free.
Change-Id: I3f74466a7ec788000eff9c2a75c9ea2cacaf5cc2
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/439942
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The nsid field can be used for per-namespace reservation
notifications.
Change-Id: Ia7212020ec893ea367afe79933e1629895fe41b8
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439930
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
We observed some issues related to AER in the test pool, which report
that the subsystem is not ready. So change the check to be more
accurate: we only reject requests while the subsystem is in the
inactive or deactivating state. For the other states, we can still
queue the requests.
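A sketch of the relaxed check, with assumed state names mirroring the
subsystem state machine:

    #include <stdbool.h>

    /* Assumed state names, mirroring the subsystem state machine. */
    enum subsystem_state {
            SUBSYSTEM_INACTIVE,
            SUBSYSTEM_ACTIVATING,
            SUBSYSTEM_ACTIVE,
            SUBSYSTEM_PAUSING,
            SUBSYSTEM_PAUSED,
            SUBSYSTEM_RESUMING,
            SUBSYSTEM_DEACTIVATING,
    };

    static bool
    can_queue_request(enum subsystem_state state)
    {
            /* Only reject while the subsystem is truly gone or going away;
             * transitional states such as pausing/resuming still queue requests. */
            return state != SUBSYSTEM_INACTIVE && state != SUBSYSTEM_DEACTIVATING;
    }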
Change-Id: Ic041298dfc5f7d7bfab5f5e5314ade377273df32
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439797
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
When the host connects to the target and performs I/O, pressing
Ctrl+C crashes the target. The issue is that rqpair->qpair.group is
NULL.
Change-Id: Id36cfac2be9abc707bf75a2e1ddb3f414610b6f1
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/437232
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This operation is not attached to a send request, so we need to put the
request into the completed state right away since there is no send
associated with it during the draining process.
Change-Id: I294f99950b00a584d8940bb4f93ac046c478d3b3
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439437
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
We found that the ibv state value may be invalid, so we validate the
state value before using it. An invalid state probably indicates a
hardware issue, which makes the processing flow unpredictable.
Fix GitHub issue #508.
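A hedged libibverbs sketch of the idea: query the state and refuse to trust
any value outside the defined enum range:

    #include <stdio.h>
    #include <infiniband/verbs.h>

    /* Returns the queried state, or IBV_QPS_ERR if the value is out of range. */
    static enum ibv_qp_state
    query_qp_state_checked(struct ibv_qp *qp)
    {
            struct ibv_qp_attr attr;
            struct ibv_qp_init_attr init_attr;

            if (ibv_query_qp(qp, &attr, IBV_QP_STATE, &init_attr) != 0) {
                    return IBV_QPS_ERR;
            }
            if ((int)attr.qp_state < IBV_QPS_RESET || (int)attr.qp_state > IBV_QPS_ERR) {
                    /* An out-of-range value most likely indicates a hardware
                     * issue; treat the qpair as broken instead of trusting it. */
                    fprintf(stderr, "unexpected ibv qp state %d\n", (int)attr.qp_state);
                    return IBV_QPS_ERR;
            }
            return attr.qp_state;
    }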
Change-Id: I213f4d684b103cce7bc072aecd591e2c491e0596
Signed-off-by: JinYu <jin.yu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/436920
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
If this happens, we have something going seriously wrong and we need as
much debug information as we can get.
Change-Id: I305512790461443316b9f231fa2afeb69593af1b
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/438097
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
We do not want to present subsystems that are not ready.
Change-Id: I7f5c171fbac4c31d839421e37e93e62569c0e87a
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/437222
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
The minimum required number of data buffers only needs to be large
enough to complete one RDMA operation (RDMA read/write).
Change-Id: I44eb51db279fc055f687eb78b6a642dbb5cb23f3
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/437808
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Previously, we allocated the buffers according to the MaxQueueDepth
value; however, this is not a good way for customers to configure it.
We should provide a shared buffer count configuration for the
transport.
Change-Id: Ic6ff83076a65e77ec7376688ffb3737fd899057c
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/437450
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This makes the timeout check for each qpair in the group efficient,
so we can scale when there are many qpairs in the group.
Change-Id: I75c29a92107dc32377a2ef7edb5ac92868f1c5df
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/435277
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reason: I checked the code in the different transports; the qpair is
already freed, so we do not need to set any state.
Change-Id: I3d78c259c3f79ea4426dc9408e5c3469bc171358
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/437493
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Remove the unnecessary fields in spdk_nvmf_tcp_transport.
Change-Id: I632608ba654b30f3511f5e1d925c6743c9100365
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/437271
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
At least in the case of the RDMA transport, poll_group_create (spdk_nvmf_rdma_poll_group_create)
can return an error (NULL).
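A small sketch of the required handling, using a hypothetical wrapper around
the per-transport callback:

    #include <stddef.h>

    struct transport;
    struct transport_poll_group;

    /* Stand-in for the per-transport callback, e.g.
     * spdk_nvmf_rdma_poll_group_create, which may legitimately return NULL. */
    struct transport_poll_group *transport_poll_group_create(struct transport *t);

    static struct transport_poll_group *
    add_transport_to_poll_group(struct transport *t)
    {
            struct transport_poll_group *tgroup = transport_poll_group_create(t);

            if (tgroup == NULL) {
                    /* Creation failed (e.g. device resources exhausted); the
                     * caller must unwind instead of dereferencing the group. */
                    return NULL;
            }
            /* link tgroup into the generic poll group here */
            return tgroup;
    }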
Change-Id: If1576b3515e7f9ede76af08bfa6b1c8399dcda09
Signed-off-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Reviewed-on: https://review.gerrithub.io/436887
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The check of the QP reference counter in the RDMA QP destroy function
was wrong, so QP resources were never released.
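A sketch of the intended reference counting, with a hypothetical qpair
wrapper: the QP is destroyed only when the count actually drops to zero:

    #include <assert.h>
    #include <stddef.h>
    #include <infiniband/verbs.h>

    /* Hypothetical qpair wrapper used only to show the reference counting. */
    struct rdma_qpair {
            struct ibv_qp *qp;
            unsigned int refcnt;   /* outstanding users of this QP */
    };

    static void
    rdma_qpair_put(struct rdma_qpair *rqpair)
    {
            assert(rqpair->refcnt > 0);
            if (--rqpair->refcnt == 0) {
                    /* Last reference dropped: only now is it safe to release
                     * the QP and its resources. */
                    ibv_destroy_qp(rqpair->qp);
                    rqpair->qp = NULL;
            }
    }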
Change-Id: I6ab0ce39452e8263f89589d138c90f749516ebb1
Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Signed-off-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Reviewed-on: https://review.gerrithub.io/436974
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
The ctrlr may be NULL, so we need to add a check here
to prevent a segmentation fault.
Change-Id: I6c5361cc829af065082a95df0b8cc2f8d49a6002
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/436950
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
For the TCP/IP transport, we need to remove the socket from the
polling group, since we do not want to keep the tgroup info in the
NVMe/TCP qpair; it should stay generic.
Change-Id: I4b064d8378f66ea5d91ac554fe628d9ccebd07f4
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/434128
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Since the recv state handling is already done correctly, we do not
need this check any more.
Change-Id: Id71ab2e0ef60be302f8cf6ea776259d7312663ec
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/436896
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This is a failsafe for finding and reporting data buffers that span
multiple Memory Regions. These errors should never be triggered, but
finding and reporting them will help with debugging.
Change-Id: I3c61e3cc510f5a36039fc1815ff0de45fce794d5
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/436054
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Discover commands previously blindly used the nvmf_req->data structure.
This only works if the entire command fits in a single contiguous
buffer. Commit 1d9be84bfd changed the default buffer size such that
this would become a problem for as few as 8 subsystems.
Fixes GitHub issue 525.
This change may also help prevent data corruption, as we were copying
up to nvmf_req->length bytes into the buffer. For requests with
multiple data buffers, this could cause us to copy past the end of
that buffer.
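A sketch of the safer copy, scattering the generated log page across the
request's iovecs and clamping to both the source length and each buffer's
size (generic helper, not the SPDK function itself):

    #include <string.h>
    #include <sys/uio.h>

    /* Copy src into the request's iovecs, never writing past any buffer or
     * reading past the source; returns the number of bytes actually copied. */
    static size_t
    copy_to_iovs(struct iovec *iovs, int iovcnt, const void *src, size_t src_len)
    {
            size_t off = 0;
            int i;

            for (i = 0; i < iovcnt && off < src_len; i++) {
                    size_t len = iovs[i].iov_len;

                    if (len > src_len - off) {
                            len = src_len - off;   /* clamp to the source */
                    }
                    memcpy(iovs[i].iov_base, (const char *)src + off, len);
                    off += len;
            }
            return off;
    }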
Change-Id: I788259da988b2458f57ee2795e1c5d3ced8803dd
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/435544
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The purpose of this patch is to fix the case where no data buffer is
allocated; the previous method of setting the recv pdu state was
wrong. The reasons are:
1. When there is no data buffer allocated, we still need to handle
the incoming pdu, which means we should switch the pdu recv state
immediately.
2. When a buffer is available, we resume the request handling with the
allocated buffer; at that time we should not switch the pdu receiving
state of the tqpair.
Change-Id: I1cc2723acc7b0a17407c3a2e6273313a4e612916
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/436153
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The usage of this list duplicates the tqpair's
state_queue[TCP_REQUEST_STATE_DATA_PENDING_FOR_R2T] list, so remove
it.
Change-Id: I7a67a5c8049bb9492bf97e0d60e2040a29b0a7e4
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/436274
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Fix the issue on both the target and host sides.
Change-Id: I1bf31072b2164a3035b443fe6c5418a6a7829d81
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/436099
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Previously, this field was used to optimize the code. When we receive
the capsule cmd pdu, we need to allocate the related buffer if there
is a read or write request. If the related buffer is not available, we
cannot enter the next pdu handling phase, so we used this field as a
marker. After carefully checking the code, I think we can use the
tcp_req associated with the pdu instead, which is just as efficient.
Change-Id: Ic1634d706dd40a706269bce199bf6031ea0462c0
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/435995
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
For NVMe-oF, the extended host identifier is used, which is exactly
the same size as a UUID, so using the UUID data structure here makes
sense. For the NVMe-oF reservation feature, the host identifier needs
to be kept with each registrant, and using spdk_uuid_compare becomes
straightforward.
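A sketch of how the lookup becomes straightforward, assuming a hypothetical
registrant record; spdk_uuid_compare is the real SPDK API:

    #include <stddef.h>
    #include <sys/queue.h>
    #include "spdk/uuid.h"

    /* Hypothetical registrant record keyed by the 16-byte host identifier. */
    struct registrant {
            struct spdk_uuid hostid;
            TAILQ_ENTRY(registrant) link;
    };

    TAILQ_HEAD(registrant_list, registrant);

    static struct registrant *
    find_registrant(struct registrant_list *regs, const struct spdk_uuid *hostid)
    {
            struct registrant *reg;

            TAILQ_FOREACH(reg, regs, link) {
                    if (spdk_uuid_compare(&reg->hostid, hostid) == 0) {
                            return reg;
                    }
            }
            return NULL;
    }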
Change-Id: Ib6ffaa92fab5e0ae5037682be14fcc415f9714d7
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/436302
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: Jim Harris <james.r.harris@intel.com>
Since we use an aligned buffer, I think the error handling path here
is not correct; the address is wrong.
Change-Id: I5bcb7f050199496423f861fd6aea65e0fe48c804
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/435992
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Add a check which will be required for the upcoming unit tests.
Change-Id: Ib1987fef914e6546f2bdbacd23bf9bb6005b8155
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/435197
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This notice was scaring a lot of people because every time we
disconnect a qpair it tells the user that the qpair is entering an
error state. That is part of the normal state flow of qpairs during
disconnect, but it makes it seem like something is going wrong.
Change-Id: I776e71db2b24fa963113fee88b5cf02c0820f171
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/435555
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Previously, if you wanted to know which mask bit is used for a
specific trace group, the only way was to check the source code. Now
each trace group is listed with its trace tpoint group mask bit in the
usage message.
Change-Id: I7a85fe9c0885f1919f6ffbdc97dab81f1986fb07
Signed-off-by: Liu Xiaodong <xiaodong.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/435448
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This is necessary to confirm that a buffer that spans a 2 MB boundary
is still in a single MR.
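A sketch of that confirmation, with translate() standing in for the real
registration lookup: if the buffer crosses a 2 MB boundary, both ends must
resolve to the same MR:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <infiniband/verbs.h>

    #define SIZE_2MB (2ULL * 1024 * 1024)

    /* translate() stands in for the registration lookup (virtual address -> MR). */
    static bool
    buffer_in_single_mr(void *buf, size_t len,
                        struct ibv_mr *(*translate)(uint64_t vaddr))
    {
            uint64_t start = (uint64_t)(uintptr_t)buf;
            uint64_t end = start + len - 1;

            if ((start & ~(SIZE_2MB - 1)) == (end & ~(SIZE_2MB - 1))) {
                    return true;   /* within one 2 MB page, trivially one MR */
            }
            /* Crosses a 2 MB boundary: both ends must resolve to the same MR. */
            return translate(start) == translate(end);
    }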
Change-Id: If0d14e514ab2197a0d2e3af4f565f56d50591210
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/435179
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
The NVMf target can't support the extended LBA format for now, so
print an error log for NVMe backend devices with an extended LBA
format.
Fixes issue #497.
Change-Id: Idda76ba934dd0eb45f92ae22b0b71398b3ae69dd
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/432799
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
This is the first patch to follow the NVMe over Fabrics spec and
implement the NVMe/TCP transport. It can be divided into work on the
host and target sides:
Host side: add the TCP/IP transport in the nvme lib (lib/nvme).
Target side: add the TCP/IP transport in the nvmf lib (lib/nvmf).
Change-Id: Idc4f93750df676354f6c2ea8ecdb234e3638fd44
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/425191
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
We have historically conflated SPDK_NVMF_MAX_SGL_ENTRIES and the maximum
number of SGEs associated with a wr data object. For now these are the
same thing, but there should be nothing tying the number of NVMe request
SGL elements to the number of rdma request wr sgl elements.
Also, clarify the rx_sge and tx_sge enums to reflect the actual maximum
number of SGEs associated with either the send or the receive queue.
This change doesn't actually modify these values, but sets us up to do
things like split the data in an NVMe SGE into multiple WR SGEs in case
the buffer associated with the NVMe SGE is not contained in a single
RDMA mr.
We also need to store these values in the qpair for later usage.
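A sketch of the decoupling with illustrative constants and a hypothetical send
descriptor; the WR-level SGE array is sized independently of the NVMe SGL
limit:

    #include <infiniband/verbs.h>

    /* Illustrative constants; the real values live in the rdma transport. */
    #define NVMF_MAX_SGL_ENTRIES 16                      /* NVMe request SGL elements */
    #define NVMF_MAX_TX_SGE      NVMF_MAX_SGL_ENTRIES    /* send-queue WR SGEs */
    #define NVMF_MAX_RX_SGE      2                       /* recv: command + in-capsule data */

    /* Hypothetical per-request send descriptor sized for the WR-level maximum,
     * so one NVMe SGL element can later be split across several WR SGEs. */
    struct rdma_request_send {
            struct ibv_send_wr wr;
            struct ibv_sge     sgl[NVMF_MAX_TX_SGE];
    };

    static void
    request_send_init(struct rdma_request_send *req)
    {
            req->wr.sg_list = req->sgl;
            req->wr.num_sge = 0;   /* filled in per I/O, independent of the NVMe SGL count */
    }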
Change-Id: Iff3756fc72787a4b72a99b2bdf90bf486a8010fa
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/433196
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Having the buffers be the same size as the maximum xfer size doesn't do
us any favors. Make these buffers a ratio of the maximum transfer size
and the number of supported nvmf SGLs.
Also configure the number of nvmf request iovs to correspond with this
new ratio.
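Illustrative arithmetic for the ratio (numbers assumed, not taken from the
code): with a 128 KiB maximum transfer and 16 supported SGLs, each pooled
buffer only needs to be 8 KiB and a request carries 16 iovecs:

    /* Illustrative numbers only. */
    #define MAX_IO_SIZE       (128 * 1024)                  /* maximum transfer size */
    #define MAX_NVMF_SGL      16                            /* supported SGL entries */
    #define DATA_BUFFER_SIZE  (MAX_IO_SIZE / MAX_NVMF_SGL)  /* 8 KiB per pooled buffer */
    #define REQUEST_IOV_COUNT MAX_NVMF_SGL                  /* iovecs per nvmf request */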
Change-Id: I3147dcd86b599c74521ebfdf3bcdbcdee8871a3a
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/428747
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This option is deprecated. Also, rename the rpc and configuration
options for setting the opts to reflect that they now only set the max
number of subsystems.
Change-Id: Iaabcbf33dd0a0dc489d81233fda74e9e7f3e0d2e
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/430161
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This patch implements the following QP shutdown flow:
1. Move the QP to ERR state
2. Post dummy work requests to send and receive queues
3. Poll CQ until it returns dummy work requests (with WR Flush Error status)
4. Call ibv_destroy_qp and release resources
In order to differentiate dummy and normal WRs, a new spdk_nvmf_rdma_wr
structure was introduced which contains the type of the WR. From now on,
it is expected that the wr_id field in ibv_recv/send_wr and ibv_wc always
points to this structure. Based on the WR type, wr_id can be safely cast
to the correct container structure. In the case of unsuccessful work
completions, 'opcode' cannot be used for this purpose because it may be
invalid (see "IB Architecture Specification Volume 1", ch. 11.4.2.1
"Poll for completion").
Change-Id: Ifb791e36114c619c71ad4d831f2c7972fe7cf13d
Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Signed-off-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Reviewed-on: https://review.gerrithub.io/430754
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
In order to prepare for multiple transports, the nvmf tgt should never
implicitly create a transport when listen is called.
Change-Id: If1286e7e3f7bce422a4acd66390852736113df7a
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/430160
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Part of a larger series aimed at exposing NVMe-oF transports through rpc
and spdkcli. This is in line with the goal of initializing all NVMe-oF
options on a per-transport basis.
Change-Id: I4f07d58d49b925cf51df3980d2e2161c50169cee
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/430622
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>