spdk_sock_group_poll() and spdk_sock_group_poll_count() had returned
0 on success. The implementation didn't match the specification
described in the header file, and the return value couldn't be used
to collect stats correctly because 0 means idle.
This patch fixes spdk_sock_group_poll() and
spdk_sock_group_poll_count() to return the number of events, and
updates the callers so they don't overwrite the return value with 0.
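For illustration, a minimal sketch of how a caller can now use the
return value to collect stats (the stats fields here are assumptions,
not actual SPDK code):

    int rc = spdk_sock_group_poll(group);

    if (rc < 0) {
            SPDK_ERRLOG("sock group polling failed\n");
    } else if (rc > 0) {
            stats->busy_count++;    /* rc events were processed */
    } else {
            stats->idle_count++;    /* 0 now really means idle */
    }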
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I7e2a17187fc74ea44d3acf2f35d63f5e5a254eda
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/463710
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Phenomenon:
Test case: run the following command:
./test/nvmf/target/shutdown.sh --iso --transport=tcp
Without this patch, it causes a core dump.
The error is that an NVMe/TCP request in the data buffer
waiting list is in the "FREE" state.
We do not need to call this function in
spdk_nvmf_tcp_qpair_flush_pdus_internal; it causes the
bug during the shutdown test because it calls the function
recursively, and it does not work for the shutdown path.
There are two possible recursive call chains:
(1) spdk_nvmf_tcp_qpair_flush_pdus_internal ->
    spdk_nvmf_tcp_qpair_process_pending ->
    spdk_nvmf_tcp_qpair_flush_pdus_internal -> ...
(2) spdk_nvmf_tcp_qpair_flush_pdus_internal ->
    pdu completion (pdu->cb) -> ... ->
    spdk_nvmf_tcp_qpair_flush_pdus_internal
We need to move the processing of NVMe/TCP requests
that are waiting for buffers into another function in
order to avoid these complicated possible recursive
calls. (Previously, we found a similar issue in
spdk_nvmf_tcp_qpair_flush_pdus_internal for PDU send
handling.)
But we cannot simply remove this feature, otherwise the
initiator will hang waiting for the I/O. So we add the
same functionality in the spdk_nvmf_tcp_poll_group_poll
function, as sketched below.
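A minimal sketch of the idea, assuming a hypothetical helper name
(the real patch wires the pending-buffer processing into the existing
poll function):

    static int
    spdk_nvmf_tcp_poll_group_poll(struct spdk_nvmf_transport_poll_group *group)
    {
            struct spdk_nvmf_tcp_poll_group *tgroup;

            tgroup = SPDK_CONTAINEROF(group, struct spdk_nvmf_tcp_poll_group, group);

            /* Process requests waiting for a data buffer here, in the
             * flat poll loop, instead of inside
             * spdk_nvmf_tcp_qpair_flush_pdus_internal where it could
             * recurse. The helper name is hypothetical. */
            nvmf_tcp_handle_pending_buf_requests(tgroup);

            return spdk_sock_group_poll(tgroup->sock_group);
    }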
Purpose: To fix the NVMe/TCP shutdown issue.
This patch also re-enables the shutdown and bdevio tests.
Change-Id: Ifa193faa3f685429dcba7557df5b311bd566e297
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/462658
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Move the statement that sets up the TCP request and remove
the duplicated code.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Ia659756185547ff4f8aa26c5bc01f63defe6c113
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/462589
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This priority is used to differentiate the sock priority of TCP connections
between the NVMe-oF TCP target and other TCP based applications.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I6ee294e647420b56d1d91a07c2e37bf34ce24e03
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/461801
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
spdk_dma_*malloc() is about to be deprecated.
Change-Id: Ic42db528bbae4b3ca2e91cb9ac46def99ecb5f28
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/459431
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Set DIF context of the corresponding request to PDU when
- processing in-capsule data of the command,
- processing data of C2H PDU, or
- processing data of H2C PDU.
Change-Id: I3a668a55be21dbe2ee6ecf26476290670bd7b4a8
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/458929
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
When an NVMe/TCP initiator transfers in-capsule data, the NVMe/TCP
target has to process it as in-capsule data. If DIF insert/strip is
enabled, the in-capsule data size is increased by the NVMe/TCP target
to insert metadata. However, the size of the in-capsule data buffer
had not been increased, so a buffer overflow occurred when an NVMe/TCP
initiator transferred in-capsule data to the NVMe/TCP target with DIF
insert/strip enabled.
This patch increases the size of the in-capsule data buffer to store
metadata. 16 bytes of metadata per 512-byte data block is the current
maximum ratio of metadata per block.
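A sketch of the sizing logic, with illustrative variable names (not
the exact patch):

    /* Reserve room for metadata at the maximum supported ratio of
     * 16 bytes of metadata per 512-byte data block. */
    uint32_t in_capsule_data_size = ttransport->transport.opts.in_capsule_data_size;

    if (ttransport->transport.opts.dif_insert_or_strip) {
            in_capsule_data_size += (in_capsule_data_size / 512) * 16;
    }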
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I88b127efd7a945bde167a95df19a0b9175cb8cd0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/461333
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
We updated readv_offset before generating DIF to avoid adding
the temporary variable _rc in the previous patch, but that caused
a write error when inserting DIF.
This patch fixes the bug.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Id0788280a83cbea2554c851db77751432fc00cba
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/461116
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
When handling the capsule command header, call spdk_nvmf_request_get_dif_ctx,
passing the NVMf request and a reference to the DIF context, and on
success set the dif_insert_or_strip flag of the NVMf/TCP request to true.
spdk_nvmf_request_get_dif_ctx returns false immediately when the
corresponding NVMf controller has DIF insert/strip disabled.
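In sketch form (field names as described above; error handling
elided):

    /* While handling the capsule command header: */
    if (spdk_nvmf_request_get_dif_ctx(&tcp_req->req, &pdu->dif_ctx)) {
            tcp_req->dif_insert_or_strip = true;
    }
    /* When the controller has DIF insert/strip disabled,
     * spdk_nvmf_request_get_dif_ctx returns false immediately,
     * so this is cheap in the common case. */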
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I16f6b322f2692d5f9653d011a490e7929ec37365
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/458928
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Add a function to get the optimal poll group.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Ia9e57c6924a6563d79269cf535814883e83698cd
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/454549
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This is a placeholder; subsequent patches will use the option
dif_insert_or_strip and provide JSON RPCs to configure it.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I7e3fbb1d49c47647a9a0a1a2149152801591b283
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/456452
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
When DIF is inserted or stripped,
- in the TCP transport layer, we can use the LBA based length throughout, but
- in the NVMf controller layer and BDEV layer, the extended LBA based
  length must be used, and the NVMf controller gets the length from
  tcp_req->req.length.
Hence, add two variables, elba_length and orig_length, to
struct spdk_nvmf_tcp_req; set the extended LBA length to
tcp_req->req.length before calling spdk_nvmf_request_exec(), and then
restore the original LBA based length to tcp_req->req.length after
calling spdk_nvmf_tcp_req_complete(), as sketched below.
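A simplified sketch of the save/extend/restore flow
(spdk_dif_get_length_with_md converts an LBA based length to an
extended LBA based one):

    if (spdk_unlikely(tcp_req->dif_insert_or_strip)) {
            tcp_req->orig_length = tcp_req->req.length;
            tcp_req->elba_length = spdk_dif_get_length_with_md(tcp_req->req.length,
                                                               &tcp_req->dif_ctx);
            /* NVMf controller and BDEV layers see the extended length. */
            tcp_req->req.length = tcp_req->elba_length;
    }
    spdk_nvmf_request_exec(&tcp_req->req);

    /* ... later, in the completion path: */
    if (spdk_unlikely(tcp_req->dif_insert_or_strip)) {
            tcp_req->req.length = tcp_req->orig_length;
    }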
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I9309b8923c6386644c4fd8ef3ee83a19f5d21ce5
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/458926
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
If tcp_req->dif_insert_or_strip is set, increase the length from LBA
based to extended LBA based by using the request's own DIF context.
Change-Id: Ie9f5cf757328dda795b43a7b6c70a72259865115
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/458925
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
The next patch will extend the length from LBA based to extended
LBA based and use it as the buffer length to insert or strip DIF.
So cache sgl.unkeyed.length at the top of spdk_nvmf_tcp_req_parse_sgl
and use it throughout.
Besides, one unrelated single-line change to improve readability
is included.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I2a1dc9379bb5671ec80b5b478504c9879a4f0fff
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/458924
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Generate and insert DIF for each data block when reading more than a
single byte.
This update is very similar to the use of spdk_dif_generate_stream
in the iSCSI target; see the sketch below.
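For reference, a sketch of the call pattern (variable names are
illustrative; spdk_dif_generate_stream generates DIF for the bytes
just read into the iovecs):

    /* After readv has placed 'read_len' payload bytes into the PDU's
     * iovecs, generate and insert DIF for the newly read range. */
    rc = spdk_dif_generate_stream(pdu->data_iov, pdu->data_iovcnt,
                                  pdu->readv_offset, read_len,
                                  pdu->dif_ctx);
    if (rc != 0) {
            SPDK_ERRLOG("DIF generation failed\n");
    }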
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I063919a32153ac0daf6d6eb1836c0d5995b65d33
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/459092
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
If DIF mode is local and C2H data is extended LBA payload, DIF should
be verified just before sending the payload.
Add a helper function nvmf_tcp_pdu_verify_dif and call it in
spdk_nvmf_tcp_send_c2h_data after completing nvme_tcp_pdu_set_data_buf.
When nvmf_tcp_pdu_verify_dif returns an error, treat it as a fatal
transport error because the error is caused by the target itself.
Handle the fatal NVMe/TCP transport error by terminating the connection
as described in the NVMe specification.
On the other hand, a data digest error is treated as a non-fatal
transport error because it is caused outside the target.
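In sketch form (simplified from the patch; the helper's exact
signature may differ):

    /* In spdk_nvmf_tcp_send_c2h_data, after setting the data buffer: */
    nvme_tcp_pdu_set_data_buf(rsp_pdu, tcp_req->req.iov, tcp_req->req.iovcnt,
                              c2h_data->datao, c2h_data->datal);

    if (spdk_unlikely(rsp_pdu->dif_ctx != NULL)) {
            rc = nvmf_tcp_pdu_verify_dif(rsp_pdu, rsp_pdu->dif_ctx);
            if (rc != 0) {
                    /* Fatal transport error caused by the target itself:
                     * terminate the connection per the NVMe spec
                     * (teardown elided in this sketch). */
                    return;
            }
    }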
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I9680af2556c08f5888aeaf0a772097e4744182be
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/458921
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
I used pahole to check whether the alignment of the structure
is reasonable. After reorganization, we save 16 bytes and 1
cacheline according to the pahole output.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I1347e7c582fe2b00707e2841690b87d53cc61e33
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/460572
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Using naming rules consistent with other related libraries helps
ensure quality, as verified by this patch series.
This patch changes a few parts to use iov and iovcnt for SGL operations.
Besides, the name of an array points to the head of the array and is
constant, so copying the array name to another pointer is
unnecessary and can be removed.
Change-Id: I2324f28126b3088098c1c767cf6c060f22c175c3
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/455629
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Previously we used nvme_tcp_pdu_set_data() for in-capsule data.
This patch changes in-capsule data handling to use
nvme_tcp_pdu_set_data_buf(), the same as for H2C and C2H.
This unification is necessary to support DIF insert and strip
in the NVMe/TCP target later.
Change-Id: I02cae8db94e51cf79a354dd64ad45f0e491ec08e
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/455920
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
The NVMe/TCP target had assumed the size of each iovec was io_unit_size.
Using nvme_tcp_pdu_set_data_buf() instead removes that assumption
and supports any alignment transparently.
Hence this patch moves nvme_tcp_pdu_set_data_buf() to
include/spdk_internal/nvme_tcp.h and replaces the current code to use it.
Besides, this patch simplifies spdk_nvmf_tcp_calc_c2h_data_pdu_num()
because the sum of iov_len over the iovecs is now equal to the
variable length.
We cannot separate the code movement (lib/nvme/nvme_tcp.c to include/
spdk_internal/nvme_tcp.h) from the code replacement (lib/nvmf/tcp.c)
because the moved functions are static and the compiler warns if
they are not referenced in lib/nvmf/tcp.c.
The next patch will add UT code.
Change-Id: Iaece5639c6d9a41bd35ee4eb2b75220682dcecd1
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/455625
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Also add the spdk_sock_group_get_ctx function.
Change-Id: I2a2a58b0588ff7d99d3538ea0a633a3b8c7a234b
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/454538
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
In the previous code, we handled PDUs until there was no more
incoming data from the network, as long as the loop could continue.
However, this is not fair when handling multiple connections
in a polling group.
This change sets a maximum number of NVMe/TCP PDUs handled per
connection in each poll, which improves performance. After some
tuning, 32 is a good loop count; our iSCSI target uses 16. See the
sketch below.
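The loop bound in sketch form (constant and helper names are
illustrative, not the actual patch):

    #define NVMF_TCP_MAX_PDUS_PER_POLL 32   /* hypothetical constant name */

    for (i = 0; i < NVMF_TCP_MAX_PDUS_PER_POLL; i++) {
            rc = nvmf_tcp_sock_process_one_pdu(tqpair);  /* hypothetical helper */
            if (rc <= 0) {
                    /* No more incoming data on this connection, or an
                     * error; move on to the next connection. */
                    break;
            }
    }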
The following shows some performance data:
Configuration:
1. Command used on the initiator side:
./examples/nvme/perf/perf -r 'trtype:TCP adrfam:IPv4 traddr:192.168.4.11 trsvcid:4420'
-q 128 -o 4096 -w randrw -M 50 -t 10
2. Target side: export 4 malloc bdevs in the same subsystem
Result:
Before patch:
Starting thread on core 0
========================================================
Latency(us)
Device Information : IOPS MiB/s Average min max
TCP (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0: 51554.20 201.38 2483.07 462.31 4158.45
TCP (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0: 51533.00 201.30 2484.12 508.06 4464.07
TCP (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0: 51630.20 201.68 2479.30 481.19 4120.83
TCP (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0: 51700.70 201.96 2475.85 442.61 4018.67
========================================================
Total : 206418.10 806.32 2480.58 442.61 4464.07
After patch:
Starting thread on core 0
========================================================
Latency(us)
Device Information : IOPS MiB/s Average min max
TCP (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0: 57445.30 224.40 2228.46 450.03 4231.23
TCP (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0: 57529.50 224.72 2225.17 676.07 4251.76
TCP (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0: 57524.80 224.71 2225.29 627.08 4193.28
TCP (addr:192.168.4.11 subnqn:nqn.2016-06.io.spdk:cnode1) from core 0: 57476.50 224.52 2227.17 663.14 4205.12
========================================================
Total : 229976.10 898.34 2226.52 450.03 4251.76
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I86b7af1b669169eee2225de2d28c2cc313e7d905
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/459572
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
By now (5.1 is released), the Linux kernel initiator supports the
success optimization, and further, the version that doesn't support
it (5.0) was EOL-ed. As such, let's enable it in SPDK by default.
Doing so provides a notable performance improvement: running perf with
an iodepth of 64, randread, two threads, and a block size of 512 bytes
for 60s ("-q 64 -w randread -o 512 -c 0x5000 -t 60") over the VMA socket
acceleration library and a null backing store, we got 730K IOPS with the
success optimization vs 550K without it.
                 IOPS      MiB/s   Average   min     max
without:    549274.10     268.20    232.99   93.23   3256354.96
with:       728117.57     355.53    175.76   85.93   14632.16
To allow for interop with older kernel initiators, we added
a config knob under which the success optimization can be
enabled or disabled, as sketched below.
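The effect of the knob in sketch form (the option field name is an
assumption; the flag constant comes from the NVMe/TCP PDU
definitions):

    /* On the last C2H DATA PDU, set the SUCCESS flag so the initiator
     * does not expect a separate response capsule. */
    if (ttransport->c2h_success) {      /* assumed option name */
            c2h_data->common.flags |= SPDK_NVME_TCP_C2H_DATA_FLAGS_SUCCESS;
    }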
Change-Id: Ia4c79f607f82c3563523ae3e07a67eac95b56dbb
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/457644
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
According to the TP 8000 spec, page 26:
Maximum Number of Outstanding R2T (MAXR2T): Specifies the maximum
number of outstanding R2T PDUs for a command at any point in time
on the connection.
Note that by the spec, the target may support only a single R2T
(the minimum possible); it doesn't have to use multiple R2Ts
even if the initiator supports that. So remove the maxr2t and
pending_r2t variables in the TCP qpair structure.
In the original design, we thought that maxr2t was the maximum number
of active R2Ts for each connection. So if the initiator sent maxr2t=16,
it meant that all the commands of a qpair could share that number of
R2T PDUs, and we had to make a request wait for an available R2T when
maxr2t reached the maximum. But that is a wrong understanding of the
spec. In fact, each command has its own maximum number of R2Ts, so we
no longer need the wait-for-R2T method. So we remove the state
TCP_REQUEST_STATE_DATA_PENDING_FOR_R2T. Furthermore, we adjust
the related SPDK_TPOINT_ID definitions.
With the current patch, the target supports one active R2T for each
write NVMe command. Thus, we remove the function
spdk_nvmf_tcp_handle_queued_r2t_req.
Reported-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I7547b8facbc39139b4584637ccc51ba8b33ca285
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/455763
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Or Gerlitz <gerlitz.or@gmail.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This matches the Linux kernel target. Users can
still decrease this default when creating the
transport (i.e. -p option for nvmf_create_transport
in rpc.py).
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Icad59350a2cd35cfc4ad76d06399345191680c05
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/454820
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This restriction helps reduce the amount of padding when
printing out the event trace, allowing it to fit in a
small number of columns.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ifa31e5a6967c7b9bc7028069effb71533f80596f
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/452736
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This was not used by any of the trace register descriptions.
Let's remove it rather than keeping it around when we don't need it.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Idda809e2911db5be555ff6aa13695484a14bf665
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/452734
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Mempools are based on a ring structure that allocates its slots
as a power of two, and a ring with N slots can hold only N-1 elements.
So when we create a mempool with 2^n elements in it, we have to
allocate a ring with 2^(n+1) slots. By decreasing the number of
elements in these key mempools by 1, we can save a decent amount of
memory; see the sketch below.
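In sketch form with the SPDK mempool API (pool name and sizes are
illustrative):

    /* A ring with 2^n slots holds only 2^n - 1 elements, so asking
     * for 2^n elements forces a 2^(n+1)-slot ring. Requesting
     * 2^n - 1 elements keeps the ring at 2^n slots. */
    struct spdk_mempool *pool;

    pool = spdk_mempool_create("example_pool",
                               (1 << 10) - 1,   /* 1023 elements, not 1024 */
                               4096,            /* element size in bytes */
                               SPDK_MEMPOOL_DEFAULT_CACHE_SIZE,
                               SPDK_ENV_SOCKET_ID_ANY);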
Change-Id: I942c9dd4cf59096969bc2559fb46fd2084a07f09
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448875
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Purpose: To support multiple SGLs later.
Change-Id: I133a451100b736353cf98a6aaca879d290ff5b67
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448259
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This function will be extended later for multiple SGL
support.
Change-Id: I1f6962ec03c72e335efaa311a12d3891312fcc53
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449968
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: wuzhouhui <wuzhouhui@kingsoft.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This wasn't used anywhere.
Change-Id: I405af3c808be284d19218f3f04c1e90e33e31de8
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/446977
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
The purpose is to use a single readv to read both
the payload and the digest (if one is present).
This patch also prepares for supporting multiple SGLs
in the NVMe/TCP transport later.
Change-Id: Ia30a5e0080b041a65461d2be13db4e0592a70305
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447670
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Borrow the ideas from iSCSI and optimize
the nvme_tcp_build_iovecs function.
Change-Id: I19b165b5f6dc34b4bf655157170dec5c2ce3e19a
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/446836
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
If the new recv_state of a qpair is the same as the current state,
we print an error message. Having checked the current code,
we should add a guard to avoid this, as sketched below.
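One illustrative call site for the guard (simplified; the actual
patch may place the check differently):

    /* Only switch states when the state actually changes, so the
     * setter's "same state" error message is never triggered. */
    if (tqpair->recv_state != NVME_TCP_PDU_RECV_STATE_ERROR) {
            spdk_nvmf_tcp_qpair_set_recv_state(tqpair,
                                               NVME_TCP_PDU_RECV_STATE_ERROR);
    }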
Change-Id: I49334f637c48e565e785d1fe6d0f000e18b2048a
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/445653
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Purpose: solve the coredump issue when the buffer is
returned later in spdk_nvmf_tcp_request_free_buffers.
If we keep this statement, we cannot return the buffer
to the polling group.
Change-Id: Ib5c95ba54b37540950e654110fe6317cab507076
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/445435
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
From the TP 8000 spec, section 7.4.7:
"In response to a C2HTermReq PDU, the host shall terminate the connection.
If the host does not terminate the connection in an implementation specific
period that does not exceed 30 seconds, the controller may terminate the
connection on its own."
This means the timeout is designed so that, when the target has sent
a C2HTermReq and the host does not terminate the connection,
the target terminates the connection itself, as sketched below.
PS: Detecting a malicious connection that sends no response
(such as no response to an R2T PDU) should be another patch.
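A sketch of such a timeout check (field and helper names are
hypothetical; spdk_get_ticks()/spdk_get_ticks_hz() are the real SPDK
tick APIs):

    #define NVMF_TCP_QPAIR_EXIT_TIMEOUT_SEC 30  /* bound from TP 8000 7.4.7 */

    /* Polled periodically after the target has sent C2HTermReq. */
    if (tqpair->c2h_term_req_sent &&    /* hypothetical flag */
        spdk_get_ticks() - tqpair->term_req_tsc >
        NVMF_TCP_QPAIR_EXIT_TIMEOUT_SEC * spdk_get_ticks_hz()) {
            /* Host never closed the connection; terminate it ourselves. */
            nvmf_tcp_qpair_disconnect(tqpair);  /* hypothetical helper */
    }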
Change-Id: I586dbb235d99aeab5d748a19b9128cd8b0cef183
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/440831
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
We should never be going over these limits in the respective transports,
but add asserts to check this during testing.
Change-Id: Ifcaa82ccf58546a38020b31df54ee5d1d9822b8b
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442777
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This doesn't fix any bug, but it makes more sense to leave the qpair
in the NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY state until it
receives at least one byte.
Change-Id: Ic5f34a733a80b58f65a1334fae7e07dbded2b3d0
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441811
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The management channel was used in the RDMA transport prior
to the introduction of poll groups and made its way over to
the TCP transport when it was written. Eliminate it in favor
of just using the poll group.
Change-Id: Icde631dd97a6a29190c4a4a6a10a0cb7c4f07a0e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442432
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
This was only used by the target, and it didn't actually need it.
Change-Id: Ibcef410165efdc16077da24419580ed51b087d70
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442440
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This type was actually two entirely different types for
the initiator and the target, so just make it void.
Change-Id: I15512d9d4efd790dce0fa4323b7230de66144bc6
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442438
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Currently, the code does not comply with the spec,
so remove such code for 19.01; code that complies
with the spec will be added for 19.04.
Change-Id: Icd3b2573fbc46dc2fa7a00c6672c23ea01ffe0ee
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/441985
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
If there is a socket read error, we should directly disconnect
the socket instead of setting the tqpair into the RECV_ERROR state.
Being in the RECV_ERROR state does not mean that
we should close the socket immediately.
Change-Id: I975906653c13eb3fa5195799c517015435176785
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/441830
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This patch solves the following two cases:
1. Free the PDU resources, and add a check of c2h_pdu_data_cnt of the qpair.
2. Do not recycle the req according to the PDU in the send_queue, but directly
recycle the reqs in the TCP_REQUEST_STATE_TRANSFERRING_CONTROLLER_TO_HOST state.
Change-Id: I5856c3421019ec49d576d3dae4c62fefbb3925ca
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/440847
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Purpose: To avoid buffer contention among different
polling groups when the NVMe-oF TCP transport is
configured to use multiple cores.
Change-Id: I1c1b0126f3aad28f339ec8bf9355282e08cfa8db
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/440444
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This patch dumps the request states if
the tqpair's resources are not freed.
Change-Id: Ic4780662558d73267d4f1ebabfc22780fafec4ec
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440846
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
This is shared between all currently valid transports. Just move it up
to the generic structure. This will make implementing more shared
features on top of this a lot easier.
Change-Id: Ia896edcb7555903ba97adf862bc8d44228df2d36
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440416
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This patch series is geared at solving github issue 555.
Ultimately the goal of this series is to add a per-poll-group buffer
cache to prevent starvation.
Change-Id: I8ddaa47487665c2f9adce2109eb71b8fa71a7927
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439415
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Due to the qpair timeout handling refactoring,
we removed the qpair destroying related code.
This patch is submitted to address that issue. With
this patch, we can detect the sock close of the fd from
the initiator and correctly free the qpair related resources
(e.g., pid) managed by the nvmf layer.
Otherwise, the initiator thinks the qpair related resources are
freed while they are not freed on the target side.
Change-Id: Ia2de07bd849fa5d3bc0e0e0d4941464dfd16d266
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440242
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Previously, we allocated the buffers according
to the MaxQueueDepth info; however, this is not
a good way for customers to configure. We should provide
a shared buffer count configuration for the transport.
Change-Id: Ic6ff83076a65e77ec7376688ffb3737fd899057c
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/437450
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This makes the timeout check for each qpair in the group
efficient. If there are many qpairs in the group, we
can scale.
Change-Id: I75c29a92107dc32377a2ef7edb5ac92868f1c5df
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/435277
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Remove the unnecessary fields in spdk_nvmf_tcp_transport.
Change-Id: I632608ba654b30f3511f5e1d925c6743c9100365
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/437271
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
For the TCP/IP transport, we need to remove the socket
from the polling group, since we do not want to keep the
tgroup info in the NVMe/TCP qpair; it should be generic.
Change-Id: I4b064d8378f66ea5d91ac554fe628d9ccebd07f4
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/434128
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Since the recv state handling is now done correctly,
we do not need this check anymore.
Change-Id: Id71ab2e0ef60be302f8cf6ea776259d7312663ec
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/436896
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The purpose of this patch is to fix the issue that, when there is no
data buffer allocated, the previous method of setting the
recv pdu state is wrong.
The reason is that:
1. When there is no data buffer allocated, we still need to handle
the incoming PDU, which means we should switch the PDU recv
state immediately.
2. When a buffer becomes available, we resume the req handling with
the allocated buffer, and at that time we should not switch the PDU
receiving state of the tqpair.
Change-Id: I1cc2723acc7b0a17407c3a2e6273313a4e612916
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/436153
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The usage of this list duplicates the
state_queue[TCP_REQUEST_STATE_DATA_PENDING_FOR_R2T]
list of the tqpair, so remove it.
Change-Id: I7a67a5c8049bb9492bf97e0d60e2040a29b0a7e4
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/436274
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Fix the issue on both the target and host sides.
Change-Id: I1bf31072b2164a3035b443fe6c5418a6a7829d81
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/436099
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Previously, this field was used to optimize the code.
When we receive the capsule cmd PDU, we need to allocate
the related buffer if there is a read or write request.
If the related buffer is not available, we cannot enter
the next PDU handling phase, so we used this field as a marker.
After carefully checking the code, I think that using the
tcp_req associated with the PDU is sufficient and just as
efficient.
Change-Id: Ic1634d706dd40a706269bce199bf6031ea0462c0
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/435995
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Since we use an aligned buffer, the error handling
path here is not correct; the address is wrong.
Change-Id: I5bcb7f050199496423f861fd6aea65e0fe48c804
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/435992
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Add a check, which will be required for further
unit tests.
Change-Id: Ib1987fef914e6546f2bdbacd23bf9bb6005b8155
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/435197
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Previously, if you wanted to know which mask bit is used for a
specific trace group, the only way was to check the source code. Now
each trace group is listed with its trace tpoint group mask bit in
the usage message.
Change-Id: I7a85fe9c0885f1919f6ffbdc97dab81f1986fb07
Signed-off-by: Liu Xiaodong <xiaodong.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/435448
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This is the first patch to follow the NVMe over Fabrics
spec and implement the NVMe/TCP transport. It can be
divided into work on the host and target sides:
Host side: add the TCP/IP transport in the nvme lib (lib/nvme).
Target side: add the TCP/IP transport in the nvmf lib (lib/nvmf).
Change-Id: Idc4f93750df676354f6c2ea8ecdb234e3638fd44
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/425191
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>