This patch removes the name string from the log component
registration. All instances in the libs, and those hardcoded in the
apps, were removed. Starting with this patch, the literal passed to
the register macro serves as the name for the flag. All instances of
SPDK_LOG_* were replaced with just * in lowercase.
No flag actually changes its name in this patch.
Affected are the SPDK_LOG_REGISTER_COMPONENT() and SPDK_*LOG() macros.
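For illustration, a hedged before/after sketch (the component name is
chosen arbitrarily):

Before:

    SPDK_LOG_REGISTER_COMPONENT("nvmf", SPDK_LOG_NVMF)
    SPDK_DEBUGLOG(SPDK_LOG_NVMF, "qpair %p connected\n", qpair);

After:

    SPDK_LOG_REGISTER_COMPONENT(nvmf)
    SPDK_DEBUGLOG(nvmf, "qpair %p connected\n", qpair);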
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I002b232fde57ecf9c6777726b181fc0341f1bb17
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/4495
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Mellanox Build Bot
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Broadcom CI
It should be 16, not 6. For example, there will be 16 priorities
when configuring ADQ with Intel's 100G NIC.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Iebdf7b379c15f3b5fd16dba2ad87ec55af04577f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/4235
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Usage of spdk_thread_get_count is wrong here since there might be
many threads allocated by other modules. Transport buffers are used
by transport poll groups, whose number is equal to the number of
cores.
Change-Id: I4bc748e93c3b204bf3b3ec73f17257b927a7f428
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3882
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Add trid to struct spdk_nvmf_qpair and set it at qpair initialization.
admin_qpair->trid will be used to get the corresponding
subsystem_listener via nvmf_subsystem_find_listener() and add it to
struct spdk_nvmf_ctrlr in the next patch.
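A minimal sketch of the idea, assuming a by-value field and a
hypothetical init helper (the real struct has many more members):

    struct spdk_nvmf_qpair {
            struct spdk_nvme_transport_id trid;
            /* ... other members elided ... */
    };

    /* hypothetical helper: capture the listen trid once, when the
     * transport initializes the qpair */
    static void
    nvmf_qpair_set_trid(struct spdk_nvmf_qpair *qpair,
                        const struct spdk_nvme_transport_id *listen_trid)
    {
            qpair->trid = *listen_trid;
    }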
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I0d1a41aede60de88747eff16c7e04f63d0702596
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/4009
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Purpose: to make the management of this PDU consistent with the other
PDUs, so that we can more easily adapt the code to a hardware
offloading solution.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Ic4a2847fd1b6cacda4cbaa52ff12c338f0394805
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3588
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The RDMA target can't handle the bidirectional xfer type; in a debug
build it hits an assert in the nvmf_rdma_setup_wr function. The NVMF
controller performs checks of opcodes, but the failure happens before
this check. Add similar validation in the TCP transport.
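A hedged sketch of such a validation (the helper name is
hypothetical; spdk_nvme_opc_get_data_transfer() is the existing
helper that derives the transfer direction from the opcode):

    static bool
    nvmf_tcp_req_xfer_is_supported(const struct spdk_nvme_cmd *cmd)
    {
            /* reject bidirectional transfers before any WR/PDU setup */
            return spdk_nvme_opc_get_data_transfer(cmd->opc) !=
                   SPDK_NVME_DATA_BIDIRECTIONAL;
    }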
Change-Id: I14400b9c301295c0ae1d35a4330189d38aeee723
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3436
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
struct spdk_nvmf_request holds req_to_abort, so passing req_to_abort
separately is not really necessary now. The internal API
nvmf_ctrlr_abort_request() was added during prototyping.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I9ef2467d6f92422f044650c62a0777b95c0fc1ab
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3488
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
1. Change the default factor from 4 to 8, which improves performance.
2. Change the base buffer size in nvme_tcp.c. We should not use
   sizeof(struct spdk_nvme_tcp_cmd), which is 72 bytes. The initiator
   mostly receives C2H data PDUs and R2T PDUs, so using
   sizeof(struct spdk_nvme_tcp_c2h_data_hdr) is enough.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I384f4cb026cb8d83e75b639f7256ee8cb8ed1df1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3283
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: John Kariuki <John.K.Kariuki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
There is no reason to continue processing these requests if the
qpair is no longer active. We should complete them and free any
resources they are still holding.
Not doing so can also cause attempts to access pointers in the qpair
after they have become invalid. See issue #1460.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: I6e570a576983dfedf726dc4a9a83316209403e00
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3451
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Make the abort execution timeout value optional.
Zero is acceptable and means an immediate timeout.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ia4b03c65b8bd15899f48be9476ee657446147581
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3104
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
If the state of the request is TRANSFERRING_HOST_TO_CONTROLLER,
we cannot abort it now but may be able to abort it when its state
is EXECUTING. Hence wait until its state is EXECUTING, and then
retry aborting.
The following patch will make the timeout value configurable as
a new transport option.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I98347b68e8b6b4a804c47894964cb81eae215aaa
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3010
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
If the request is queued and is not completing, we can abort it
safely.
If the state of the request is NEED_BUFFERING, the request is queued
both to tqpair->group->group.pending_buf_queue and to the per-state
queue.
If the state is AWAITING_R2T_ACK, the request is queued only to the
per-state queue.
Dequeueing from the per-state queue is done in
nvmf_tcp_req_set_state(). Hence dequeue explicitly only when the
state of the request is NEED_BUFFERING.
Most of the abort operation is common between the two cases. We could
use fallthrough in the switch-case, but instead factor the common
operation out into a helper function, nvmf_tcp_req_set_abort_status(),
because we may reuse the helper in the future and a helper function is
easier to read than fallthrough.
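A sketch of what such a helper might look like (field names are
assumptions, not a copy of the patch):

    static void
    nvmf_tcp_req_set_abort_status(struct spdk_nvmf_request *req,
                                  struct spdk_nvmf_tcp_req *tcp_req_to_abort)
    {
            struct spdk_nvme_cpl *rsp = &tcp_req_to_abort->req.rsp->nvme_cpl;

            rsp->status.sct = SPDK_NVME_SCT_GENERIC;
            rsp->status.sc = SPDK_NVME_SC_ABORTED_BY_REQUEST;
            nvmf_tcp_req_set_state(tcp_req_to_abort,
                                   TCP_REQUEST_STATE_READY_TO_COMPLETE);

            /* clear bit 0 of cdw0: the command was successfully aborted */
            req->rsp->nvme_cpl.cdw0 &= ~1U;
    }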
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I1695b084d5d1f2537fbdd512bc3cd136e0f6a65b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3009
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Call nvmf_ctrlr_abort_request() if a request whose CID matches is
found and its state is EXECUTING. nvmf_tcp_qpair_abort_request()
returns immediately if rc is
SPDK_NVMF_REQUEST_EXEC_STATUS_ASYNCHRONOUS, or calls
spdk_nvmf_request_complete() otherwise.
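A hedged sketch of that control flow (the wrapper name is
hypothetical and the argument list abbreviated):

    static void
    tcp_qpair_abort_sketch(struct spdk_nvmf_request *req)
    {
            int rc = nvmf_ctrlr_abort_request(req);

            if (rc == SPDK_NVMF_REQUEST_EXEC_STATUS_ASYNCHRONOUS) {
                    return; /* completion is deferred */
            }
            spdk_nvmf_request_complete(req);
    }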
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I1abceecc211ee79d8ac18a82dc63b13d313a6f27
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3008
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
The state machine differs among the NVMe-oF transports and is
encapsulated away from the transport-neutral NVMe-oF controller and
NVMe-oF qpair.
To implement the abort operation for each NVMe-oF transport, add a
function pointer qpair_abort_request to struct spdk_nvmf_transport_ops
and a stub nvmf_transport_qpair_abort_request() that hides which
transport is used.
The following patches will implement qpair_abort_request for each
transport. Each qpair_abort_request() is responsible for calling
spdk_nvmf_request_complete() for the abort request.
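A simplified sketch of the new op and its stub (existing callbacks
elided):

    struct spdk_nvmf_transport_ops {
            /* ... existing callbacks ... */
            void (*qpair_abort_request)(struct spdk_nvmf_qpair *qpair,
                                        struct spdk_nvmf_request *req);
    };

    void
    nvmf_transport_qpair_abort_request(struct spdk_nvmf_qpair *qpair,
                                       struct spdk_nvmf_request *req)
    {
            qpair->transport->ops->qpair_abort_request(qpair, req);
    }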
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I2beac959ed428c5108cf33691226b7fae5cd24d6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3007
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
A poller should return a status > 0 when it did some work (CPU was
used for some time), marking its call as busy CPU time.
Active pollers should return the BUSY status only if they did
meaningful work beyond checking some conditions (e.g., processing
requests or performing complicated operations).
Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: Id4636a0997489b129cecfe785592cc97b50992ba
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2164
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
It should return "NVME_TCP_PDU_FATAL". I think that
this issue is introduced after we move the data
copy from tcp transport layer to the socket
layer. And it should return "NVME_TCP_PDU_FATAL now",
and it will be consistent with the logic in the same
function.
With this patch, it will fix the big I/O size write
from the initiator.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Ide018adb603eb13d002fc98886258dd1e2424f7c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3122
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
With this, the transport can decide particular ctrlr attributes per
ctrlr rather than globally.
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: Ia3fb0d4e576cb9f8ce6df75f775e2fd5727d7f48
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2757
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This parameter describes the number of admin and IO qpairs, while
the admin qpair always exists and should not be configured
explicitly.
Introduce a new parameter `max_io_qpairs_per_ctrlr` which configures
the number of IO qpairs only.
The internal structure of the NVMF transport is not changed; both RPC
parameters configure the same nvmf transport parameter.
Deprecate max_qpairs_per_ctrlr in spdkcli as well.
Side change: update the dif_insert_or_strip description - it can be
used by the TCP and RDMA transports.
Config file parsing is not changed since it is deprecated.
Fixes #1378
Change-Id: I8403ee6fcf090bb5e86a32e4868fea5924daed23
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2279
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Broadcom CI
I missed a few files in this library the first time.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: I2ad55355e6348eaa10384a148dd45deb9f68fc2b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2442
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
In the SPDK NVMe/TCP target, when initializing the socket, the low
watermark is set to sizeof(struct spdk_nvme_tcp_c2h_data_hdr), which
is 24 bytes. In our testing, sometimes a very small data packet (as
small as 16 bytes) may be sent on the wire. After that, if no more
data is sent to the same socket, this small packet won't be received
by the NVMe/TCP controller qpair thread because the size hasn't
reached the low watermark. The qpair thread then waits for more data
to come in while the initiator waits for the IO request to be
completed. Hence the delay happens.
As the minimum amount of data that allows the target to determine the
PDU type is sizeof(struct spdk_nvme_tcp_common_pdu_hdr), which is 8
bytes, we changed the low watermark setting as below. With the
change, the problem was gone immediately.
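A hedged reconstruction of the change (the wrapper name is
hypothetical; spdk_sock_set_recvlowat() is the existing sock API):

    static int
    nvmf_tcp_qpair_set_recvlowat(struct spdk_nvmf_tcp_qpair *tqpair)
    {
            /* 8 bytes: just enough to determine the PDU type */
            return spdk_sock_set_recvlowat(tqpair->sock,
                            sizeof(struct spdk_nvme_tcp_common_pdu_hdr));
    }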
Change-Id: I14ccc4c84b77e33a617726e7455304aca29d5d57
Signed-off-by: Wenhua Liu <liuw@vmware.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2138
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Virtual controller capabilities can be overridden at the
transport-specific layer. The current behavior is preserved.
This can be useful to limit or extend the defaults based on the
transport type.
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: I754f0d957a46f219adc1e55f792e79c7546ddb43
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1274
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Purpose: to set the priority of the NVMe-oF connection, especially
for TCP connections.
For example, the previous example can be:
trtype:TCP adrfam:IPv4 traddr:10.67.110.181 trsvcid:4420
With the change, it could be:
trtype:TCP adrfam:IPv4 traddr:10.67.110.181 trsvcid:4420 priority:2
The priority is optional. We chose to change spdk_nvme_transport_id
rather than spdk_nvme_ctrlr_opts, since the opts in
spdk_nvme_ctrlr_opts apply to every nvme ctrlr, which lacks
flexibility.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Signed-off-by: Sudheer Mogilappagari <sudheer.mogilappagari@intel.com>
Change-Id: I1ba364c714a95f2dbeab2b3fcc832b0222b48a15
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1875
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Since the related feature is already contained in
spdk_sock_listen and spdk_sock_connect functions,
we no longer need this function.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I1eafff0d139fa266a355fbee2bf0fc3947db69fc
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1876
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Creating a fine-grained name for each poller would take a large
effort. Replacing spdk_poller_register with the macro
SPDK_POLLER_REGISTER provides a better name than a function address
with minimal effort, as sketched below.
Following patches may improve the function names for clarity.
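For reference, the macro captures the function name via the
preprocessor, so pollers get a readable name for free (sketch):

    #define SPDK_POLLER_REGISTER(function, arg, period_microseconds) \
            spdk_poller_register_named(function, arg, \
                                       period_microseconds, #function)

    /* the poller is now named "nvmf_tcp_poll_group_poll" instead of
     * showing up as a raw function address */
    poller = SPDK_POLLER_REGISTER(nvmf_tcp_poll_group_poll, group, 0);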
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: If862a274c5879065c3f7cb04dcb5ca7844523e68
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1781
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
Community-CI: Broadcom CI
The code used to do this but it was removed when the buffering was
shifted down to the posix layer. Add a way for users of sockets
to still properly size the buffers.
This also means that by default, the receive buffering is not enabled
on sockets. That matches the behavior of the previous release.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I20ce875be2efd841fe3a900047b4655a317d7799
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1560
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
It's actually faster to process them until you run out of data.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I9e81babdb9bdc405a8dbf03b2f701fe50bcc70f6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1559
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This reverts commit ea5ad0b286.
This code is moving from the nvmf target to the posix sock
layer in this series.
Change-Id: I333bdf325848e726ab82a9e6916e1bbdcd34009c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/446
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This reverts commit d50736776c.
This code is moving from the nvmf target to the posix sock
layer in this series.
Change-Id: I7cf477333f2a3fa4a0089394d5fa28142b262a7f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/445
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This reverts commit 5e7b8d18f3.
This code is moving from the nvmf target down into the posix
sock layer in this series.
Change-Id: Iea9a7cef5bedd6a34edf7b4c87825897279829c3
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/444
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This was recently made asynchronous to support virtualized
transports. However, we're adding a new call to transports that
associates a listener with a subsystem, and the operation that needed
to be asynchronous will actually be performed there. For simplicity,
make this synchronous again.
Change-Id: Ie98136a19c58f0f9bba0d140476de3bbb38e12d7
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/881
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This is a memory optimisation, as the transport id is 'heavy'. As a
side effect, handling of listen and stop_listen on the
transport-specific layer becomes simpler.
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: I4e9d0e0c5eee2d570ec4ac9079270c32d5afb8db
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/626
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This defines the official interface that NVMe-oF target
transports may use. For now, all code is just copied
from elsewhere. Eventually we'll want to add doxygen
comments.
Change-Id: I0cd9368607544be18c7c49188d071e38ceb59b8f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/412
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jacek Kalwas <jacek.kalwas@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Some functions performed an incorrect header/data digest support
check; align them with the NVMe-oF spec. Use a table to check whether
a PDU supports digest, depending on its type.
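A hedged sketch of such a type-indexed table (entries are
illustrative, following the NVMe/TCP spec, and not a copy of the
patch):

    static const bool g_pdu_hdgst_supported[] = {
            [SPDK_NVME_TCP_PDU_TYPE_IC_REQ]       = false,
            [SPDK_NVME_TCP_PDU_TYPE_IC_RESP]      = false,
            [SPDK_NVME_TCP_PDU_TYPE_CAPSULE_CMD]  = true,
            [SPDK_NVME_TCP_PDU_TYPE_CAPSULE_RESP] = true,
            [SPDK_NVME_TCP_PDU_TYPE_H2C_DATA]     = true,
            [SPDK_NVME_TCP_PDU_TYPE_C2H_DATA]     = true,
            [SPDK_NVME_TCP_PDU_TYPE_R2T]          = true,
    };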
Change-Id: I6170dd19ace017f37fda0a923f604732799460b9
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/483375
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
When a command arrives and no requests are available, the socket
recv state machine sits in the RECV_STATE_AWAIT_REQ state until another
network event occurs. If this I/O was the last one sent, this leaves the
target hung. To fix this, when a request is completed, kick the state
machine to make forward progress.
In practice, this can only occur once the pdu send acknowledgements are
asynchronous relative to arriving commands. That only begins happening
with the use of MSG_ZEROCOPY. When MSG_ZEROCOPY is turned on, it's
possible to receive the next PDU in a chain for a command prior to
seeing the acknowledgement that the response that triggered that PDU
was actually sent.
Change-Id: I556f31ad56970d36aa3538cfde375d35f3d4e551
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/480002
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Previously, the R2T was sent and if an H2C arrived prior
to seeing the R2T ack, it was processed anyway. Serialize
this process.
In practice, if the H2C arrives with a correctly functioning
initiator, that means the R2T already made it to the initiator.
But because the PDU hasn't been released yet, immediately processing the
PDU requires an extra PDU associated with the request. Basically, making
this change halves the worst-case number of PDUs required per
connection.
In the current sock layer implementations, it's not actually possible
for the R2T send ack to occur after that H2C arrives. But with the
upcoming addition of MSG_ZEROCOPY and other sock implementations, it's
best to fix this now.
Change-Id: Ifefaf48fcf2ff1dcc75e1686bbb9229b7ae3c219
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479906
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This function was only called from one spot.
Change-Id: I856f564d3ef6c6157be7a32a2cd812c702516a8d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482003
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This seems like a more descriptive name
Change-Id: Ia616865b3fb36d8f9ccc5fb2ca6185bdd8543cf8
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482002
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
With our target design, there's no advantage to sending
multiple R2T PDUs per nvme command. This patch starts by
setting up the math so that at most 1 R2T PDU is required
per request. This can be guaranteed because the maximum
data transfer size (MDTS) is pre-negotiated in NVMe-oF
to a reasonable size at start up.
It then proceeds to simplify all of the logic around mapping
requests to PDUs. It turns out that the mapping is now always
1:1. There are two additional cases where there is no request
object at all but a PDU is still needed - the connection response
and termination request. Put an extra PDU on the queue object
for that purpose.
This is a major simplification.
Change-Id: I8d41f9bf95e70c354ece8fb786793624bec757ea
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479905
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
We can always accept up to the maximum I/O size in an H2C,
so eliminate the #define.
Change-Id: I349dab5f9b6ec482a7c580b1396e03c8d30a250b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482278
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The resources allocated to a queue pair do not need to be directly
correlated to the queue size requested by the initiator in NVMe-oF, as
long as enough resources are present. The RDMA transport, for instance,
does complex pooling of the resources behind the scenes when using a
shared receive queue.
Simplify the resource allocation for a TCP qpair to just always allocate
the max allowed queue size right away. This is a configurable parameter,
so system administrators can adjust for their needs. The initiator may
then request a queue size less than or equal to that, which will only be
enforced by queue depth counting and not impact the actual number of
resources allocated on the target.
This change relies on the MaxC2HSize being equal to the Maximum Data
Transfer Size (MDTS) reported. That is the default configuration, but
MDTS is configurable. Changing the MDTS with this patch to a value
larger than 128k will cause the target to break. This is addressed in
the next patch in this series.
Change-Id: Ibd4723785c6a4d8d444f9b7bbfa89f98de2320f5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479733
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
These values do not need to be negative.
Change-Id: Id9f798cf1c9da354448f9c6fbb90e599f877bb32
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/482277
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
By releasing the just-completed PDU prior to calling the callback,
for flows that immediately submit another PDU inside the callback,
the just-released PDU can be immediately reused. This reduces the number
of PDUs required in the pool to continue forward progress to half of the
previous value, while also making it more CPU cache friendly.
Change-Id: I8031b8f9f57ac05f261d96433d9899fe5e31d318
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479904
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Or Gerlitz <gerlitz.or@gmail.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This was calling a callback for another function which
attempted to release the request. The code only worked because
in the r2t case the cb_arg was set to NULL, and that makes
the request free function do nothing.
Change-Id: Id9ec30ceb0eaa41deb67aa995da5d6f786d9b9f0
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479903
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Or Gerlitz <gerlitz.or@gmail.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This wasn't actually used. Every PDU only had a single reference.
Change-Id: I8adaa7edeca5fe175aa853c156df741170d76c10
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479902
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This allows responding to the add listener RPC request even when
there are async calls in the transport-specific function.
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: I94a9f45b7ba9e8d46a60ae3785953cea12554732
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479511
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Make the common code part of the successful return path. In RDMA,
first check whether we are already listening.
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: Ib0c87ac11db7daff00dc4042c9e0ab20eb7ffd0f
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478721
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Purpose: with this patch,
(1) We can support using different sock implementations together in
one application.
(2) For an IP address managed by the kernel, we can use different
methods to listen/connect, e.g., posix or uring. With this patch, we
can designate the specified sock implementation if impl_name is not
NULL and valid. Otherwise, if impl_name is NULL, spdk_sock_listen
and spdk_sock_connect will try the sock implementations in the list
in order.
Without this patch, the app will always use the same type of sock
implementation if the order is fixed. For example, if we have posix
and uring together, the first one will always be uring.
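A hedged usage sketch (addresses arbitrary):

    struct spdk_sock *sock;

    /* pin the posix implementation explicitly */
    sock = spdk_sock_listen("10.67.110.181", 4420, "posix");

    /* or let SPDK try the registered implementations in order */
    sock = spdk_sock_connect("10.67.110.181", 4420, NULL);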
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Ic49563f5025085471d356798e522ff7ab748f586
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478140
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The trtype should be stored as both an enum and a string. This is
intended to help pave the way for pluggable NVMe-oF transports.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: I6af658d7a17c405e191ff401b80ab704c65497e7
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478744
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
It is faster for the kernel to pin memory in hugepages, so allocate
the pdu pool from hugepages. This will help more
with upcoming changes to leverage MSG_ZEROCOPY.
Change-Id: I9ce581acca9c6edb71bd8119258966e3b405db77
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/475801
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Or Gerlitz <gerlitz.or@gmail.com>
All transitions to the EXITING state go through the disconnect
function now.
Change-Id: Ia55816351b2998bfef26130b6ffdc4a1010567a1
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/470533
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This function can't actually return NULL. It aborts if we get
our math wrong.
Change-Id: Iaf77112addc3c14c70755a56043c5dba3427890d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478911
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
We can use spdk_min to compute copy_len in
spdk_nvmf_tcp_send_c2h_term_req. This ensures copy_len is not larger
than SPDK_NVME_TCP_TERM_REQ_ERROR_DATA_MAX_SIZE.
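An illustrative one-liner (the clamped field is an assumption):

    copy_len = spdk_min(pdu->hdr.common.hlen,
                        SPDK_NVME_TCP_TERM_REQ_ERROR_DATA_MAX_SIZE);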
Signed-off-by: dongx.yi <dongx.yi@intel.com>
Change-Id: Id343928e1911e4ab77fca7463f3f0cc55889db30
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/479118
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Minor optimisation found by code analysis: both cmd and dif are
overwritten in TCP_REQUEST_STATE_NEW.
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: I6bae4ddae175035d029c0693f7e4351b95a296ab
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478604
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
It's not wrong; this just keeps consistency with other functions.
So remove these.
Signed-off-by: dongx.yi <dongx.yi@intel.com>
Change-Id: I833211ea8ee6c6b02c874ea340a3f936a0c4c00f
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/478684
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The error message being printed is expected behavior when we use
asynchronous I/O. The real error message for failing to get a tcp_req
is located in spdk_nvmf_tcp_capsule_cmd_hdr_handle.
Change-Id: I1a608fbd3a04050eacb6cb68eafd50e5128925ab
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/477872
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
TCP_REQUEST_STATE_NEW is already set in spdk_nvmf_tcp_req_get.
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: Ia835f3763cd74ef9b504901c719d9954317f49af
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/476164
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This eliminates the flushing logic, simplifying the tcp
transport.
This also happens to greatly improve performance, especially on
random read tests. The batching done in spdk_sock_writev_async seems
to be more effective than the previous batching logic in the tcp
transport.
Change-Id: Id980ac6073e380dc75f95df3f69cb224f50fb01b
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/470532
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Community-CI: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
It is valuable to have a more detailed status instead of
SPDK_NVME_SC_INTERNAL_DEVICE_ERROR.
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: Ifd003b490a7ae9af017645c97636ceaf2f93d4b0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/476634
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Destroying the qpair in poll_group_add results in a
heap-use-after-free, because the upper layer calls qpair_fini when
poll_group_add returns an error.
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: I3e921a21b7ab5f7c15c80bc5919cb97cbda0b5d2
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/475858
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
We also need to update spdk_nvmf_tcp_poll_group_poll: if the tqpair
recv state is wait_for_req, we may have already received the data,
and there may be no epoll event.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I9c5a202e47e57aaba63da143f954a20c135a98ae
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/473626
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
If we use asynchronous writev for PDU sending, the writev callback
may occur after new data has arrived, which means a free tcp request
may not be available yet. So the strategy is to check the state_cntr
of all the reqs in the
TCP_REQUEST_STATE_TRANSFERRING_CONTROLLER_TO_HOST state, as sketched
below:
1. If the state_cntr > 0, we should queue the new request.
2. If the state_cntr == 0, there is no available slot for the new tcp
request, i.e., for the new nvme command coming from the initiator.
Receiving one means the initiator has sent more requests than
allowed, and we should reject it.
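A hedged sketch of the check (member names follow the message and may
not match the tree exactly):

    if (tqpair->state_cntr[TCP_REQUEST_STATE_TRANSFERRING_CONTROLLER_TO_HOST] > 0) {
            /* a response is still in flight, so a request will free
             * up soon: queue the new command */
    } else {
            /* no slot will become available: the initiator sent more
             * commands than allowed, reject it */
    }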
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Ifbeb510e669082cb7b80faf2e7987075af31d176
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/472912
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Avoid deriving ttransport again in the sub-functions; this makes the
code more efficient.
Change-Id: Ie4c5a1755ddbecf10dc364ff811f74a7af5f9c3b
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/473003
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
When we use async writev (e.g., with lib io_uring), the writev
callback may be executed after receiving new data from the initiator.
For example, if the NVMe-oF TCP target receives the ic_req from the
initiator and sends out the ic_resp, the state of the tqpair does not
change from invalid to running until the callback executes. The
ic_resp data may already have been sent to the initiator and a new
command received while the callback (i.e.,
spdk_nvmf_tcp_send_icresp_complete) has still not executed. I faced
this issue when using lib io_uring, and this patch fixes it.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I7f4332522866d475e106ac6d36a8ec715133f0dc
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/472770
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
The loop is intended to accept multiple socks when
available, but once accept returns NULL, there's no
reason to keep trying.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I896908d276da35bc3fff172c1c17e22abd2a5343
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/473234
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
It's important to be able to recover full context from just
the PDU in the future.
Change-Id: I3d1f3c326299b1237b42dbe33d340a282c3bc5bb
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/470531
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
This is always the request pointer, so rename it for clarity.
Change-Id: Ifbda7db7787c65f0deb190a1e94f0676b2c0d99a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/470530
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Use whatever size the socket layer thinks is best. In practice,
this is the same size as before.
Change-Id: I4820e16d8da6e566d1f8f078a75d345399f64ab5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/470511
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Since we use a big buffer to read the data, the incoming data may
already have been read by the time the req is waiting for a buffer.
With the original state machine, no further read event would be
generated. The quick solution is to restore the original code: for a
req which has in-capsule data, we do not need to wait for a buffer
from the shared buffer pool.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Ib195d57cc2969235203c34664115c3322d1c9eae
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/472047
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
It can be useful for passing additional information about the nvmf
target to a handler for new nvmf connections. Context can be stored
in globals, as is currently done in the nvmf code. However, in the
case of multiple targets, or languages where accessing global state
is challenging (e.g., Rust), this becomes inconvenient.
Change-Id: Ia6a2fdba4601531822b3e5fda7ac5ab89d46f6c5
Signed-off-by: Jan Kryl <jan.kryl@mayadata.io>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/469263
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Running the following command:
pahole ./app/nvmf_tgt/nvmf_tgt -R -C spdk_nvmf_tcp_req
showed that the location of the dif_insert_or_strip bool definition
should be changed.
Change-Id: Ia43ab62bcc223a07e6415b2c769fe4af2b097f18
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/470401
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
By passing the pointer to struct spdk_nvmf_transport_poll_group to
spdk_nvmf_tcp_req_parse_sgl(), we can remove
spdk_nvmf_tcp_req_fill_iovs() and inline
spdk_nvmf_request_get_buffers() into spdk_nvmf_tcp_req_parse_sgl().
Pointers to struct spdk_nvmf_request are used in many lines of
spdk_nvmf_tcp_req_parse_sgl(); caching and using them simplifies the
function and improves readability a little.
We can pass a pointer to struct spdk_nvmf_transport rather than
struct spdk_nvmf_tcp_transport to spdk_nvmf_tcp_req_parse_sgl().
Ordering the pointer to struct spdk_nvmf_tcp_req first in the
parameters of spdk_nvmf_tcp_req_parse_sgl() matches the function
name.
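The resulting signature, per the description above (a sketch, not the
exact diff):

    static int
    spdk_nvmf_tcp_req_parse_sgl(struct spdk_nvmf_tcp_req *tcp_req,
                                struct spdk_nvmf_transport *transport,
                                struct spdk_nvmf_transport_poll_group *group);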
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I9f0d33b48383800c3b0a738eb24b11ffed7e6e60
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/469640
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This patch is close to the end of the effort to unify buffer allocation
among NVMe-oF transports.
Merge each transport's fill_buffers() into common
spdk_nvmf_request_get_buffers() of the generic NVMe-oF transport.
One noticeable change is to set req->data_from_pool to true not in
each specific transport but in the generic transport.
The next patch will add spdk_nvmf_request_get_multi_buffers() for
multi SGL case of RDMA transport.
This relatively long patch series is a preparation to support
zcopy APIs in NVMe-oF target.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Icb04e3a1fa4f5a360b1b26d2ab7c67606ca7c9a0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/469205
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The subsequent patches unify getting buffers, filling iovecs, and
filling WRs into a single API. This is a preparation.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I077c4ea8957dcb3c7e4f4181f18b04b343e9927d
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/468953
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
This patch makes it possible for the multi SGL case to call
spdk_nvmf_request_get_buffers() per WR.
This patch has an unrelated fix to clear req->iovcnt in
reset_nvmf_rdma_request() in UT. We can do the fix in a separate patch
but include it in this patch because it is very small.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: If6e5af0505fb199c95ef5d0522b579242a7cef29
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/468942
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
On a DIF verification error, fail the read command with a status code
of APPLICATION_TAG_CHECK_ERROR, GUARD_CHECK_ERROR, or
REFERENCE_TAG_CHECK_ERROR and a status code type of SCT_MEDIA_ERROR.
The state of the request is TCP_REQUEST_STATE_TRANSFERRING_CONTROLLER_TO_HOST
when a DIF verification error is detected. So dequeue the request
from C2H data queue, return the response PDU, and then send the command
response.
This was an item on the TODO list. The RDMA transport has done this
correctly from the start, so the TCP transport follows it with this
patch.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I102bbd253cc8c1379d0937c9536bf2bfe04cbf6a
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/468911
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
tcp_req->orig_length had been set just before I/O submission, but
the value is already fixed in spdk_nvmf_tcp_req_parse_sgl(). Hence
move the setting of tcp_req->orig_length accordingly.
This follows the good practice of the RDMA transport.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I99f6e266d8f7027bce810864314f3ee24a1af10c
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/468910
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The default behavior is to set it to 2MB, so this isn't
required anymore.
Change-Id: I62d7605cd4d5bc41347128f32f9a1aa373a15744
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466993
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Since we use pdu->data_iovcnt to build the iov in
nvme_tcp_build_iovs, a sent PDU has at most 2 + pdu->data_iovcnt
iovs, so change the comparison accordingly.
This makes sure that we can handle all the data owned by one PDU.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I2b9258cc5716d706c0fa38af609726c439708768
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467207
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
This unifies buffer management among transports further and is a
preparation to make buffer allocation asynchronous.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I8c588eeac4081f50fe32605feb7352f72c628d95
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466847
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Most variables related to the I/O buffer are in struct
spdk_nvmf_request now, so we can pass the nvmf_request instead of the
nvmf_tcp_req to nvmf_tcp_req_fill_buffers; do that in this patch.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I00eff578a98891e99fcb9a3aafa3d99126d6f1c1
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466089
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
This is a small performance optimization and an effort to unify
I/O buffer management further among transports.
It is ensured that the request is the first of STAILQ when
spdk_nvmf_tcp_send_c2h_data() is called or the case
TCP_REQUEST_STATE_NEED_BUFFER is executed in spdk_nvmf_tcp_req_process().
Hence change TAILQ_REMOVE to STAILQ_REMOVE_HEAD for these two cases.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I0b195874ac22a8d5ecfb283a9865d2615b7d5912
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466637
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
In this patch, we point hdr_p directly at the memory owned by
pdu_recv_buf to avoid a memory copy.
Change-Id: Iee0dd98058928f429bf7ad22103cd4826226400f
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465158
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
RDMA transport executes spdk_nvmf_rdma_request_parse_sgl() only if
the request is the first of the pending requests in the case
RDMA_REQUEST_STATE_NEED_BUFFER in the state machine
spdk_nvmf_rdma_requests_process().
This made it possible for the RDMA transport to use STAILQ for
pending requests, because STAILQ_REMOVE parses from the head and is
slow when the target is in the middle of the STAILQ.
On the other hand, the TCP transport executes
spdk_nvmf_tcp_req_parse_sgl() even if the request is in the middle of
the pending requests in the case TCP_REQUEST_STATE_NEED_BUFFER of the
state machine spdk_nvmf_tcp_req_process(), if the request has
in-capsule data. Hence the TCP transport has used TAILQ for pending
requests.
This patch removes the condition if the request has in-capsule data
from the case TCP_REQUEST_STATE_NEED_BUFFER.
The purpose of this patch is to unify I/O buffer management further.
Performance degradation was not observed even after this patch.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Idc97fe20f7013ca66fd58587773edb81ef7cbbfc
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466636
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Use spdk_nvmf_request_get_buffers() and spdk_nvmf_request_free_buffers(),
and then remove spdk_nvmf_tcp_request_free_buffers() and
spdk_nvmf_tcp_request_get_buffers().
Set tcp_req->data_from_pool to false after spdk_nvmf_request_free_buffers().
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I286b48149530c93784a4865b7215b5a33a4dd3c3
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465876
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This is a preparation to unify buffer management among transports.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I6b1c208207ae3679619239db4e6e9a77b33291d0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466002
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This is a preparation to unify buffer management among transports.
struct spdk_nvmf_request already has SPDK_NVMF_MAX_SGL_ENTRIES (16) *
2 iovecs, so doubling the number of buffers is no problem.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Idb525abbf35dc9f4b8547b785b5dfa77d106d8c9
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465873
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Purpose: reduce the number of recv/readv system calls.
Method: use a big recv buffer to conduct the read.
Though this introduces an additional buffer copy, we expect the copy
overhead to be smaller than the overhead of frequent recv/readv
system calls; the design is a trade-off between the two.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I9286fd9cec0b512cea8e3f2c335c5bf862b98573
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/464842
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Purpose: prepare for further optimization on the target side when
receiving PDU headers; we expect to use zero copy.
Change-Id: Iae7f9106844736d7160d39d0af1f5941084422ec
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465380
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
This follows the practice of RDMA transport and is a preparation to
unify buffer allocation among transports.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ib85625f2a0eca01ef4028685dd838d6c41faad7b
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465872
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>