Subsequent patches will implement PI verification when a PI error occurs,
but PI verification will differ between read and write.
Subsequent patches will also set IO flags for normal read and write but
will not set IO flags for checked read.
Current nesting stack,
bdev_nvme_readv/writev
-> bdev_nvme_queue_cmd
-> spdk_nvme_ns_cmd_readv/writev
-> bdev_nvme_queued_done
makes these changes difficult.
Hence this patch inlines bdev_nvme_queue_cmd into bdev_nvme_readv/writev,
adds separate completion functions bdev_nvme_readv/writev_done, and
removes enum direction.
This patch doesn't cause any functional change.
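For illustration, a minimal sketch of the flattened read path (argument lists
are simplified and helper names not mentioned above are assumptions):
```
static void
bdev_nvme_readv_done(void *ref, const struct spdk_nvme_cpl *cpl)
{
	/* read-specific completion handling; later patches can add PI
	 * verification here when a PI error is reported */
}

static int
bdev_nvme_readv(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		struct nvme_bdev_io *bio, struct iovec *iov, int iovcnt,
		uint32_t lba_count, uint64_t lba)
{
	bio->iovs = iov;
	bio->iovcnt = iovcnt;

	/* spdk_nvme_ns_cmd_readv() is called directly with a read-specific
	 * callback instead of going through bdev_nvme_queue_cmd() */
	return spdk_nvme_ns_cmd_readv(ns, qpair, lba, lba_count,
				      bdev_nvme_readv_done, bio, 0,
				      bdev_nvme_queued_reset_sgl,
				      bdev_nvme_queued_next_sge);
}
```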
Change-Id: I2f97ff21245539c690490d0fc4134d2e0049eddd
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/443187
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
PI check flags are not set automatically on NVMe controllers created by the
hot plug handler. Document this behavior for clarification.
Change-Id: I9590d0cb7f53a24c33afd706e222065893d23cb4
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/444012
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Add "prchk:reftag|guard" as the 3rd item of the TransportID row
in the [Nvme] section.
apptag is not supported yet, the same as with the JSON RPC.
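For example, a controller line with reftag and guard checking enabled could
look like this (the transport address is just a placeholder):
```
[Nvme]
  TransportID "trtype:PCIe traddr:0000:00:04.0" Nvme0 "prchk:reftag|guard"
```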
These two patches cannot control hot-added NVMe controllers, but
we should not set prchk options on hot-added NVMe controllers
automatically. Hence the next patch will document this behavior
explicitly.
Change-Id: I74a73ac52779aa50c5b45e20ffb61002e95f33ef
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/443835
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
The next patch will use the string "prchk:reftag|guard" as the
per-controller prchk option in the .INI config file.
Hence add helper functions for it beforehand.
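A sketch of what such a helper could look like (the function name here is
hypothetical; the real helper names may differ):
```
static int
parse_prchk_flags(const char *str, uint32_t *prchk_flags)
{
	if (str == NULL) {
		return 0;
	}

	if (strncmp(str, "prchk:", strlen("prchk:")) != 0) {
		return -EINVAL;
	}

	str += strlen("prchk:");
	if (strstr(str, "reftag") != NULL) {
		*prchk_flags |= SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;
	}
	if (strstr(str, "guard") != NULL) {
		*prchk_flags |= SPDK_NVME_IO_FLAGS_PRCHK_GUARD;
	}

	return 0;
}
```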
Change-Id: I58c225cc36cc84bf594f108e611028996b5eedb9
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/443834
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Add prchk_reftag and prchk_guard to the construct_nvme_bdev RPC.
In spdk_rpc_construct_nvme_bdev, create prchk_flags based on them
and pass it to spdk_bdev_nvme_create, which in turn passes it to
create_ctrlr.
A single enable_prchk option might have been enough, but separate options
are added for reftag and guard to clarify that apptag is not supported yet.
The next patch will make per-controller PRCHK options configurable
via the .INI config file.
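A sketch of the flag construction in the RPC handler (req stands for the
decoded RPC parameters and is illustrative):
```
uint32_t prchk_flags = 0;

if (req.prchk_reftag) {
	prchk_flags |= SPDK_NVME_IO_FLAGS_PRCHK_REFTAG;
}
if (req.prchk_guard) {
	prchk_flags |= SPDK_NVME_IO_FLAGS_PRCHK_GUARD;
}
/* prchk_flags is then passed through spdk_bdev_nvme_create() to create_ctrlr() */
```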
Change-Id: I370ebbe984ee83d133b7f50bdc648ea746c8d42d
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/443833
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Add prchk_flags to struct nvme_ctrlr, set it when creating the
corresponding controller, and copy it to each bdev of the
controller.
Change-Id: Ie971a0c1539b5419de9e5168ed47ac0e579be2c5
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/443186
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Bdev doesn't support APIs that pass metadata that is not interleaved with
the logical block data. So, for now, return an error explicitly when creating
an NVMe bdev with separate metadata.
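A sketch of the explicit rejection (whether it returns -ENOTSUP and where
exactly the check lives are assumptions):
```
if (spdk_nvme_ns_get_md_size(ns) != 0 && !spdk_nvme_ns_supports_extended_lba(ns)) {
	SPDK_ERRLOG("Separate metadata is not supported yet\n");
	return -ENOTSUP;
}
```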
Change-Id: I0776e72232c8e7758ad11b405e7e4914e779d131
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/444011
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Metadata location and DIF type are set only if there is metadata, and
DIF location is set only if DIF is enabled.
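A sketch of the conditional setup (disk and ns are assumed locals; the field
names follow the related bdev patches below, and the exact NVMe bdev module
code may differ):
```
const struct spdk_nvme_ns_data *nsdata = spdk_nvme_ns_get_data(ns);

if (spdk_nvme_ns_get_md_size(ns) != 0) {
	disk->md_interleave = nsdata->flbas.extended;
	disk->dif_type = (enum spdk_dif_type)spdk_nvme_ns_get_pi_type(ns);
	if (disk->dif_type != SPDK_DIF_DISABLE) {
		disk->dif_is_head_of_md = nsdata->dps.md_start;
	}
}
```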
Change-Id: Ib684b54332820446ff1a0b609f5b4e0b3d42f2f9
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/443344
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This patch fixes the following issue:
https://github.com/spdk/spdk/issues/638
Reason: for SGL support, the implementation of the function
nvme_tcp_pdu_set_data_buf is not correct. The translation is wrong
for in-capsule data when SGL is used. In order not to redo the
translation by calling the SGL function again, we use a variable
to store the buf.
Change-Id: I580d266d85a1a805b5f168271acac25e5fd60190
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/444066
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Currently, the SPDK_BDEV_REGISTER_MODULE() macro uses __LINE__
to generate functions like spdk_bdev_module_register_187().
Typically this is not a problem, as these functions are not called directly;
rather, they are only used as constructor functions to load the bdevs during
system startup.
There are languages, however (e.g. Rust), that require these functions to be
referenced explicitly to prevent them from being removed during the linking phase.
In order to reference them, having the names predictable (rather than
potentially changing with every commit) makes things easier.
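A minimal illustration of the idea (this is not the actual SPDK macro; the
macro name below is made up):
```
#define REGISTER_BDEV_MODULE(mod)					\
	static void __attribute__((constructor))			\
	spdk_bdev_module_register_ ## mod(void)				\
	{								\
		spdk_bdev_module_list_add(&mod);			\
	}
```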
Change-Id: I15947ed9136912cfe2368db7e5bba833f1d94b15
Signed-off-by: gila <jeffry.molanus@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/443536
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
spdk_thread_poll()
This is an optimization if the calling function already knows the
current time.
Change-Id: I1645e08e7475ba6345a44e0f9d4b297a79f6c3c2
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443634
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
strip_size as an RPC param is now deprecated and can be removed
in a future release. Either strip_size or strip_size_kb can be
used, but only one of them; otherwise the RPC will fail.
Internally we maintain both fields because the strip size always
comes in as KB but we convert it to blocks, so having both elements
makes it clear to developers what they're looking at.
The JSON output includes both strip_size and strip_size_kb.
Fixes #550
Change-Id: I5dc51e8af22eae3d56af8f8d37a564dbaae228fa
Signed-off-by: paul luse <paul.e.luse@intel.com>
Reviewed-on: https://review.gerrithub.io/c/437873
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
DPDK 19.02 requires this mempool to be allocated via
crypto-specific function which returns rte_mempool.
To keep the amount of #ifs minimal, we'll use rte_mempool
unconditionally.
Change-Id: I3a09de41e237e168580bb92b574854e291e68a74
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443785
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
We set up the qpairs on module init but never
released them. Some memory was leaked, although since
it was allocated with rte_malloc() it couldn't be
picked up by ASAN.
The rte_cryptodev API offers rte_cryptodev_queue_pair_setup()
to set up a qpair, but there's no equivalent function to
release it. We have to access the rte_cryptodev structure
directly and call a qpair release function pointer that's
stored inside. It seems very, very hacky, but the entire
rte_cryptodev structure is part of the public API and
the global array of all such devices is an exported
symbol.
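A sketch of the workaround (cdev_id and num_qpairs are placeholders; the
structure and field names are as described above):
```
struct rte_cryptodev *cdev = &rte_cryptodevs[cdev_id];
uint16_t qp;

for (qp = 0; qp < num_qpairs; qp++) {
	if (cdev->dev_ops->queue_pair_release != NULL) {
		cdev->dev_ops->queue_pair_release(cdev, qp);
	}
}
```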
Change-Id: I17ac73d1098ca9a92d2dfd52e0f905e2c2b5488f
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443561
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The typical rdma qpair disconnect function goes through the function
_nvmf_rdma_disconnect_retry. When this function was introduced, it was
discovered that we could receive a qpair disconnect event for a given
qpair before that qpair had been assigned to a poll group. In order to
ensure that the disconnect procedure completed properly, we waited on
the current thread in _nvmf_rdma_disconnect_retry for the qpair to be
assigned a poll group before we finally disconnected. see rdma.c:2250.
Since _nvmf_rdma_disconnect_retry was not necessarily called from the
poll group's thread, we relied upon the assumption that the group
variable would never be set back to NULL. See the comment on rdma.c:
2243.
However, in _spdk_nvmf_qpair_destroy we were setting the group back to
NULL. This operation can result in the following set of operations
across multiple threads that prevent a qpair from ever being fully
destroyed.
1. thread 1: receive a disconnect event - call nvmf_rdma_disconnect
2. thread 1: from nvmf_rdma_disconnect call
spdk_nvmf_rdma_qpair_inc_refcnt - setting rqpair->refcnt to 1.
3. thread 2: call spdk_nvmf_rdma_poller_poll.
4. thread 2: in spdk_nvmf_rdma_poller_poll reap a completion with an
error status which causes us to call spdk_nvmf_qpair_disconnect -
rdma:2846
5. thread 2: spdk_nvmf_qpair_disconnect calls _spdk_nvmf_qpair_destroy which sets
qpair->group = NULL
6. thread 1: from nvmf_rdma_disconnect we call
_nvmf_rdma_disconnect_retry which checks if qpair->group == NULL. If
that is the case, we assume that the qpair has not been assigned a group
yet and send ourself a message to call _nvmf_rdma_disconnect_retry again. see rdma.c:2253
7. thread 2: from _spdk_nvmf_qpair_destroy we call
spdk_nvmf_transport_qpair_fini which results in a call to
spdk_nvmf_rdma_close_qpair. which sends dummy send and recvs to the
qpair.
8. thread 2: we call poller_poll and get completions for both the send
and recv dummy requests. This results in a call to
spdk_nvmf_rdma_qpair_destroy.
9. thread 2: spdk_nvmf_rdma_qpair_destroy checks rqpair->refcnt and, when
it sees that it is not 0 (see step 2 above), it returns without
freeing the resources. See rdma.c:629.
10. thread 1: we keep churning in _nvmf_rdma_disconnect_retry sending
ourselves messages because rqpair->group is going to be null. Thread 1
never reaches line 2257 where it sends a message to call
_nvmf_rdma_qpair_disconnect. _nvmf_rdma_qpair_disconnect is the function
that decreases the rqpair->refcnt and allows us to make forward progress
on destroying the qpair.
I encountered this issue while trying to disconnect from our target
using the kernel initiator with an x722 NIC. I think the timing of this
bug shows up with that specific configuration because some of the calls
in the disconnect path on thread 1 fail, causing it to take longer and
giving the second thread a chance to delete the qpair.
There are really two issues at play here. We don't have a single point
of entry for disconnecting RDMA qpairs, and we rely on the qpair->group
variable never being set back to NULL. This patch addresses the second
issue, and the next patch in the series addresses the first.
Change-Id: I65395d0bbb67edfa7bad2ddc70906606c3d83781
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443304
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Includes the required DPDK dependencies for SPDK block Reduce aka
Compression.
Change-Id: Ic1ea3cbeb9373a7700f6f0c2a3194d65d6a34a41
Signed-off-by: paul luse <paul.e.luse@intel.com>
Reviewed-on: https://review.gerrithub.io/c/429523
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This patch is for DIF check types.
Add enum spdk_dif_check_type to DIF library.
Add a field dif_check_flags to struct spdk_bdev and add
spdk_bdev_is_dif_check_enabled to bdev APIs.
The added enum is intended to improve usability. Without it, the
caller would have to get the raw flags data and mask each bit.
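For example, a bdev API user could then write (sketch; bdev is an assumed
struct spdk_bdev pointer):
```
if (spdk_bdev_is_dif_check_enabled(bdev, SPDK_DIF_CHECK_TYPE_REFTAG)) {
	/* reference tag checking is enabled for this bdev */
}
```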
Change-Id: Ia46a37a9684dc968dcc51963674f0a9963e0cd4d
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/443339
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This patch is for DIF settings.
Add fields dif_type and dif_is_head_of_md to struct spdk_bdev and
add spdk_bdev_get_dif_type and spdk_bdev_is_dif_head_of_md to the
bdev APIs.
The fields dif_type and dif_is_head_of_md are added to the JSON
information dump.
Change-Id: I15db10cb170a76e77fc44a36a68224917d633160
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/443184
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The next patch will introduce enum spdk_dif_check_type so that users can
easily know whether checking a DIF field is enabled or not.
This patch renames the bitmask macros from SPDK_DIF_*_CHECK to
SPDK_DIF_FLAGS_*_CHECK to avoid misinterpretation.
Using FLAGS was derived from SPDK_NVME_IO_FLAGS_PRCHK_* in
include/spdk/nvme_spec.h.
Change-Id: I89e155d047352f54091c14b9251464cd3a72a162
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/443338
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
To support DIF, bdev will need to expose the following information:
- Metadata format
- Block size
- Metadata size
- Metadata setting (interleave or separate)
- DIF settings
- DIF type 1, 2, or 3
- DIF location
- DIF check types
- Guard check
- Reference tag check
- Application tag check
This patch is for the metadata format. Subsequent patches will do the same
for the DIF settings and DIF check types.
Add the fields md_len and md_interleave to struct spdk_bdev, and add
spdk_bdev_get_md_size and spdk_bdev_is_md_interleaved to the bdev APIs.
The fields md_len and md_interleave are also added to the bdev JSON information dump.
DIF will at first be used only in the NVMe bdev module and the upcoming virtual
DIF bdev module, but the additional storage required for md_len and md_interleave
is very small and they are simple. Hence add them directly to struct spdk_bdev.
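For example (sketch; bdev is an assumed struct spdk_bdev pointer):
```
uint32_t md_size = spdk_bdev_get_md_size(bdev);

if (md_size != 0 && spdk_bdev_is_md_interleaved(bdev)) {
	/* each data block carries md_size extra bytes of interleaved metadata */
}
```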
Change-Id: I4109f6a63e6f0576efe424feb0305a9a17b9b2e8
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/443183
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The timeout is set to 0, so it never waits anyway. But
this should be 0.
Change-Id: I8b4058017a91b647ea9324f1474a732921c389f0
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443647
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
This doesn't fix any bug, but it makes more sense to leave the qpair
in the NVME_TCP_PDU_RECV_STATE_AWAIT_PDU_READY state until it
receives at least one byte.
Change-Id: Ic5f34a733a80b58f65a1334fae7e07dbded2b3d0
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441811
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
The `len` field wasn't used at all and `reserved` is
no longer needed after we removed the paddr in the
previous patch.
This effectively cuts down spdk_mobj struct size by half.
Change-Id: Ica39f3a30e14ec1275a87d827dc41df5df9cf623
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443483
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The physical addresses in iSCSI are completely unused
as iSCSI does not perform any DMA on its own.
Change-Id: I350037b708a9f36f423e6ca6f7c822d8b6b95116
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443482
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
We explicitly checked for one of the strings in the
parsed RPC request even though it's required for the
entire request to parse successfully. The extra check
is now removed.
Change-Id: I19c446786e4ac88b88f14e18dc5258f31b1a87f1
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443317
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Since we no longer use external events and we access
all vhost devices synchronously, we no longer need
to dynamically allocate our RPC request contexts. They
can be put just on the stack.
Change-Id: Ie887607b67451aba4f3404c4b9551e6424335beb
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440380
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Removed their various usages inside the core vhost code
together with the external events themselves. External
events were completely replaced by spdk_vhost_lock()
and spdk_vhost_dev_find().
Change-Id: I1f9d0268c27a06e2eecab9e7d179b1fd54d4223d
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440379
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Replaced them with inline code that performs exactly
the same but is shorter and easier to follow. External
events were replaced by spdk_vhost_lock() and
spdk_vhost_dev_find().
Change-Id: Id46a619c592c20a573664b54efc097489e9bb893
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440378
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Currently, infrequent cases in the request completion path are marked as
unlikely. This patch applies the same to the submission path.
These cases are infrequent and marked using the unlikely macro:
a. The sq tail reaches the end of the queue.
b. The sq tail equals the sq head (never happens if the FW runs correctly).
c. The qpair is the admin queue.
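The pattern, sketched with illustrative names (the pqpair fields are
assumptions; spdk_unlikely() comes from include/spdk/likely.h):
```
if (spdk_unlikely(++pqpair->sq_tail == pqpair->num_entries)) {
	pqpair->sq_tail = 0;	/* case a: wrap around at the end of the queue */
}

if (spdk_unlikely(pqpair->sq_tail == pqpair->sq_head)) {
	/* case b: should never happen if the FW runs correctly */
}
```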
Change-Id: I8b873a18615788f2efbf7c683aad710c7007a082
Signed-off-by: lorneli <lorneli@163.com>
Reviewed-on: https://review.gerrithub.io/c/443451
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The management channel was used in the RDMA transport prior
to the introduction of poll groups and made its way over to
the TCP transport when it was written. Eliminate it in favor
of just using the poll group.
Change-Id: Icde631dd97a6a29190c4a4a6a10a0cb7c4f07a0e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442432
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
This was only temporarily required for polling. With
a per-group aio ctx, it isn't needed anymore.
Change-Id: Ie59b50a4700f0f99dea470f857d187ac656dd229
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443467
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
We only need one aio context for the entire set of channels
sharing a thread.
Change-Id: I1143247901586efe50530b28323ddb923bc6b242
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443314
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This is marginally more convenient.
Change-Id: I9989d687b80051ccb2e07edc5e1efdbca75e8716
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443313
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
This will be used later.
Change-Id: I12b07756a13d03a34c9705306d720c1db7ecb15c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443312
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
This wasn't actually necessary. The next patch in this series will
change the way aio is used such that only one aio context is
polled for the entire group of channels on a single thread.
Change-Id: I05c4d824d9c63a51c8a2d608d84c184f249f66d7
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443311
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This isn't used just yet, but will be necessary temporarily
during this patch series.
Change-Id: I7f04426c27e3fe0417e2f60bac28217fa44c0cb2
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443310
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Move it next to the other channel definition.
Change-Id: I9ec33c135836d3dc326abe4ce7588e7a2eff77d4
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443309
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
These didn't need to be visible.
Change-Id: I337a02802cac4431b4abd9a922408d4147801565
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443308
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Small static function only called from one place, so
just inline it.
Change-Id: Ibc54f790da55dd1635d81181208b1d506550ca9c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443307
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
It does not need to be in the header file.
Change-Id: I5c489de81e48b11d02b66cbdd6d9ac05eae16429
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443306
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
max_read_depth should be based on max_qp_init_read_atomic, i.e. the
maximum number of reads that the initiator will accept as
outstanding.
The device attributes object contains values for both the initiator
(remote side) and the target (local side). All attributes with the name
init in them are meant to correspond to the initiator. The
qp_read_atomic value represents the number of reads and atomic
operations that can have this device as the target. qp_init_read_atomic
represents how many read operations the initiator has said that we can
have outstanding that have the initiator's rdma device as the target.
Since this number represents how many outstanding reads we will send to
the initiator at once, we should use the qp_init_read_atomic value.
Change-Id: Iacc044e8321080de8accd9128ac3777bbb948afc
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442409
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
ftl_process_reloc should process the free queue first
(this starts read operations) and then process the write queue.
Change-Id: I3a44b3651cc1526f8a024330472f94aa8d818193
Signed-off-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443403
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Id44f9de4500ec2be45aa4203c5945b1501fbdb21
Signed-off-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443236
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This function gets used as a function pointer, which
seems to keep the compiler from trying to inline the
function. Stack manipulation was showing up in the
perf profile pointing to this. Marking the function
as inline gets it actually inlined in the hot I/O
path.
Improves bdevperf microbenchmark from 78M to 85M IO/s.
Cores are virtually identical - 11.4M on core 0 and
10.4-10.6M on remaining cores.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iadced071dfc07fc09db6da3571c930988b2dc3fd
Reviewed-on: https://review.gerrithub.io/c/443278
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This keeps the hottest structures at the head of the
cache and helps improve performance.
Improves microbenchmark (8 null bdevs on 8 lcores,
bdevperf seq read with qd=1) from 67M to 78M on my
Xeon E5-v3 system. Core 0 performance remains about
the same (10.7-10.8M) but others cores improve from
around 8.0M each to 9.4M.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ia3ccf94ab39b6f911127f0bd1016e352027b11fc
Reviewed-on: https://review.gerrithub.io/c/443277
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I2bad16b6649c279448a3c662ab7b035dbe0a4bfb
Signed-off-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443251
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The OCSSD spec and a build-time check already ensure that
sizeof(struct spdk_ocssd_geometry_data) is 4096, so we can use
struct spdk_ftl_dev::geo as the buffer directly.
Change-Id: Id7a52f978d80284fe941d9f5d7bc7219518871e8
Signed-off-by: wuzhouhui <wuzhouhui@kingsoft.com>
Reviewed-on: https://review.gerrithub.io/c/443069
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
According to the current implementation, the functions called by
bdev_ftl_init_bdev() do not call the callback if they return an errno.
Besides, the callers of bdev_ftl_init_bdev() (e.g.
spdk_rpc_construct_ftl_bdev()) don't expect the callback to be called if the
callee returns an errno.
Change-Id: I5f36d5332ac66db65bb2090e9625a73b1107306b
Signed-off-by: wuzhouhui <wuzhouhui@kingsoft.com>
Reviewed-on: https://review.gerrithub.io/c/443068
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
There is no need to hold g_ftl_bdev_lock when calling bdev_ftl_create.
Besides, the functions (e.g. bdev_ftl_add_ctrlr) that are called by
bdev_ftl_create lock g_ftl_bdev_lock again.
Change-Id: I74751822364e16c58a3065dc78f8a4dce157e925
Signed-off-by: wuzhouhui <wuzhouhui@kingsoft.com>
Reviewed-on: https://review.gerrithub.io/c/443066
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Vhost external events no longer do any asynchronous
calls, they only lock the vhost mutex and directly
call the provided function. The mutex encapsulation
isn't worth the additional complexity of splitting
each vdev-handling code into multiple functions, so
we expose low-level APIs that should eventually
replace external events entirely.
Instead of:
```
static int
do_something_cb(struct spdk_vhost_dev *vdev, void *arg)
{
	struct my_data *ctx = arg;

	/* access the vdev and ctx */
	free(ctx);
	return 0;
}

struct my_data *ctx = calloc(...);
rc = spdk_vhost_call_external_event("my_vdev", do_something_cb, ctx);
if (rc != 0) { /* err handling */ }
```
We can now do just:
```
spdk_vhost_lock();
vdev = spdk_vhost_dev_find("my_vdev");
if (vdev == NULL) { /* err handling */ }
/* access the vdev and any context data */
spdk_vhost_unlock();
```
Change-Id: I06e1e149d6dd006720b021d3bef8d9b7bfaeceaa
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440377
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
_allocate_bit_arrays() needs vol->backing_dev to be set, which was being done
after the call.
Change-Id: Ic8c36c98aee94fbd8230273638011b948cd95675
Signed-off-by: paul luse <paul.e.luse@intel.com>
Reviewed-on: https://review.gerrithub.io/c/443048
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: wuzhouhui <wuzhouhui@kingsoft.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This is only needed within the c file. It doesn't
need to be in the public header.
Change-Id: I0e072ea5eddc6edc84faecee9ef50fb2c20dbb24
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442426
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This is a holdover from before poll groups were introduced.
We just need a per-thread context for a set of connections,
so now that a poll group exists we can use that instead.
Change-Id: I1a91abf52dac6e77ea8505741519332548595c57
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442430
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The READ and ATOMIC in the comment above are capitalized, so
make this all caps too.
Change-Id: I49fae2ceb826b22953d9b26d42b95f17e2dac617
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442427
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
request.c didn't have much code, so let's collapse
it into ctrlr.c and make that the place where all
software emulation of the NVMe controller, including
request handling, is done.
Change-Id: Id7c98010cb222a414a5aa0b78bfb299a0ffc418f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440592
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Previously, all I/O commands were implemented by simply
passing them to the bdev layer. Now, some I/O commands will
be emulated. Prepare for that by moving the code for this
function to ctrlr.c, where the emulation will occur.
Change-Id: Id34e5549e5ce216d602fb347b4506fbd324eed4e
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440591
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This was previously very unmap specific. Make at least the top level
DSM call more general purpose by eliminating the unmap_ctx.
Change-Id: I9c044263e9b7e4ce7613badc36b51d00b6957d3a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440590
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
These are left over from the removal of virtual mode over a year ago.
Change-Id: Ia797c4570bf9090346ff22ab9c7d719a78d023d0
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440589
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This was only used by the target, and it didn't actually need it.
Change-Id: Ibcef410165efdc16077da24419580ed51b087d70
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442440
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This type was actually two entirely different types for
the initiator and the target, so just make it void.
Change-Id: I15512d9d4efd790dce0fa4323b7230de66144bc6
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442438
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
After passing the protective MBR check, there is a high probability that
this bdev is in GPT format. If parsing the primary table fails, read the secondary
table and try to get the partition info from it. When the secondary table is parsed
successfully, add a warning log to notify users that the primary table is broken.
Change-Id: I4f16edcdd57b9cde8d8cc74ec88ba95b97bd6b63
Signed-off-by: lorneli <lorneli@163.com>
Reviewed-on: https://review.gerrithub.io/c/441201
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: wuzhouhui <wuzhouhui@kingsoft.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Modify the existing code that parses the primary partition table to also support
parsing the secondary one.
The main difference between these two tables is that they have an inverse buffer
layout: for the primary table, the header is in front of the partition entries,
and for the secondary table, the header is after the partition entries. So add
helper functions to extract the header and partition entries buffer regions from
the primary or secondary table based on the current parse phase.
Split the exported function spdk_gpt_parse into two functions, spdk_gpt_parse_mbr
and spdk_gpt_parse_partition_table, so that spdk_gpt_parse_partition_table can be
used to parse both the primary and the secondary table.
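A sketch of the resulting parse flow in a caller (the parse-phase constant
names are assumptions):
```
rc = spdk_gpt_parse_mbr(gpt);
if (rc == 0) {
	gpt->parse_phase = SPDK_GPT_PARSE_PHASE_PRIMARY;
	rc = spdk_gpt_parse_partition_table(gpt);
	if (rc != 0) {
		/* primary table is broken: read the secondary table from the
		 * end of the disk into the buffer, then retry */
		gpt->parse_phase = SPDK_GPT_PARSE_PHASE_SECONDARY;
		rc = spdk_gpt_parse_partition_table(gpt);
	}
}
```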
Change-Id: I7f7827e0ee7e3f1b2e88c56607ee5b702fb2490c
Signed-off-by: lorneli <lorneli@163.com>
Reviewed-on: https://review.gerrithub.io/c/441200
Reviewed-by: wuzhouhui <wuzhouhui@kingsoft.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Unmap, discard, and write zeroes I/Os will be sent down from the
higher level stack. Remove these I/Os from the QoS limit.
Change-Id: Ieb3cc19f31c43f8ddf8f8d2fd338f442ef48b679
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442673
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Liang Yan <liang.z.yan@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
When a connection goes to close and has no I/O outstanding,
the current_recv_depth was being decremented beyond 0 and rolling over.
If the poll group then finds a successful receive completion on the next
poll (for a command that arrived prior to starting the disconnect but
hadn't been processed yet), it would trip the max queue depth check
added recently and start another disconnect process. If only one command
arrives in this window, everything actually works out ok.
However, if there are two receive completions sitting in the completion
queue after the disconnect process is started, the first one does the
double disconnect and the second one does another disconnect which ends
up dereferencing a null pointer.
Since there is always a special reserved slot for the dummy recv, don't
do decrements or increments of the current_recv_depth for the dummy
recv. This allows the code to still enforce the actual max_queue_depth
on recvs without underflowing or overflowing the counter.
Change-Id: I56c95b2424e956a3b007b25c50cbf47262245b8f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442642
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
trace_record is used to poll the SPDK trace shm file
and store new entries from it into another specified trace file.
This helps retain trace entries that would otherwise be
overwritten in the circular trace buffer.
Note:
* trace_record reads the input tracefile into process-local
memory and writes trace entries to the output file only at shutdown.
* trace_record can be shut down on a SIGINT or SIGTERM signal.
A usage sample is:
./spdk_trace_record -s bdev_svc -p <spdk app pid> -f trace.tmp -q
Change-Id: If073a05022ec9c1b45923c38ba407a873be8741b
Signed-off-by: Liu Xiaodong <xiaodong.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/433385
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This RPC doesn't really work in some cases - for example,
trying to delete one NVMe namespace bdev from a controller
with multiple namespaces, or just one virtio SCSI device
from a virtio-scsi controller. We've previously kept it
and marked it as "debugging only" - but every bdev module
has its own RPC method now for deleting what it constructed,
so keeping the generic delete_bdev RPC is asking for
trouble in some of the cases mentioned above. We'll remove
it in the 19.04 release.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I639254b32a3e1c840a4e9ae2658c42f4f321b676
Reviewed-on: https://review.gerrithub.io/c/442616
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This was marked deprecated in the v18.10 release, so
remove it now before v19.01 is tagged.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I57673a5ab475b97c812bebcefd77ff90d9305d1c
Reviewed-on: https://review.gerrithub.io/c/442412
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This includes properly detecting when a key's name
extends past the end of the valid data.
Note that the unit tests were using sizeof() instead
of strlen() since some of the strings contain
NULL characters. This means that we should be
subtracting one to account for the implicit null
character at the end of the string. Note that the
iSCSI spec only says that the key/value pair has to
end with a null character - a key/value pair that
is split across two PDUs will not have a NULL character
at the end of the first PDU.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ie95d6dd3b9ffa6a3902a31771ac4edb482418cce
Reviewed-on: https://review.gerrithub.io/c/442450
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Since params are parsed directly from the PDU's data
buffer, we need to know the end of the valid data. Otherwise
previous PDUs that used this same data buffer may have left
non-zero characters just after the end of the text associated
with a LOGIN or TEXT PDU.
Found this bug while debugging an intermittent Calsoft test
failure. Added a unit test to reproduce the original issue,
which now verifies that it is fixed.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic3706639ff6c4f8f344fd58c88ec11e247ea654c
Reviewed-on: https://review.gerrithub.io/c/442449
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
For the number of trace entries, change strtoull to spdk_strtoll;
the change causes no issue.
Besides, getopt guarantees that if an option character in optstring is
followed by a colon, the option's argument (optarg) is not NULL.
spdk_app_parse_args() had unnecessary NULL pointer checks related to
this. Hence remove those NULL pointer checks too.
Change-Id: I33d0328205d1765f70f70fc734d0d8b4165fef5e
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/441641
Reviewed-by: wuzhouhui <wuzhouhui@kingsoft.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Without this check valgrind complains that we are using an
uninitialized variable.
Change-Id: I5cb73d10e167004f6e4df9e3621ec3b35ec2448d
Signed-off-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442519
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
In SPDK, we will build isa-l with the no-shared option
and then integrate it into SPDK, so we do not need
to install isa-l among the system libraries.
Note: the ocf build in autobuild.sh now needs to build
include/spdk/config.h before building the OCF library,
to ensure that header is available in a clean build
environment.
Change-Id: I3f0ce6932b386de17a77cf5bfdfd738b22417e2d
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Signed-off-by: paul luse <paul.e.luse@intel.com>
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441279
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Chunyang Hui <chunyang.hui@intel.com>
The io_device associated with the aio bdev was only
getting unregistered when the aio bdev was explicitly
deleted - not in the implicit deletion path at shutdown.
Move the io_device_unregister into the destruct_cb -
this makes sure the io_device is always unregistered, whether
the bdev is getting unregistered via an explicit RPC or
implicitly in the shutdown path.
Fixes #618.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I44b77f5c38339f4cf97b02c0ee4002bf5fcc9998
Reviewed-on: https://review.gerrithub.io/c/442119
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Verify that the namespace used is formatted with a supported LBA format
(4K block size).
Change-Id: I59e2ed71354e8530d9fa0e3f6b323ded83097afa
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441881
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
The existing vhost-nvme-specific vhost socket messages will get some
information from the backend target before the start_session
call, so we should look up the associated NVMe controller by vid
rather than by session.
Fixes issue #628.
Change-Id: Ia400bf33895a0feee0058a870f26b0ff72b7556f
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442498
Reviewed-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-by: Liang Yan <liang.z.yan@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Marked as deprecated in 18.10.
Change-Id: I40d0e6103623aee6e6a0b9fa6e82f7b826ca1fe6
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442420
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Error checking of strtol is left to its users, but some use cases
of strtol in SPDK do not have enough error checking yet.
For example, strtol returns 0 if there were no digits at all.
Requiring each use case to add sufficient error checking
for strtol should be avoided.
Hence spdk_strtol and spdk_strtoll do additional error checking
according to the description in the strtol manual.
Besides, there is no use case for negative numbers now, so to keep
things simple, spdk_strtol and spdk_strtoll allow only strings that
are a positive number or zero.
As a result of this policy, callers only have to check that
the return value is not negative.
Subsequent patches will replace atoi with spdk_strtol because atoi
does no error checking.
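Usage sketch (arg is an assumed option string):
```
long val = spdk_strtol(arg, 10);

if (val < 0) {
	/* spdk_strtol() returns a negative errno on parse failure */
	return val;
}
```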
Change-Id: If3d549970595e53b1141674e47710fe4dd062bc5
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-on: https://review.gerrithub.io/c/441626
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: wuzhouhui <wuzhouhui@kingsoft.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
This was marked deprecated in 18.10.
Change-Id: Id47e770b0388c935fe684aeef7a9824f24cef47f
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/c/442416
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Add rte_pause() to the waiting while loop.
This commit also adds spdk_pause() as an interface to rte_pause().
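Sketch of the pattern (g_ready is an illustrative flag):
```
while (!g_ready) {
	spdk_pause();	/* CPU hint while busy-waiting, wraps rte_pause() */
}
```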
Change-Id: I56e1023731e2e78febaa4f45808d6f07656d290f
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.gerrithub.io/c/436494
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Use a mempool for allocating OCF requests of constant size.
The previous method used malloc, which is significantly slower.
Change-Id: I539ff22efc18fbd353ceb2687ea211d2baaa7523
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439680
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Maciej Szwed <maciej.szwed@intel.com>
One of the messages we send on memory hotplug event is
SET_VRING_BASE, which tells vhost e.g. the position in
a vring it should start processing requests from. Sending
this message with any outstanding I/O could cause that
I/O to be never processed as it could be at a vring
position that won't be practically polled.
To fix the above, we don't send SET_VRING_BASE message
on memory hotplug event anymore since it's completely
unnecessary. It was sent together with a couple other
messages that would reinitialize the vring, but we know
vrings occupy a memory buffer that won't be hotremoved
during vring lifetime. We also know that vring GPAs will
never change. Hence we can initialize the vrings just
once on device start now.
We still need to send SET_VRING_ADDR after updating the
memory table, as rte_vhost depends on it to apply that
new memory table. Luckily, this single message doesn't
cause us any trouble.
Change-Id: I2125099f1cf3f8c76e8160ec819bd1a9a3e7823c
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439436
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
We assumed the second descriptor in an I/O descriptor
chain will always point to a payload buffer, but in case
there is no payload, the second descriptor will point to
a response buffer. The vhost code doesn't provide proper
checks to handle such case, so to avoid various errors
down the stack, we just fail all requests with no
payload.
Change-Id: I6785c2843d6db4fc17e68e03562c2a1530bb469b
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/437187
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: <dstepanov.src@gmail.com>
This ensures that SPDK will detect descriptor chains
that are too long.
The additional check in vhost block stands as an
optimization and makes us fail the corrupted I/O early.
Change-Id: Icceaa0dd938dca96a1872e5ee96bf6a151fdd9e7
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Signed-off-by: Dima Stepanov <dstepanov.src@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/433641
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
SPDK doesn't provide sufficient runtime checks to properly
handle clients with memory sizes that aren't 2MB multiples
and could potentially segfault during I/O processing.
That's why we'll reject such clients now.
Change-Id: I34e85be5b5c6df863371d0ad688f228ed44107ff
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/433640
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Add a new RPC method for OCF bdevs: get_ocf_bdevs.
It is useful for OCF bdevs that are not registered
and thus do not appear in the standard get_bdevs call.
Change-Id: I8a5fc86a880b04c47d5f139aa5fa4d07ca39c853
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441655
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Add basic handling of base device hotremove.
When either the core or the caching device gets
unregistered, the vbdev_ocf unregisters itself as well.
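A minimal sketch of the idea, assuming a hot-remove callback registered when opening each base bdev; demo_ocf_vbdev_delete() is a hypothetical stand-in for the real teardown path:
```
static void
demo_ocf_vbdev_delete(void *ocf_vbdev)
{
        (void)ocf_vbdev; /* unregister the OCF vbdev here */
}

/* Matches the spdk_bdev_remove_cb_t shape used by spdk_bdev_open(). */
static void
demo_base_bdev_hotremove_cb(void *remove_ctx)
{
        /* Losing either the core or the caching device invalidates the
         * pair, so the OCF vbdev unregisters itself as well. */
        demo_ocf_vbdev_delete(remove_ctx);
}
```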
Change-Id: I05769f714bf22cb320558fed86adc8c3d8a0a185
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.gerrithub.io/c/435729
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Since we have different requirements for submitting RDMA read and write
operations, we should track them separately so that we don't block
writes when the device does not have enough resources for read
operations.
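A minimal model of the accounting (illustrative struct, not the SPDK one): reads are bounded by the device's READ credits, writes only by the send-queue depth:
```
#include <stdbool.h>
#include <stdint.h>

struct demo_rdma_counters {
        uint32_t cur_read_depth;  /* outstanding RDMA READ operations */
        uint32_t cur_send_depth;  /* outstanding send-queue work requests */
        uint32_t max_read_depth;  /* derived from max_qp_rd_atom */
        uint32_t max_send_depth;  /* send queue size */
};

static bool
demo_can_submit_read(const struct demo_rdma_counters *c)
{
        return c->cur_read_depth < c->max_read_depth &&
               c->cur_send_depth < c->max_send_depth;
}

static bool
demo_can_submit_write(const struct demo_rdma_counters *c)
{
        /* Writes are not limited by the READ credits. */
        return c->cur_send_depth < c->max_send_depth;
}
```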
Change-Id: I5d6424c0e26f2f5362866d1bb21eb46700c245da
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441794
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Before, the number of WRs and the number of RDMA requests were
linked by a constant multiple. This is no longer the case, so we
need to make sure that we don't overshoot the limit of WRs for
the qpair.
Change-Id: I0eac75e96c25d78d0656e4b22747f15902acdab7
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439573
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Add an OCF module based on the OCF meta-library.
Open CAS Framework (OCF) is a high-performance block storage
caching meta-library.
It is open source, published at https://github.com/Open-CAS/ocf
With this patch an OCF-enabled device is represented in SPDK
as a virtual bdev with core and caching devices as its base devices.
This patch includes the implementation of:
* OCF top adapter (vbdev_ocf.c)
* OCF bottom adapter (dobj.c, data.c)
* Adaptation layer for OCF (env/)
* OCF context abstractions (ctx.c)
The adaptation layer and context abstractions do not depend on the SPDK bdev layer.
The OCF bdev supports reads and writes and is configured at startup.
Other features will be added in separate patches.
Change-Id: Ic2dcab378c8238d16f1e4b64d4374bdf257565bc
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.gerrithub.io/c/435708
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
_spdk_bdev_io_submit uses the bdev_io->internal.in_submit_request
flag to ensure we unwind in cases where the I/O is completed
inline (e.g. malloc or null bdevs). But when an I/O gets queued
for QoS and we later iterate through the queued I/O in
_spdk_bdev_qos_io_submit(), this flag was not getting set
when those I/O were submitted to the underlying bdev. This
allowed _spdk_bdev_qos_io_submit to recurse, resulting
in various kinds of memory corruption.
Fixes #613.
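A minimal model of the fix (not the actual bdev structures): the QoS submit loop now brackets each submission with in_submit_request, exactly like the regular submit path, so an inline completion unwinds instead of re-entering the loop:
```
#include <stdbool.h>

struct demo_bdev_io {
        bool in_submit_request;
};

static void
demo_submit_to_underlying_bdev(struct demo_bdev_io *io)
{
        (void)io; /* may complete the I/O inline, e.g. malloc or null bdevs */
}

static void
demo_qos_submit(struct demo_bdev_io *io)
{
        io->in_submit_request = true;
        demo_submit_to_underlying_bdev(io);
        io->in_submit_request = false;
}
```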
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I29263f4e7b2ead60f08b60474d210defa803348c
Reviewed-on: https://review.gerrithub.io/c/442127
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Liang Yan <liang.z.yan@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: GangCao <gang.cao@intel.com>
It is perfectly valid for a bdev to not support the
unmap command - there's no need to print an ERRLOG
when a SCSI INQUIRY 0xB2 (LOGICAL BLOCK PROVISIONING)
command is sent to query if the LUN supports it.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I18389df4d55a1ac186707d624ddea292a5470e80
Reviewed-on: https://review.gerrithub.io/c/442104
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Technically this check is correct, but the Linux kernel
target doesn't have it, and older versions of libiscsi
have a bug which results in a stale ExpStatSN getting sent,
leading to terminated connections with the SPDK iSCSI
target at high queue depths.
Fixes #600.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I76eaf9dee2d733bfa3f8d43b86528de6b556cbd6
Reviewed-on: https://review.gerrithub.io/c/441981
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Those cases should never occur. Klocwork pointed out a
possible dereference based on the returns later in
the functions.
Change-Id: I282a56f3f415f85c38e9c451cbb10bc80fc6176b
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441546
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This gives us more realistic control over the number of requests we can
submit.
Change-Id: Ie717912685eaa56905c32d143c7887b636c1a9e9
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441606
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
rw_depth was a misinterpretation of the spec. It is based on the value
of max_qp_rd_atom, which only governs the number of read and atomic
operations. However, we were using rw_depth to block both read and write
operations, which is an unnecessary restriction. Write operations should
only be governed by the number of work requests posted to the send
queue. We currently guarantee that we will never overshoot the queue
depth for work requests since they are embedded in the requests and
limited to a size of max_queue_depth.
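For reference, the value in question comes from the device attributes and only applies to READ/atomic operations; a hedged sketch using the standard verbs API:
```
#include <stdint.h>
#include <infiniband/verbs.h>

static int
demo_query_read_depth(struct ibv_context *ctx, uint32_t *max_read_depth)
{
        struct ibv_device_attr attr;

        if (ibv_query_device(ctx, &attr) != 0) {
                return -1;
        }
        /* Bound only outstanding RDMA READs by this value; writes are
         * bounded by the send queue depth instead. */
        *max_read_depth = (uint32_t)attr.max_qp_rd_atom;
        return 0;
}
```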
Change-Id: Ib945ade4ef9a63420afce5af7e4852932345a460
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441165
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This will be necessary later on when we need to throttle send and recv
requests in software.
Change-Id: Ifb25eaabd15e101fbfc2959a08a321f80857b280
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441604
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Both the initiator and the target use the minimum 10-second
timeout value, so set it in the kas field when initializing
the controller.
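For clarity on the arithmetic: the Identify Controller KAS field is expressed in 100 ms units per the NVMe spec, so a 10-second granularity is reported as 100. A minimal sketch with illustrative names:
```
#include <stdint.h>

#define DEMO_KAS_TIME_UNIT_MS          100    /* KAS granularity per NVMe spec */
#define DEMO_MIN_KEEP_ALIVE_TIMEOUT_MS 10000  /* 10 seconds */

static uint16_t
demo_kas_value(void)
{
        return DEMO_MIN_KEEP_ALIVE_TIMEOUT_MS / DEMO_KAS_TIME_UNIT_MS; /* = 100 */
}
```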
Change-Id: Idda68bdfe27613ebaf706a0de497145d3f9ed766
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441995
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Currently, the code does not comply with the spec, so
remove it for 19.01; code which complies with the spec
will be added for 19.04.
Change-Id: Icd3b2573fbc46dc2fa7a00c6672c23ea01ffe0ee
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/441985
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Although Vhost SCSI code is technically capable
of polling different sessions on different lcores,
the underlying SCSI API won't allow allocating
io_channels on more than one lcore.
That's why we will now let device backends assign
lcores by themselves.
The first Vhost SCSI session will now choose one
core from the available ones, and any subsequent
sessions will stick to the same one.
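A minimal model of the assignment policy (names are illustrative): the first session latches a core and every later session reuses it:
```
#include <stdint.h>

static uint32_t g_demo_scsi_lcore = UINT32_MAX;

static uint32_t
demo_pick_any_available_lcore(void)
{
        return 0; /* placeholder for the real core-mask based selection */
}

static uint32_t
demo_scsi_session_lcore(void)
{
        if (g_demo_scsi_lcore == UINT32_MAX) {
                g_demo_scsi_lcore = demo_pick_any_available_lcore();
        }
        return g_demo_scsi_lcore;
}
```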
Change-Id: I616cd195a919960dff68508473cea236abf8d6a3
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441581
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
If there is a socket read error, we should disconnect the
socket directly instead of setting the tqpair to the
RECV_ERROR state. Being in the RECV_ERROR state does not
mean that we should close the socket immediately.
Change-Id: I975906653c13eb3fa5195799c517015435176785
Signed-off-by: Ziye Yang <optimistyzy@gmail.com>
Reviewed-on: https://review.gerrithub.io/c/441830
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Bump the log level for EAL to RTE_LOG_NOTICE.
Reading from rte_log.h:
```
RTE_LOG_NOTICE 6U /**< Normal but significant condition. */
RTE_LOG_INFO 7U /**< Informational. */
RTE_LOG_DEBUG 8U /**< Debug-level messages. */
```
We're doing this primarily for the NVMe hotplug poller,
which calls spdk_pci_enumerate() and constantly bloats
the output with logs describing which device is currently
iterated over. We don't want to see those.
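One way to express the same cap through the public DPDK API (SPDK may instead pass the level via the EAL init arguments); shown only as a hedged illustration:
```
#include <rte_log.h>

static void
demo_quiet_eal_logging(void)
{
        rte_log_set_global_level(RTE_LOG_NOTICE); /* suppress INFO/DEBUG chatter */
}
```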
Change-Id: I1a90e514fdf467bc95da910f786f1818757cfdcf
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441789
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
In patch fbec702944 (bdev/crypto: Set QAT alignment
requirement) we added an alignment requirement for I/O
buffers, but the internally-allocated buffers for
encryption didn't respect it.
We now allocate those buffers with the crypto bdev's
required alignment. The alignment is only required for QAT
and we apply it unconditionally, but we don't want to
strcmp the driver name in the hot I/O path just for that -
the code is to be refactored anyway.
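A hedged sketch of the allocation, assuming spdk_dma_malloc() and an already-computed byte alignment (the surrounding names are illustrative):
```
#include <stddef.h>
#include "spdk/env.h"

static void *
demo_alloc_crypto_buffer(size_t len, size_t required_alignment)
{
        /* The old code effectively passed no particular alignment here. */
        return spdk_dma_malloc(len, required_alignment, NULL);
}
```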
Change-Id: I2cbc04408ddc5574f212b63536a05eb73ceba104
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441908
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The CQ size assigned when creating the CQ may be exceeded
under heavy workloads with too many qpairs. Enlarging it
dynamically can prevent the IBV_EVENT_CQ_ERR caused by CQ
overrun.
This patch fixes issue #498:
https://github.com/spdk/spdk/issues/498
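A hedged sketch of the dynamic growth using the standard verbs call; the doubling policy is illustrative, not necessarily what the patch implements:
```
#include <infiniband/verbs.h>

static int
demo_maybe_grow_cq(struct ibv_cq *cq, int required_cqe, int current_cqe)
{
        if (required_cqe <= current_cqe) {
                return 0;
        }
        /* Grow with headroom so we don't resize on every new qpair. */
        return ibv_resize_cq(cq, 2 * required_cqe);
}
```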
Change-Id: I6c2d7194d4147d812d49d4fe787fcba5c6bbede9
Signed-off-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/440853
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Seth Howell <seth.howell5141@gmail.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
This change was provided by GitHub user vikasbrcm to fix issue 562.
I am uploading his change to facilitate testing of the issue and
possibly get it merged before the 19.01 window closes.
Change-Id: I58fb1058f68c6c02006ceed6e577be627e6dbc09
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441611
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
This patch adds FTL bdev. RPC scripts have been updated to allow for
creation and removal of FTL bdevs.
Change-Id: I82a5c5033b65bbeb67c238cae969a68cff767dcc
Signed-off-by: Jakub Radtke <jakub.radtke@intel.com>
Signed-off-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Signed-off-by: Mateusz Kozlowski <mateusz.kozlowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/431329
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
With all the patches in place, we can finally enable
having more than one simultaneous session to a single
vhost device.
This patch adds a unique id to the session structure,
similar to the one in a vhost device and also fills in
the implementation holes in foreach_session().
Vhost-NVMe can support only one session per device
and now has an additional check that prevents it from
starting more than one at a time.
Vhost-SCSI also has the same check now since it needs
additional work on the lcore assignment policy. The
check will be removed once the required work is done.
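A minimal model of the per-backend guard described above (the struct is illustrative, not the SPDK vhost session bookkeeping):
```
#include <errno.h>
#include <stdint.h>

struct demo_vhost_dev {
        uint32_t active_session_num;
};

static int
demo_start_session(struct demo_vhost_dev *vdev)
{
        if (vdev->active_session_num > 0) {
                /* Vhost-NVMe (and, for now, Vhost-SCSI) allow only one
                 * simultaneous session per device. */
                return -EBUSY;
        }
        vdev->active_session_num++;
        return 0;
}
```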
Change-Id: I13a32c7a0eae808e9bec63a7b8c15ec0bc2e36ed
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439324
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Particular backends will now be responsible for sending
events to vsession->lcore. This was previously done by
the generic vhost layer, but since some backends will
need different lcore assignment policies soon, we need
to give them more power now.
Change-Id: I72cbbccb9d5a5b2358acca6d4b6bb882131937af
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/441580
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
It's sessions that are tied to the lcores now.
This makes the vhost devices accessible from any
thread, as long as it holds the global vhost mutex.
The mechanism used for external device events was
refactored to serve for foreach_session() API.
Additionally, since we don't want to handle cases
where the entire vhost device gets removed while
an asynchronous foreach_session chain is pending,
a new per-vdev counter of pending async operations
was added. We'll fail the device removal request
if there are any pending operations. Eventually
we would like the device removal to be asynchronous,
but that's a todo for later.
The external events are still there, although
they only lock the mutex and call the provided
function now.
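And a similarly minimal model of the removal guard (illustrative names): a device with asynchronous foreach_session() operations still in flight refuses removal:
```
#include <errno.h>
#include <stdint.h>

struct demo_vdev_state {
        uint32_t pending_async_op_num;
};

static int
demo_remove_device(struct demo_vdev_state *vdev)
{
        if (vdev->pending_async_op_num > 0) {
                return -EBUSY; /* an async foreach_session() chain is pending */
        }
        /* ... proceed with removal ... */
        return 0;
}
```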
Change-Id: I20618f9420a9bc04270373469deaad8fb2049c7c
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/439323
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>