Renamed nvme_qpair_abort_reqs() to nvme_qpair_abort_reqs_with_cbarg() to
highlight the fact that it only aborts requests with the specified cb_arg
and to distinguish it from _nvme_qpair_abort_reqs(), which aborts all
requests immediately.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I32fec5ab0501b1beb8605689d73ec42a6424fba5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9323
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This removes some code that was duplicated in the
CHECK_EN and DISABLE_WAIT_FOR_READY_1 states.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ie5d175540f71c692f7784c7ff22a48f34b9b7082
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8614
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Added mocks in preparation for making the NVMe controller initialization
use asynchronous versions of the register operations.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ifbcc3c73933fb965db710389fec8cd2d52886d4d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8610
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Monica Kenguva <monica.kenguva@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
It will make it easier to support asynchronous register set/get
functions.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I9915609ff940596ae4d67388238cc685dfa426fa
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8608
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The poll group holds lists of qpairs in different states, and
when we get an RDMA completion with an error, we iterate these
lists to find a qpair whose qp_num matches. The qp_num is stored
inside the ibv_qp that belongs to the spdk_rdma_qp structure.
When an nvme_rdma_qpair is disconnected, the pointer to
spdk_rdma_qp is cleared, but the qpair may still exist in a
poll group list, so when we start searching for a qpair by
qp_num we may dereference a NULL pointer.
This patch adds a check that the pointer to spdk_rdma_qp
is valid before dereferencing it. To minimize boilerplate code,
all checks are wrapped in a macro, sketched below. A unit test is
added to verify this fix.
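A minimal sketch of the idea, with hypothetical names (the real macro in
lib/nvme/nvme_rdma.c may be shaped differently); the point is to skip qpairs
whose spdk_rdma_qp pointer was already cleared on disconnect:

#define NVME_RDMA_QPAIR_MATCHES_QP_NUM(rqpair, num) \
	((rqpair)->rdma_qp != NULL && (rqpair)->rdma_qp->qp->qp_num == (num))

	/* ...when walking the poll group lists after an erroneous completion: */
	if (NVME_RDMA_QPAIR_MATCHES_QP_NUM(rqpair, wc->qp_num)) {
		/* found the qpair that owns this completion */
	}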
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I1925f93efb633fd5c176323d3bbd3641a1a632a9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9050
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
John Kariuki tested this patch on a system with
several Intel P5800X Optane SSDs to determine the
performance impact of adding these two
spdk_trace_record() calls in the main NVMe I/O path.
The pathological case (512B random reads on a single
Xeon core) decreased from 13.10M to 12.88M IO/s, or 1.7%.
Normal workloads (4KB+) would incur a smaller penalty
since the I/O rate would be much lower - maybe even
unnoticeable.
This is a really valuable tracepoint to have enabled
by default, so I think this small amount of degradation
is acceptable.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ie2543cadf3541eb74398d31ac0f495522ab49ec0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9303
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
If NN is very, very large, this allocates too much memory. For now, just
use a list.
Change-Id: I904977673d8fb6c86f03c94ba798c6cc07f4a4d8
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9301
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This can be done immediately after receiving the controller identify
data for now.
Change-Id: I527a44c4d1f4d3ad2eeb8fc77e07086c2358cac3
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9300
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Split the connection process across two states, which allows the
transport to connect the admin queue asynchronously.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ie84477331df0abf5ffdfc2a0ff5d5ada760c9e73
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9076
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Monica Kenguva <monica.kenguva@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The fabric connect command is now sent without blocking. This will make it
possible to make `nvme_tcp_ctrlr_connect_qpair()` non-blocking too by
moving the polling to process_completions (this will be done in
subsequent patches). Additionally, two extra states,
`NVME_TCP_QPAIR_STATE_FABRIC_CONNECT_SEND` and
`NVME_TCP_QPAIR_STATE_FABRIC_CONNECT_POLL`, were added to keep track of
the state of the connect command. These states are only used by the
initiator code, as the target doesn't need them.
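Illustrative sketch only, using assumed helper and state names, of how the two
new states could be driven from the qpair's connect-poll path:

	switch (tqpair->state) {
	case NVME_TCP_QPAIR_STATE_FABRIC_CONNECT_SEND:
		/* Issue the fabric connect command without waiting for completion. */
		rc = nvme_fabric_qpair_connect_async(&tqpair->qpair, tqpair->num_entries);
		if (rc == 0) {
			tqpair->state = NVME_TCP_QPAIR_STATE_FABRIC_CONNECT_POLL;
		}
		break;
	case NVME_TCP_QPAIR_STATE_FABRIC_CONNECT_POLL:
		/* Check whether the connect completed; -EAGAIN means keep polling. */
		rc = nvme_fabric_qpair_connect_poll(&tqpair->qpair);
		if (rc == 0) {
			tqpair->state = NVME_TCP_QPAIR_STATE_RUNNING;
		}
		break;
	default:
		break;
	}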
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I25c16501e28bb3fbfde416b7c9214f42eb126358
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8605
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
These functions will allow the fabric connect command to be sent
asynchronously.
Additionally, this patch changes the return code of
`nvme_fabric_qpair_connect()` when a timeout occurs from -EIO to
-ECANCELED. This gives a better description of the error and makes it
more consistent with the `nvme_wait_for_completion*` APIs.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I95806626d3573ebe4b1568157fd57013c4b909a7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8604
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Since the connect will be completed asynchronously, we
need to keep the pointer around so we can access it (and
free it!) later when the command completes.
Also change the code to poll on the status using the
new nvme_wait_for_completion_poll(), as prep for upcoming
patches.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I28add8f967fd000afed1e50e491a16ea9da16c22
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8603
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
This fixes a unit test failure caused by 4ac203b2d, which changed the way
asynchronous events are reported to be on a per-process basis.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I150de552bb4be5e184d6eb518abf89f83de106eb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9308
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Modified the async_list to be per-process instead of
on the controller object. This allows an NVMe multi-process
setup to have Asynchronous Events reported to each process
that may be interested in them. Previously, when the async
event list was on the controller object, AERs (Asynchronous
Event Requests) would not be reported to all the processes.
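With the list per-process, each process simply registers its own callback
after attaching; a minimal sketch using the public API (the callback body and
helper name are illustrative):

#include <stdio.h>
#include "spdk/nvme.h"

static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
	if (!spdk_nvme_cpl_is_error(cpl)) {
		/* Bits 23:16 of cdw0 carry the log page identifier for the event. */
		printf("AER received, log page 0x%x\n", (cpl->cdw0 >> 16) & 0xff);
	}
}

void
register_aer(struct spdk_nvme_ctrlr *ctrlr)
{
	/* Call this in every process (primary and secondary) that attaches. */
	spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
}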
Fixes: #1874
Signed-off-by: Curt Bruns <curt.e.bruns@gmail.com>
Change-Id: I3e885c0cf5a0fd471d243bc7d96a8b7ffe65d14b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8744
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
There may be multiple C2H data PDUs received, so we should
use the following steps:
1. Use SPDK_NVME_TCP_C2H_DATA_FLAGS_LAST_PDU
to check whether this is the last PDU or not.
If it is not, we do not clean up tcp_req, i.e., tcp_req->datao
is not cleared.
2. Then use SPDK_NVME_TCP_C2H_DATA_FLAGS_SUCCESS
to check whether the controller will send a resp PDU
or not.
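A rough sketch of the two checks, assuming the flags byte has already been
pulled out of the C2H data PDU header (the helper name, its callers, and the
header location of the flag macros are assumptions):

#include <stdbool.h>
#include <stdint.h>
#include "spdk/nvmf_spec.h"	/* assumed home of SPDK_NVME_TCP_C2H_DATA_FLAGS_* */

static void
handle_c2h_flags(uint8_t flags, bool *keep_request, bool *expect_resp_pdu)
{
	/* Not the last PDU yet: keep tcp_req (and its datao offset) for the next PDU. */
	*keep_request = !(flags & SPDK_NVME_TCP_C2H_DATA_FLAGS_LAST_PDU);

	/* SUCCESS on the last PDU means the controller will not send a resp PDU,
	 * so the request can be completed right away. */
	*expect_resp_pdu = !(flags & SPDK_NVME_TCP_C2H_DATA_FLAGS_SUCCESS);
}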
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I9dccf2579aadd18f31361444e25bd4b3b76f06c5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9192
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This will be done in stages. This patch adds the
nvme_tcp_ctrlr_connect_qpair_poll function and makes the icreq step
asynchronous. Later patches will expand it and make the
nvme_fabric_qpair_connect part asynchronous as well.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ief06f783049723131cc2469b15ad8300d51b6f32
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8599
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Just add a single .gitignore file in test/unit
that covers *_ut. That allows us to eliminate
100 .gitignore files in the test/unit directory
hierarchy.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ia190587b4d5c6f1847471be27550cbfb843dc01e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9235
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
These functions accept an extendable structure with I/O request options.
The options structure contains a memory domain that can be used to
translate or fetch data, a metadata pointer, and end-to-end data
protection parameters.
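For illustration, a hedged sketch of calling the new extended write variant;
the field names follow the spdk_nvme_ns_cmd_ext_io_opts structure added here,
while the helper name, LBA range, and callbacks are placeholders:

#include "spdk/nvme.h"

int
write_with_ext_opts(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair,
		    struct spdk_memory_domain *domain, void *md_buf,
		    spdk_nvme_req_reset_sgl_cb reset_sgl, spdk_nvme_req_next_sge_cb next_sge,
		    spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
	struct spdk_nvme_ns_cmd_ext_io_opts opts = {
		.size = sizeof(opts),		/* lets the library cope with struct growth */
		.memory_domain = domain,	/* translate/fetch data through this domain */
		.metadata = md_buf,		/* separate metadata buffer */
	};

	return spdk_nvme_ns_cmd_writev_ext(ns, qpair, 0 /* lba */, 8 /* lba_count */,
					   cb_fn, cb_arg, reset_sgl, next_sge, &opts);
}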
Change-Id: I65bfba279904e77539348520c3dfac7aadbe80d9
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6270
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Add a global list of memory domains with a reference counter.
Memory domains are used by NVMe RDMA qpairs.
Also refactor ibv_resize_cq in nvme_rdma_ut.c into a stub.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Ie58b7e99fcb2c57c967f5dee0417e74845d9e2d1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8127
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
If a qpair is part of a poll group and it is not configured in async
mode, it should use the poll group's process_completions variant.
Additionally, connecting qpairs to the poll group was moved up, so that
qpairs are already on the connected qpairs queue when waiting for the
connection to complete.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I08f75bd61a566d1ab60029b6202d9337df75733f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9074
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Monica Kenguva <monica.kenguva@intel.com>
Replaced poll cycle count with a timeout when destroying a qpair that is
part of a poll group. Tracking the time instead of a poll count is more
stable, as the number of poll cycles can vary based on the application's
behavior when destroying a qpair.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I7445bc1b411f2905aab7bf3dc7b2d3344712e1eb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9200
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The number of active namespaces may change, so on an ANA log page update
we should check whether the buffer has to be resized.
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I1720317ea7f59e5afef73d5c4bd1cd69a7dd6520
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8583
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
The log page reading function spdk_nvme_ctrlr_cmd_get_log_page() reads
into an intermediate buffer and then copies into the user-provided buffer.
So there is no need for the user buffer to be DMA-capable.
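A small sketch of what this enables (the helper name and log page choice are
illustrative); the buffer only needs to outlive the command and be freed in
the completion callback:

#include <errno.h>
#include <stdlib.h>
#include "spdk/nvme.h"

int
read_ana_log_page(struct spdk_nvme_ctrlr *ctrlr, size_t len,
		  spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
	/* Ordinary heap memory is fine now; no DMA-capable allocation needed. */
	void *buf = calloc(1, len);
	int rc;

	if (buf == NULL) {
		return -ENOMEM;
	}

	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_ASYMMETRIC_NAMESPACE_ACCESS,
					      SPDK_NVME_GLOBAL_NS_TAG, buf, len, 0,
					      cb_fn, cb_arg);
	if (rc != 0) {
		free(buf);	/* on success, free buf in cb_fn instead */
	}
	return rc;
}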
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I7337afa99c3ae666cc43ea2a48317de875334cfc
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9177
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
The async_mode option is currently supported in the PCIe transport layer
to create an I/O qpair asynchronously. The user polls the io_qpair for
completions; after the create-CQ and create-SQ commands complete, in
order, the pqpair is set to the READY state. I/O submitted before the
qpair is ready is queued internally. Currently, other transports only
support synchronous I/O qpair creation.
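Opting in looks roughly like this (a sketch; the helper name is illustrative
and only the PCIe transport honors async_mode for now):

#include "spdk/nvme.h"

struct spdk_nvme_qpair *
alloc_async_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_io_qpair_opts opts;

	spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
	opts.async_mode = true;	/* return the qpair before CQ/SQ creation completes */

	/* I/O submitted before the qpair is READY is queued internally;
	 * polling the qpair for completions drives it to the READY state. */
	return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
}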
Signed-off-by: Monica Kenguva <monica.kenguva@intel.com>
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: Ib2f9043872bd5602274e2508cf1fe9ff4211cabb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8911
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
The generic transport layer still does a busy wait, but at least
the logic in the PCIe transport now creates the queue pair
asynchronously.
Signed-off-by: Monica Kenguva <monica.kenguva@intel.com>
Change-Id: I9669ccb81a90ee0a36d3f5512bc49c503923b293
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8910
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
The patch below broke the UT by not accounting for changes
in nvme_cuse since its original submission:
(19f0bfd) test/nvme_cuse: cases for stop cuse
The change was introduced with:
(d651f8a) nvme/nvme_cuse: Fix race condition in cuse session
Now, when initializing the nvme_cuse controller, it uses is_started
to avoid a race condition.
The UT was missing this part of the nvme_cuse controller initialization.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I46344977204c3383d8f400c80bc7df50e6d7581d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9001
Reviewed-by: Michal Berger <michalx.berger@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This fix is the same as the one for the NVMe bdev module.
If an ANA log page has two or more ANA group descriptors, the second
and later ANA group descriptors will not be 8-byte aligned.
A runtime error would then occur as follows:
runtime error: member access within misaligned address 0x612000000074
for type 'const struct spdk_nvme_ana_group_descriptor', which requires
8 byte alignment
nvmf_get_ana_log_page() in lib/nvmf/ctrlr.c creates the ANA log page
data and handles 8-byte alignment correctly because we hit the same
runtime error there before. However, lib/nvme had been missed at that
time.
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Idaa610544dc5cb659c387fcd38a2b4b97cbd06e5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8398
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Monica Kenguva <monica.kenguva@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Nvmf/vfio-user uses this API to map NVMe commands sent from a
VM from a Guest Physical Address to a Host Virtual Address, so
we now move this API from the nvme library into nvmf/vfio-user
as an internal API.
UT code will be added back in a coming patch.
Change-Id: I54817fc9811ccd9ddd97b3aa6762a2fce4bbdda6
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8574
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Returning an error from this function is not useful - there
is nothing the caller can do with that information. So
change the return value to void. Also add ERRLOG and assert
if a transport actually returns a non-zero status, to
force the transport implementer (which must be an out-of-tree
transport) to make changes as necessary.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I402afec045265db178af821d25b99a6dbe066eab
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8659
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This update will allow us to use spdk_nvme_detach_async() and
spdk_nvme_detach_poll_async() more easily to aggregate multiple detachments.
Previously, we could do:
spdk_nvme_detach_async()
spdk_nvme_detach_async()
spdk_nvme_detach_async()
and then start calling spdk_nvme_detach_poll_async().
So aggregating multiple detachments was already supported.
After this patch, the following sequence is possible:
spdk_nvme_detach_async() = 0
spdk_nvme_detach_async() = 0
spdk_nvme_detach_async() = 0
spdk_nvme_detach_poll_async() = -EAGAIN
spdk_nvme_detach_async() = 0
spdk_nvme_detach_async() = 0
spdk_nvme_detach_poll_async() = -EAGAIN
spdk_nvme_detach_poll_async() = -EAGAIN
spdk_nvme_detach_poll_async() = -EAGAIN
spdk_nvme_detach_poll_async() = 0
The actual change is to remove the variable polling_started from
struct spdk_nvme_detach_ctx because it is no longer necessary.
This change is clarified by updating the header file and CHANGELOG,
and verified by a unit test.
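In code, the newly allowed interleaving looks roughly like this (the controller
handles, helper name, and busy-wait loop are illustrative only):

#include "spdk/nvme.h"

void
detach_all(struct spdk_nvme_ctrlr *ctrlr1, struct spdk_nvme_ctrlr *ctrlr2,
	   struct spdk_nvme_ctrlr *ctrlr3)
{
	struct spdk_nvme_detach_ctx *ctx = NULL;

	spdk_nvme_detach_async(ctrlr1, &ctx);
	spdk_nvme_detach_async(ctrlr2, &ctx);

	spdk_nvme_detach_poll_async(ctx);	/* may return -EAGAIN; polling has started */

	/* After this patch, more detachments can still be added to the same context. */
	spdk_nvme_detach_async(ctrlr3, &ctx);

	/* Poll until every aggregated detachment has completed. */
	while (spdk_nvme_detach_poll_async(ctx) == -EAGAIN) {
	}
}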
Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Iebdf6c27c5304a2097b7084c315ccc99634ffa1e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8468
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
When receiving a PDU, we add crc32c offloading via the Accel framework.
Because the amount of data over which the header digest is calculated is
too small, we do not offload the header digest.
Change-Id: If2c827a3a4e9d19f0b6d5aa8d89b0823925bd860
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7734
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Using the CMB for SQs is not a standard use case.
Performance can vary widely when using CMB for SQs
and is typically not the configuration used for
benchmarking.
So let's change the default value here to 'false'; users
can still opt in by setting this option to true in the
spdk_nvme_ctrlr_opts structure prior to attach, as sketched
below.
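The opt-in, sketched with the standard probe callback (the callback name and
context are illustrative):

#include <stdbool.h>
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	opts->use_cmb_sqs = true;	/* explicit opt-in, prior to attach */
	return true;			/* attach to this controller */
}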
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iab746ba777b04152ffb92fea2a2bb923a0a0bf21
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8227
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
And for some internal functions we need to pass the controller
parameter so that we can do vtophys based on the transport type.
Change-Id: I3ca4fa162ec9305f62b295ba21f7474c21edfe52
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8031
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>