Commit Graph

94 Commits

Author SHA1 Message Date
Shuhei Matsumoto
bcd987ea2d nvme_rdma: Support SRQ for I/O qpairs
Support SRQ in the RDMA transport of the NVMe-oF initiator.

Add a new spdk_nvme_transport_opts structure and add rdma_srq_size
to it.

For the user of the NVMe driver, provide two public APIs,
spdk_nvme_transport_get_opts() and spdk_nvme_transport_set_opts().

In the NVMe driver, the instance of spdk_nvme_transport_opts,
g_spdk_nvme_transport_opts, is accessible throughout.
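
As a usage sketch, assuming the get/set pair copies an options struct by
size (the field and function names are the ones this commit introduces):

    #include "spdk/nvme.h"

    static int
    enable_rdma_srq(void)
    {
        struct spdk_nvme_transport_opts opts;
        int rc;

        /* Read the current global transport options. */
        rc = spdk_nvme_transport_get_opts(&opts, sizeof(opts));
        if (rc != 0) {
            return rc;
        }

        /* A non-zero rdma_srq_size enables SRQ for RDMA I/O qpairs. */
        opts.rdma_srq_size = 4096;

        /* Apply the options before attaching any controllers. */
        return spdk_nvme_transport_set_opts(&opts, sizeof(opts));
    }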

Because async event handling caused conflicts between the
initiator and target, the NVMe-oF RDMA initiator does not handle
the LAST_WQE_REACHED event. Hence, it may get a WC for an already
destroyed QP. To clarify this, add a comment in the source code.

The following is the result of a small performance evaluation using the
SPDK NVMe perf tool. Even for queue_depth=1, the overhead was less than 1%.
Eventually, we may be able to enable SRQ by default for the NVMe-oF
initiator.

1.1 randwrite, qd=1, srq=enabled
./build/examples/perf -q 1 -s 1024 -w randwrite -t 30 -c 0XF -o 4096 -r
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  162411.97     634.42       6.14       5.42     284.07
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  163095.87     637.09       6.12       5.41     423.95
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  164725.30     643.46       6.06       5.32     165.60
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  162548.57     634.96       6.14       5.39     227.24
========================================================
Total                                                                     :  652781.70    2549.93       6.12

1.2 randwrite, qd=1, srq=disabled
./build/examples/perf -q 1 -s 1024 -w randwrite -t 30 -c 0XF -o 4096 -r
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  163398.03     638.27       6.11       5.33     240.76
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  164632.47     643.10       6.06       5.29     125.22
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  164694.40     643.34       6.06       5.31     408.43
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  164007.13     640.65       6.08       5.33     170.10
========================================================
Total                                                                     :  656732.03    2565.36       6.08       5.29     408.43

2.1 randread, qd=1, srq=enabled
./build/examples/perf -q 1 -s 1024 -w randread -t 30 -c 0xF -o 4096 -r '
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  153514.40     599.67       6.50       5.97     277.22
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  153567.57     599.87       6.50       5.95     408.06
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  153590.33     599.96       6.50       5.88     134.74
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  153357.40     599.05       6.51       5.97     229.03
========================================================
Total                                                                     :  614029.70    2398.55       6.50       5.88     408.06

2.2 randread, qd=1, srq=disabled
./build/examples/perf -q 1 -s 1024 -w randread -t 30 -c 0XF -o 4096 -r '
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  154452.40     603.33       6.46       5.94     233.15
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  154711.67     604.34       6.45       5.91      25.55
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  154717.70     604.37       6.45       5.88     130.92
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  154713.77     604.35       6.45       5.91     128.19
========================================================
Total                                                                     :  618595.53    2416.39       6.45       5.88     233.15

3.1 randwrite, qd=32, srq=enabled
./build/examples/perf -q 32 -s 1024 -w randwrite -t 30 -c 0XF -o 4096 -r 'trtype:RDMA adrfam:IPv4 traddr:1.1.18.1 trsvcid:4420'
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  672608.17    2627.38      47.56      11.33     326.96
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  672386.20    2626.51      47.58      11.03     221.88
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  673343.70    2630.25      47.51       9.11     387.54
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  672799.10    2628.12      47.55      10.48     552.80
========================================================
Total                                                                     : 2691137.17   10512.25      47.55       9.11     552.80

3.2 randwrite, qd=32, srq=disabled
./build/examples/perf -q 32 -s 1024 -w randwrite -t 30 -c 0XF -o 4096 -r 'trtype:RDMA adrfam:IPv4 traddr:1.1.18.1 trsvcid:4420'
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  672647.53    2627.53      47.56      11.13     389.95
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  672756.50    2627.96      47.55       9.53     394.83
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  672464.63    2626.81      47.57       9.48     528.07
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  673250.73    2629.89      47.52       9.43     389.83
========================================================
Total                                                                     : 2691119.40   10512.19      47.55       9.43     528.07

4.1 randread, qd=32, srq=enabled
./build/examples/perf -q 32 -s 1024 -w randread -t 30 -c 0xF -o 4096 -r
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  677286.30    2645.65      47.23      12.29     335.90
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  677554.97    2646.70      47.22      20.39     196.21
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  677086.07    2644.87      47.25      19.17     386.26
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  677654.93    2647.09      47.21      18.92     181.05
========================================================
Total                                                                     : 2709582.27   10584.31      47.23      12.29     386.26

4.2 randread, qd=32, srq=disabled
./build/examples/perf -q 32 -s 1024 -w randread -t 30 -c 0XF -o 4096 -r
========================================================
                                                                                                              Latency(us)
Device Information                                                        :       IOPS      MiB/s    Average        min        max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  0:  677432.60    2646.22      47.22      13.05     435.91
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  1:  677450.43    2646.29      47.22      16.26     178.60
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  2:  677647.10    2647.06      47.21      17.82     177.83
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core  3:  677047.33    2644.72      47.25      15.62     308.21
========================================================
Total                                                                     : 2709577.47   10584.29      47.23      13.05     435.91

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I843a5eda14e872bf6e2010e9f63b8e46d5bba691
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14174
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-01-17 23:53:01 +00:00
Michal Berger
3f912cf0e9 misc: Fix spelling mistakes
Found with misspell-fixer.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: If062df0189d92e4fb2da3f055fb981909780dc04
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15207
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2022-12-09 08:16:18 +00:00
paul luse
a6dbe3721e update Intel copyright notices
per Intel policy, to include the file commit date using the git command
below.  The policy does not apply to non-Intel (C) notices.

git log --follow -C90% --format=%ad --date default <file> | tail -1

and then pull just the four-digit year from the result.

Intel copyrights were not added to files where Intel either had
no contribution or the contribution lacked substance (i.e. license
header updates, formatting changes, etc.).  Contribution date used
"--follow -C95%" to get the most accurate date.

Note that several files in this patch didn't end the license/(c)
block with a blank comment line, so one was added, as the vast
majority of files do have this last blank line. It is simply there
for consistency.

Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: Id5b7ce4f658fe87132f14139ead58d6e285c04d4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15192
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
2022-11-10 08:28:53 +00:00
Shuhei Matsumoto
cdf61c2f22 nvme: Poll only the qpair if ctrlr is not fabrics when connecting synchronously
For non-fabric controllers, the corresponding I/O qpairs are simply
re-enabled at controller reset.

This had an issue when I/O qpairs span multiple threads and a poll group
is used.

spdk_nvme_ctrlr_reconnect_poll_async() calls
nvme_transport_ctrlr_connect_qpair() with qpair->async being false.
Then nvme_transport_ctrlr_connect_qpair() calls
spdk_nvme_poll_group_process_completions() until the qpair is connected.
spdk_nvme_poll_group_process_completions() may poll other qpairs.
This may cause I/O to complete on the wrong thread.

For PCIe controllers, spdk_nvme_poll_group_process_completions() simply
calls spdk_nvme_qpair_process_completions() for each qpair.

Hence change nvme_transport_ctrlr_connect_qpair() to call
spdk_nvme_qpair_process_completions() if the controller is non-fabrics.
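
A rough sketch of the resulting connect-time polling; nvme_qpair_get_state()
and the poll_group field are internal driver details, and
dummy_disconnected_cb is a hypothetical no-op callback:

    /* Busy wait until the qpair leaves the CONNECTING state. */
    while (nvme_qpair_get_state(qpair) == NVME_QPAIR_CONNECTING) {
        if (qpair->poll_group && spdk_nvme_ctrlr_is_fabrics(ctrlr)) {
            /* Fabrics: the poll group variant is needed to make
             * progress on the fabrics CONNECT command. */
            rc = spdk_nvme_poll_group_process_completions(
                     qpair->poll_group->group, 0, dummy_disconnected_cb);
        } else {
            /* Non-fabrics (e.g. PCIe): poll only this qpair so other
             * qpairs' I/O cannot complete on the wrong thread. */
            rc = spdk_nvme_qpair_process_completions(qpair, 0);
        }
        if (rc < 0) {
            break;
        }
    }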

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ieb270c2fb154124021ef6d25577b817d05e5ca9e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14295
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2022-09-05 12:50:00 +00:00
Jim Harris
0f068506ca nvme: complete register_operations in the correct process
In multi-process, we need to make sure we don't
complete a register_operation in the wrong process.  So
save the pid in the nvme_register_completion structure
when it is inserted into the STAILQ, then only complete
operations where the pid matches.

Fixes issue #2630.
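
A sketch of the pattern; the struct members and queue head shown here are
assumptions for illustration:

    #include <sys/queue.h>
    #include <unistd.h>

    struct nvme_register_completion {
        STAILQ_ENTRY(nvme_register_completion) stailq;
        pid_t pid;   /* process that submitted the register operation */
        /* ... cb_fn, cb_arg, completion value ... */
    };

    /* When inserting into the STAILQ: */
    ctx->pid = getpid();

    /* When completing, skip operations owned by other processes: */
    STAILQ_FOREACH_SAFE(ctx, &ctrlr->register_operations, stailq, tmp) {
        if (ctx->pid != getpid()) {
            continue;
        }
        /* invoke the callback only in the owning process */
    }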

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I58c995237db486fecdd89d95e9e7a64379d0b0e5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13940
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
2022-08-18 10:09:55 +00:00
Evgeniy Kochetov
3dd0bc9e09 nvme: Add transport controller ready step
This step allows custom transports to perform extra actions or checks
at controller initialization and fail initialization if required.
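
A sketch of the shape of such a hook in the transport ops table; the member
name ctrlr_ready is an assumption based on the commit title:

    struct spdk_nvme_transport_ops {
        /* ... existing callbacks ... */

        /* Called as a final controller initialization step; returning a
         * negative errno fails the initialization. */
        int (*ctrlr_ready)(struct spdk_nvme_ctrlr *ctrlr);
    };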

Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Ic7cadae5398a35903917ceace3828f4371be63a3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12631
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2022-08-04 07:29:03 +00:00
Changpeng Liu
ac31590b37 nvme: make spdk_nvme_ctrlr_free_io_qpair multi-process safe
In the multi-process case, a process may call `spdk_nvme_ctrlr_free_io_qpair` on
a foreign I/O qpair (i.e. one that this process did not create) when that qpair's
process exits unexpectedly.

The variable `qpair->poll_group` isn't multi-process safe, so we can't use it
in `spdk_nvme_ctrlr_free_io_qpair` and the related transport poll group APIs.

Change-Id: Ic13a6a2c7d760477be5be5a56a45caa2b5518717
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13573
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2022-07-11 07:41:09 +00:00
Ben Walker
8dd1cd2104 check_format: For C files only, fix return type breaks
In SPDK, declarations have the return type on the same line. Definitions
have the return type on a separate line. Astyle has an option for
enforcing this. Unfortunately, it seems to have two bugs:

1) It doesn't work correctly at all on C++ files.
2) It often fails on functions that return enums or long type names

Deal with 1) by adjusting the check_format.sh script to tell astyle to
fix return type line breaks only for C files and not C++. Deal with 2) by
adding a few typedefs to work around the problem.
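
For reference, the convention being enforced looks like this
(spdk_example_get_value is a made-up function):

    /* Declaration: return type on the same line. */
    int spdk_example_get_value(const struct spdk_example *ex);

    /* Definition: return type on its own line. */
    int
    spdk_example_get_value(const struct spdk_example *ex)
    {
        return ex->value;
    }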

Change-Id: Idf28281466cab8411ce252d5f02ab384166790c6
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13437
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
2022-06-27 09:33:48 +00:00
Jim Harris
488570ebd4 Replace most BSD 3-clause license text with SPDX identifier.
Many open source projects have moved to using SPDX identifiers
to specify license information, reducing the amount of
boilerplate code in every source file.  This patch replaces
the bulk of SPDK .c, .cpp and Makefiles with the BSD-3-Clause
identifier.

Almost all of these files share the exact same license text,
and this patch only modifies the files that contain the
most common license text.  There can be slight variations
because the third clause contains company names - most say
"Intel Corporation", but there are instances for Nvidia,
Samsung, Eideticom and even "the copyright holder".

Used a bash script to automate replacement of the license text
with SPDX identifier which is checked into scripts/spdx.sh.
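
After the replacement, a typical header shrinks to something like:

    /*   SPDX-License-Identifier: BSD-3-Clause
     *   Copyright (C) 2016 Intel Corporation.
     *   All rights reserved.
     */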

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iaa88ab5e92ea471691dc298cfe41ebfb5d169780
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12904
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: <qun.wan@intel.com>
2022-06-09 07:35:12 +00:00
Ben Walker
813756e75e nvme: Do not abort transport commands when disconnecting a qpair
Make this a transport-level decision instead. TCP and RDMA do want to
abort, but PCIe cannot because these commands may still be receiving DMA
operations from the device.

Change-Id: I305acddc3819c903eb3217e8f710d4216d0b3931
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11509
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
2022-05-19 08:23:57 +00:00
Shuhei Matsumoto
8926303b59 nvme: Caller polls qpair until disconnected if async connect failed
nvme_transport_ctrlr_connect_qpair() calls
nvme_transport_ctrlr_disconnect_qpair() if it fails.

If async qpair disconnect is supported, then even when connecting the
qpair failed, nvme_transport_ctrlr_connect_qpair() may complete
asynchronously later.

The cases in which qpair->async is set to true are I/O qpairs for the
NVMe bdev module and admin qpairs.

example/nvme/perf and example/nvme/reconnect use I/O qpairs but both
set qpair->async to false.

For the NVMe bdev module, an I/O qpair is connected when creating an I/O
channel or resetting the ctrlr. If spdk_nvme_ctrlr_connect_io_qpair()
returns 0 for an I/O qpair, the qpair is in a poll group, is polled
by spdk_nvme_poll_group_process_completions(), and has a disconnected
callback invoked for it. Hence we do not need to add additional
polling for I/O qpairs in the NVMe bdev module.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I6e0aadcfd98e5cb77b362ef1a79e0eca2985f36e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11112
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2022-03-21 10:49:11 +00:00
Shuhei Matsumoto
cfe11bd1db nvme: Factor out operations done after disconnect qpair completes
This is preparation for making nvme_transport_ctrlr_disconnect_qpair()
asynchronous.

For nvme_transport_ctrlr_disconnect_qpair(), factor out the operations
performed after returning from the transport-specific
ctrlr_disconnect_qpair() into a helper function,
nvme_transport_ctrlr_disconnect_qpair_done().

Then move nvme_transport_ctrlr_disconnect_qpair_done() to the end of
the transport-specific ctrlr_disconnect_qpair().

Additionally, remove the operation that overwrites the qpair state to
DISCONNECTED from nvme_transport_connect_qpair_fail(), because it is
duplicated and nvme_transport_ctrlr_disconnect_qpair() is responsible
for making the qpair disconnected even when it completes asynchronously.

Change-Id: I9c8faa7039d306d3e31a8f51826755ce8840a8aa
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10851
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
2022-03-21 10:49:11 +00:00
Shuhei Matsumoto
1285481917 nvme: Free I/O qpair now even if it is in poll group completion
spdk_nvme_poll_group has followed spdk_nvme_qpair in how it processes
I/O qpair deletion inside a completion context.

spdk_nvme_qpair_process_completions() accesses the qpair after
returning from nvme_transport_qpair_process_completions(), so deferring
the deletion there is reasonable.

On the other hand, if spdk_nvme_poll_group_process_completions()
can execute spdk_nvme_ctrlr_free_io_qpair() inside a completion
context, the target qpair is guaranteed to be deleted after returning
from spdk_nvme_ctrlr_free_io_qpair() and is not accessed anymore in
spdk_nvme_poll_group_process_completions().

Remove the two variables, in_completion_context and num_qpairs_to_delete,
of spdk_nvme_transport_poll_group and the related code.

This change is necessary to support the following case.

In the NVMe bdev module, a nvme_qpair has a qpair and a poll_group
channel. disconnected_qpair_cb calls spdk_nvme_ctrlr_free_io_qpair()
for the qpair and spdk_put_io_channel() on the poll_group channel.
spdk_nvme_ctrlr_free_io_qpair() is executed only after unwinding the
stack, but spdk_put_io_channel() is executed immediately. The callback
to spdk_put_io_channel() calls spdk_nvme_poll_group_destroy(). However,
spdk_nvme_ctrlr_free_io_qpair() has not been executed yet, so
spdk_nvme_poll_group_destroy() fails.
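
A sketch of the bdev module pattern this unblocks; nvme_poll_group and its
ch member are hypothetical names for the module's context:

    static void
    disconnected_qpair_cb(struct spdk_nvme_qpair *qpair, void *poll_group_ctx)
    {
        struct nvme_poll_group *group = poll_group_ctx;

        /* With this change, the qpair is freed right here, inside the
         * poll group's completion context... */
        spdk_nvme_ctrlr_free_io_qpair(qpair);

        /* ...so when the io_channel destroy callback later calls
         * spdk_nvme_poll_group_destroy(), no qpair is left behind. */
        spdk_put_io_channel(group->ch);
    }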

Update the corresponding stub in the unit tests accordingly.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Icd1f1daf049c6c7ffb28790fe87989a1060f8952
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11496
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2022-03-15 09:05:09 +00:00
Shuhei Matsumoto
34eea269f5 nvme: Assume poll_group_disconnect_qpair() succeeds if qpair is in connected_qpairs
poll_group_disconnect_qpair() is now used in only a single place,
and transport_poll_group_disconnect_qpair() always returns 0 for all
transports.

Let's remove the unnecessary processing of the return code.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I45d7f8cea2117b3ec00028df234d1eb9ecc65713
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10677
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
2022-01-19 08:44:09 +00:00
Shuhei Matsumoto
7ae79a38a5 nvme: Limit spdk_nvme_poll_group_remove() to use only for disconnected qpairs
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I3c06c41664ee757423641474141439f9c32fc0b6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10671
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Monica Kenguva <monica.kenguva@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2022-01-19 08:44:09 +00:00
Changpeng Liu
0af4a7cd84 nvme: abort outstanding requests case by case
For the DSM command, the NVMe drive may take a long time to finish it.
If we set a small timeout value for the DSM command, the bdev/nvme module
will try to reset the IO queue pair when the timeout happens.
In `spdk_nvme_ctrlr_free_io_qpair`, we abort the outstanding
IO requests first; then in `nvme_pcie_ctrlr_delete_io_qpair`,
we poll the CQ for any requests that have been completed by
the NVMe controller. If there are NVMe completions in the CQ,
we complete them again, and thus double completions happen.

Here we rename `nvme_qpair_abort_reqs` to `nvme_qpair_abort_all_queued_reqs`,
so the common layer will just abort queued requests, and let each
transport abort outstanding requests case by case.

Fix #2233.

Change-Id: Icae6214239160c615418cb514fc51cfe77b59211
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10233
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2021-11-22 08:35:35 +00:00
Alexey Marchuk
9381d8d399 nvme: Update spdk_nvme_ctrlr_get_memory_domain
Allow returning more than one memory domain.
This change aligns the bdev and nvme APIs and provides
more flexibility for custom transports.
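
A usage sketch, assuming the updated call fills a caller-provided array and
returns the number of domains, matching the
spdk_nvme_ctrlr_get_memory_domains() form in later SPDK releases:

    struct spdk_memory_domain *domains[8];
    int n;

    /* n is the number of memory domains on success,
     * or a negative errno on failure. */
    n = spdk_nvme_ctrlr_get_memory_domains(ctrlr, domains, 8);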

Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Ica9b12ad8463c361be6cb62ee2c0513eec0b486d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9546
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2021-09-24 07:37:45 +00:00
Konrad Sztyber
1bea880598 nvme: asynchronous register operations
This patch introduces asynchronous versions of the ctrlr_(get|set)_reg
functions.  Not all transports need to define them - for those where it
doesn't make sense (e.g. PCIe), the transport layer will call the
synchronous API and queue the callback to be executed during the next
process_completions call.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I2e78e72b5eba58340885381cb279f3c28e7995ec
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8607
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2021-09-16 07:16:52 +00:00
Alexey Marchuk
a422d8b06f nvme: Add API to get SPDK memory domain per nvme controller
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I6db64c7075b1337b1489b2716fc686a6bed595e3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7239
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
2021-08-20 07:26:10 +00:00
Konrad Sztyber
3ada37faa3 nvme: use poll_group_process_completions in connect_qpair
If a qpair is part of a poll group and it's not configured in async
mode, it should use the poll group's process_completions variant.

Additionally, connecting qpairs to the poll group was moved up, so that
qpairs are already on the connected qpairs queue when waiting for the
connection to complete.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I08f75bd61a566d1ab60029b6202d9337df75733f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9074
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Monica Kenguva <monica.kenguva@intel.com>
2021-08-18 08:13:39 +00:00
Konrad Sztyber
5263f0a12f nvme: extract qpair connect failure into a separate function
In the next patch, the qpair is polled from a poll group and needs a
disconnect callback, which should also fail the qpair, so it makes sense
to have a separate function doing that.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ied76431520962b25220027be829a4609afb6bbda
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9157
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
2021-08-13 07:27:07 +00:00
Monica Kenguva
771f65bb1f nvme: asynchronous create io qpair
The async_mode option is currently supported in the PCIe transport layer
to create I/O qpairs asynchronously. The user polls the io_qpair for
completions; after the create-CQ and create-SQ commands complete in
order, the pqpair is set to the READY state. I/O submitted before the
qpair is ready is queued internally. Currently, other transports only
support synchronous I/O qpair creation.
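
A usage sketch with the public qpair options helpers; error handling
trimmed:

    struct spdk_nvme_io_qpair_opts opts;
    struct spdk_nvme_qpair *qpair;

    spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
    opts.async_mode = true;   /* PCIe transport only, at this point */

    qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));

    /* Polling drives the create-CQ/create-SQ completions; I/O submitted
     * before the qpair reaches READY is queued internally. */
    spdk_nvme_qpair_process_completions(qpair, 0);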

Signed-off-by: Monica Kenguva <monica.kenguva@intel.com>
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: Ib2f9043872bd5602274e2508cf1fe9ff4211cabb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8911
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
2021-08-13 07:27:07 +00:00
Monica Kenguva
455a5d7821 nvme/pcie: Create queue pairs asynchronously
The generic transport layer still does a busy wait, but at least
the logic in the PCIe transport now creates the queue pair
asynchronously.

Signed-off-by: Monica Kenguva <monica.kenguva@intel.com>
Change-Id: I9669ccb81a90ee0a36d3f5512bc49c503923b293
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8910
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
2021-08-09 08:39:28 +00:00
Ben Walker
ea0aaf5e85 nvme: Transports now set qpair state to NVME_QPAIR_CONNECTED inside .ctrlr_connect_qpair

Previously this was assumed to be a synchronous process so the generic
layer transport code updated the state after .ctrlr_connect_qpair
returned. In preparation for making this support asynchronous mode,
shift that responsibility down into the individual transports.

While none of the transports actually do this asynchronously, insert a
busy wait in nvme_transport_ctrlr_connect_qpair to wait for the qpair to
exit the CONNECTING state. None of the upper layer code can
correctly handle a transport doing this asynchronously, so the
busy wait will cover that.
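
Roughly, the generic layer's busy wait looks like this (a sketch;
nvme_qpair_get_state() is internal to the driver):

    rc = transport->ops.ctrlr_connect_qpair(ctrlr, qpair);
    if (rc != 0) {
        return rc;
    }

    /* The transport now owns the CONNECTING -> CONNECTED transition;
     * spin here until it happens. */
    while (nvme_qpair_get_state(qpair) == NVME_QPAIR_CONNECTING) {
        rc = spdk_nvme_qpair_process_completions(qpair, 0);
        if (rc < 0) {
            return rc;
        }
    }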

Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I3c1a5c115264ffcb87e549765d891d796e0c81fe
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8909
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Monica Kenguva <monica.kenguva@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2021-07-28 07:04:00 +00:00
Jim Harris
4246e79c04 nvme: change nvme_transport_ctrlr_delete_io_qpair to void
Returning an error from this function is not useful - there
is nothing the caller can do with that information. So
change the return value to void.  Also add ERRLOG and assert
if a transport actually returns a non-zero status, to
force the transport implementer (which must be an out-of-tree
transport) to make changes as necessary.
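
A sketch of the resulting wrapper; the transport lookup is omitted:

    void
    nvme_transport_ctrlr_delete_io_qpair(struct spdk_nvme_ctrlr *ctrlr,
                                         struct spdk_nvme_qpair *qpair)
    {
        /* transport located from the ctrlr's trid (lookup omitted) */
        int rc = transport->ops.ctrlr_delete_io_qpair(ctrlr, qpair);

        if (rc != 0) {
            SPDK_ERRLOG("transport %s returned non-zero from "
                        "ctrlr_delete_io_qpair\n", transport->ops.name);
            assert(false);
        }
    }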

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I402afec045265db178af821d25b99a6dbe066eab
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8659
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2021-07-07 07:27:40 +00:00
Jim Harris
b333f00627 nvme: save last transport_failure_reason in transport
If a reconnect fails, we restore the original
transport_failure_reason after we're done with
the failed reconnect.  Save the original reason
in the qpair itself rather than in a local variable,
to facilitate upcoming changes where connect will
be asynchronous.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I20ff43fc687a379aa5c930e17cf3ff8d730320be
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8116
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2021-06-04 20:03:25 +00:00
Krishna Kanth Reddy
89858bbf5d nvme/pcie: Add support for Persistent Memory Region (PMR)
Implemented functions to enable, disable, map and unmap the PMR.
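
A usage sketch with the four public PMR calls; error handling trimmed:

    size_t size;
    void *pmr_buf;

    /* Enable the PMR, then map the whole region;
     * size receives the mapped length. */
    spdk_nvme_ctrlr_enable_pmr(ctrlr);
    pmr_buf = spdk_nvme_ctrlr_map_pmr(ctrlr, &size);

    /* ... use pmr_buf for data transfers ... */

    spdk_nvme_ctrlr_unmap_pmr(ctrlr);
    spdk_nvme_ctrlr_disable_pmr(ctrlr);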

Signed-off-by: Krishna Kanth Reddy <krish.reddy@samsung.com>
Change-Id: I580e0b5060cefe1230c3db1361aee1957db457b2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6559
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2021-04-22 20:10:21 +00:00
Alexey Marchuk
e966937625 nvme: Add functions to get/free poll group statistics
These are interface functions that can be used by
an application, e.g. spdk_nvme_perf, or by the bdev_nvme
library. The next patches will add usage of these
functions.
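
A usage sketch of the pair, assuming it follows the get/free shape named in
the title:

    struct spdk_nvme_poll_group_stat *stats;

    if (spdk_nvme_poll_group_get_stats(group, &stats) == 0) {
        /* ... inspect per-transport statistics ... */
        spdk_nvme_poll_group_free_stats(group, stats);
    }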

Change-Id: I33b88e0e713c2ea5967f9241885e3257c5070577
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6300
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2021-04-13 21:30:52 +00:00
Alexey Marchuk
3fcda8e779 nvme: Add transport interface to get/free stats
The two new API functions allow getting and freeing stats
per poll group. A new function to get the transport name
has been added to report not only the transport type but
also the name.
For now only the RDMA transport reports statistics;
other transports will be added later.

Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I2824cb474fde5fa859cf8196dabac2c48c05709c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6299
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2021-04-13 21:30:52 +00:00
Ziye Yang
e749b5d3ec nvme: Add the interface to get the optimal polling group
This patch adds the spdk_nvme_poll_group_get_optimal
public API.

Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Iee34c89e0e1ff1f81167b18e198c144ca28f71de
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3311
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2021-02-04 08:30:54 +00:00
yidong0635
10717b577c nvme/nvme_transport: Unify returns in disconnect and connect.
Here "return rc == -EINPROGRESS ? 0 : rc;"
They  are the same meaning in these two functions.
Keep the comments here. This makes more clear to readers.

Signed-off-by: yidong0635 <dongx.yi@intel.com>
Change-Id: I8590de3f0fe27337163ee8b02ea63e166f1bbe7c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/5689
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
2020-12-28 13:29:35 +00:00
Tomasz Zawadzki
2172c432cf log: simplify SPDK_LOG_REGISTER_COMPONENT
This patch removes the string argument from the register component
macro. All instances in libs or hardcoded in apps were updated.

Starting with this patch, the literal passed to register
serves as the name for the flag.

All instances of SPDK_LOG_* were replaced with just *
in lowercase.
No actual name change for flags occurs in this patch.

Affected are SPDK_LOG_REGISTER_COMPONENT() and
SPDK_*LOG() macros.

Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I002b232fde57ecf9c6777726b181fc0341f1bb17
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/4495
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Mellanox Build Bot
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Broadcom CI
2020-10-14 08:00:35 +00:00
Seth Howell
0b1799cd98 nvme/transport: add assert for transport.
Silences a Klocwork (KW) error.

Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Ifd8d6088a22de7c230d48751be2b3991d0649778
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3553
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
2020-07-29 07:37:26 +00:00
Jim Harris
751e2812bc nvme: do not abort reqs in multi-process cleanup path
When a process cleans up IO qpairs from another crashed
process in a multi-process environment, we must not try to
abort reqs for that IO qpair.  Any reqs will contain callbacks
for the crashed process which we must not try to execute in
a different process.

Fixes issue #1509.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5e58cce7bdb86e3feb4084733815c086901f867e

Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3536
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-07-27 22:42:07 +00:00
Shuhei Matsumoto
f2bd635ecf lib/nvme: Add qpair_iterate_requests() to iterate the common operation among transports
To abort requests whose cb_arg matches, add child abort requests greedily.
Iterating all outstanding requests is unique to each transport, but
adding a child abort is common among transports, and adding a child abort
is replaceable by other operations.

Hence add a qpair_iterate_requests() function to the function pointer table
of the transport, and pass the operation performed during the iteration
as a parameter.

In each transport, the implementation of qpair_iterate_requests() uses
TAILQ_FOREACH_SAFE() for potential future use cases.
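
A sketch of the ops table entry and the per-transport pattern; the callback
type and list field names are assumptions:

    /* Function pointer table entry: the operation applied to each
     * outstanding request is passed in, so child abort is just one use. */
    int (*qpair_iterate_requests)(struct spdk_nvme_qpair *qpair,
                                  int (*iter_fn)(struct nvme_request *req,
                                                 void *arg),
                                  void *arg);

    /* Per-transport implementation pattern: */
    TAILQ_FOREACH_SAFE(req, &tqpair->outstanding_reqs, link, tmp) {
        rc = iter_fn(req, arg);
        if (rc != 0) {
            return rc;
        }
    }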

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ic70d1bf2613fce2566eade26335ceed731f66a89
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2038
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
2020-07-08 07:54:01 +00:00
Seth Howell
8bef6f0bdf lib/nvme: rdma poll group with shared cq.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Ifde29f633f09cccbebfdcde5ab2f96d9590449f1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1167
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
2020-06-04 07:20:16 +00:00
GangCao
2234bb665d Transport: allocate a global array of transports
Currently the new transport is dynamically allocated and looks like
not freed when the application exits. Trying to use the
__attribute__((destructor)) function to free the allocated memory,
it will not work in the case of user created thread as this function
is called right after the "main" function while other operations
may be still ongoing.

In this case, add a global array of transports.

Change-Id: I610b1e8114ba2e68abbd09ea5e02a9abce055e70
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2415
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-05-15 08:11:54 +00:00
Seth Howell
fe5e1db68e nvme/tcp: add naive implementation of poll_group api
This implementation simply loops over qpairs calling process_completions.
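
A sketch of the naive loop; the queue and field names are assumptions:

    int64_t total = 0;
    int32_t rc;

    STAILQ_FOREACH_SAFE(qpair, &tgroup->connected_qpairs,
                        poll_group_stailq, tmp) {
        rc = spdk_nvme_qpair_process_completions(qpair,
                                                 completions_per_qpair);
        if (rc < 0) {
            disconnected_qpair_cb(qpair, tgroup->group->ctx);
        } else {
            total += rc;
        }
    }
    return total;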

Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Ia1f59c13444703e00c6b769d378874f48b9ef03e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/627
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
2020-04-24 16:36:03 +00:00
Seth Howell
a8f18b0da8 lib/nvme: set in_completion_context in poll group.
This needs to be done for all qpairs in the poll group.

Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Ic3a84713a3f9941f90613152328d06ac8c1f586b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1954
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2020-04-24 16:36:03 +00:00
Seth Howell
fc86e792e4 lib/nvme: switch poll group to use connect/disconnect semantics.
This makes more sense within the context of the nvme driver and
helps us avoid the awkward situation of getting a failed_qp callback
on a qpair that simply hasn't been connected.

Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Ibac83c87c514ddcf7bd360af10fab462ae011112
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1734
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
2020-04-22 19:06:26 +00:00
Seth Howell
6189c0ceb7 lib/nvme: abort all requests when disconnecting a qpair.
By aborting all requests from every qpair when it is disconnected,
we can completely avoid having to abort requests when we enable the
qpair since nothing will be left enabled.

Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Iba3bd866405dd182b72285def0843c9809f6500e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1788
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-04-22 19:06:26 +00:00
Seth Howell
6338af34fc lib/nvme: handle qpair state in transport layer.
The state should be changed and checked by the transport
layer. All transports should follow the same list of steps
when disconnecting/reconnecting.

Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: If2647624345f2c70f78a20bba4e2206d2762f120
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1853
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-04-22 19:06:26 +00:00
Seth Howell
9649ee09fa lib/nvme: rename NVME_QPAIR_DISABLED
This variable really indicates when a qpair is
no longer connected. So NVME_QPAIR_DISCONNECTED is
actually much more accurate.

Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Ia480d94f795bb0d8f5b4eff9f2857d6fe8ea1b34
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1850
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2020-04-22 19:06:26 +00:00
Ben Walker
7b28450b3f nvme: Allow users to reserve the CMB for data without mapping it
Separate these two operations into different functions. It is
possible that a CMB may not be visible from the CPU, but still
be present and have data transferred to it by some other DMA
engine. Generalize the API to handle that case.

Change-Id: Ifcd282af0db734fe4a6ef2283ae8e8933d017809
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/787
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2020-04-16 08:14:18 +00:00
Ben Walker
265a8436f4 nvme: Change mapping semantics of controller memory buffer
Instead of creating an allocator where the driver manages the space,
and since using the CMB for queues and data has already been
disallowed, just create functions to map and unmap the entire CMB.
The user can manage the space.
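
A usage sketch combining the reserve step from the commit above with the
map/unmap pair; error handling trimmed:

    size_t size;
    void *cmb_buf;

    /* Reserve the CMB for data without mapping it... */
    if (spdk_nvme_ctrlr_reserve_cmb(ctrlr) < 0) {
        return;
    }

    /* ...then map the entire CMB; the caller carves up the space. */
    cmb_buf = spdk_nvme_ctrlr_map_cmb(ctrlr, &size);
    if (cmb_buf != NULL) {
        /* ... place data (or hand it to another DMA engine) ... */
        spdk_nvme_ctrlr_unmap_cmb(ctrlr);
    }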

Change-Id: I023994deda3b517e14d2ba464c7375bf22b58456
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/785
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI
2020-04-16 08:14:18 +00:00
Seth Howell
b9a187977d nvme: add poll group handling to qpair path.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: I9116cdcb5bbeb16ee74decee5586bda9a42090aa
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/633
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2020-04-07 08:38:40 +00:00
Seth Howell
c998c6c69e nvme: add API for qpair poll groups.
This API will allow us to simplify the polling mechanism for qpairs on a single
thread. It also paves the way for transport-specific aggregation of
qpair polling to increase performance.

The generic implementation is included. The transport-specific calls
have yet to be implemented.
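
A usage sketch of the generic API; the create signature has varied across
SPDK versions, so a single context argument is assumed here:

    struct spdk_nvme_poll_group *group;

    group = spdk_nvme_poll_group_create(NULL /* ctx */);
    spdk_nvme_poll_group_add(group, qpair);

    /* One call polls every qpair in the group on this thread. */
    spdk_nvme_poll_group_process_completions(group, 0,
                                             disconnected_qpair_cb);

    spdk_nvme_poll_group_remove(group, qpair);
    spdk_nvme_poll_group_destroy(group);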

Change-Id: If07b4170b2be61e4690847c993ec3bde9560b0f0
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/579
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2020-04-07 08:38:40 +00:00
Seth Howell
f146bbe42d lib/nvme: move common connect code into transport shim
This gets rid of some duplicate lines of code.

Change-Id: I24d4864921f6030672f3640b33f88f37a9e8175a
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1136
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
2020-03-06 10:29:21 +00:00
Seth Howell
b1daf62be5 nvme: add internal function to iterate over transports.
This will come in handy in the new poll group API.

Change-Id: I9cf14082270b28b35f4a440c5cb2471c81e374ba
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/578
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-02-18 08:05:08 +00:00
Seth Howell
19260848f6 nvme: publicly declare spdk_nvme_transport
This will be useful in the upcoming spdk_nvme_poll_group api.

Change-Id: Id83340a2ce9887817312f5aac38db4de8c588974
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/577
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-02-18 08:05:08 +00:00