Compare commits

20 commits:

06bba16f0a
1a527e501f
9866a31b49
fda3aafd14
ab691e3d5a
71469b64ba
fbf9098c0c
fbe0a864b2
9d504f223a
f758696fa6
6d2247c2e2
97b095904a
89f55134f6
84de31e494
297290f4b5
b1deebd18c
d9a6e4e220
d963a3313e
4be65dbf65
459d3e8f07

CHANGELOG.md (125 lines changed)
@@ -1,6 +1,26 @@
 # Changelog
 
-## v20.07: (Upcoming Release)
+## v20.07.1: (Upcoming Release)
+
+## v20.07:
+
+### accel
+
+A new API was added, `spdk_accel_get_capabilities`, that allows applications to
+query the capabilities of the currently enabled accel engine back-end.
+
+A new capability, CRC-32C, was added via `spdk_accel_submit_crc32c`.
+
+The software accel engine implementation has added support for CRC-32C.
+
+A new capability, compare, was added via `spdk_accel_submit_compare`.
+
+The software accel engine implementation has added support for compare.
+
+Several APIs were added to `accel_engine.h` to support batched submission
+of operations.
+
+Several APIs were added to `accel_engine.h` to support dualcast operations.
+
 ### accel_fw
@@ -16,16 +36,54 @@ The accel_fw was updated to support compare, dualcast, crc32c.
 The accel_fw introduced batching support for all commands in all plug-ins.
 See docs for detailed information.
 
+### bdev
+
+A new API `spdk_bdev_abort` has been added to submit abort requests to abort all I/Os
+whose callback context matches the bdev on the given channel.
+
+### build
+
+The fio plugins now compile to `build/fio` and are named `spdk_bdev` and `spdk_nvme`.
+Existing fio configuration files will need to be updated.
+
+### dpdk
+
+Updated DPDK submodule to DPDK 20.05.
+
+### env
+
+Several new APIs have been added to provide greater flexibility in registering and
+accessing polled mode PCI drivers. See `env.h` for more details.
+
 ### idxd
 
 The idxd library and plug-in module for the accel_fw were updated to support
 all accel_fw commands as well as batching. Batching is supported both
 through the library and the plug-in module.
 
+IDXD engine support for CRC-32C has been added.
+
 ### ioat
 
 A new API `spdk_ioat_get_max_descriptors` was added.
 
+### nvme
+
+An `opts_size` element was added in the `spdk_nvme_ctrlr_opts` structure
+to solve the ABI compatibility issue between different SPDK versions.
+
+A new API `spdk_nvme_ctrlr_cmd_abort_ext` has been added to abort previously submitted
+commands whose callback argument matches.
+
+Convenience functions `spdk_nvme_print_command` and `spdk_nvme_print_completion` were added
+to the public API.
+
+A new function, `spdk_nvme_cuse_update_namespaces`, updates the cuse representation of an NVMe
+controller.
+
+A new function `qpair_iterate_requests` has been added to the nvme transport interface. All
+implementations of the transport interface will have to implement that function.
+
 ### nvmf
 
 The NVMe-oF target no longer supports connecting scheduling configuration and instead
@@ -40,51 +98,6 @@ takes a function pointer as an argument. Instead, transports should call
 The NVMe-oF target now supports aborting any submitted NVM or Admin command. Previously,
 the NVMe-oF target could abort only Asynchronous Event Request commands.
 
-### nvme
-
-Add `opts_size` in `spdk_nvme_ctrlr_opts` structure in order to solve the compatibility issue
-for different ABI versions.
-
-A new API `spdk_nvme_ctrlr_cmd_abort_ext` has been added to abort previously submitted
-commands whose callback argument matches.
-
-### bdev
-
-A new API `spdk_bdev_abort` has been added to submit abort requests to abort all I/Os
-whose callback context matches the bdev on the given channel.
-
-### RPC
-
-Command line parameters `-r` and `--rpc-socket` will no longer accept TCP ports. The RPC server
-must now be started on a Unix domain socket. Exposing RPC on the network, as well as providing
-proper authentication (if needed), is now the responsibility of the user.
-
-### build
-
-The fio plugins now compile to `build/fio` and are named `spdk_bdev` and `spdk_nvme`.
-Existing fio configuration files will need to be updated.
-
-### accel
-
-A new API was added, `spdk_accel_get_capabilities`, that allows applications to
-query the capabilities of the currently enabled accel engine back-end.
-
-A new capability, CRC-32C, was added via `spdk_accel_submit_crc32c`.
-
-The software accel engine implementation has added support for CRC-32C.
-
-A new capability, compare, was added via `spdk_accel_submit_compare`.
-
-The software accel engine implementation has added support for compare.
-
-### dpdk
-
-Updated DPDK submodule to DPDK 20.05.
-
-### idxd
-
-IDXD engine support for CRC-32C has been added.
-
 ### rdma
 
 A new `rdma` library has been added. It is an abstraction layer over different RDMA providers.
@@ -95,13 +108,20 @@ Using mlx5_dv requires libmlx5 installed on the system.
 ### rpc
 
 Parameter `-p` or `--max-qpairs-per-ctrlr` of `nvmf_create_transport` RPC command accepted by the
-rpc.py script is deprecated, new parameter `-m` or `--max-io-qpairs-per-ctrlr` is added.
+rpc.py script is deprecated; the new parameter `-m` or `--max-io-qpairs-per-ctrlr` was added.
 
-Parameter `max_qpairs_per_ctrlr` of `nvmf_create_transport` RPC command accepted by the NVMF target
-is deprecated, new parameter `max_io_qpairs_per_ctrlr` is added.
-
 Added `sock_impl_get_options` and `sock_impl_set_options` RPC methods.
 
+Command line parameters `-r` and `--rpc-socket` will no longer accept TCP ports. The RPC server
+must now be started on a Unix domain socket. Exposing RPC on the network, as well as providing
+proper authentication (if needed), is now the responsibility of the user.
+
+The `bdev_set_options` RPC has a new option, `bdev_auto_examine`, to control the auto examine function
+of bdev modules.
+
+New RPCs `sock_impl_get_options` and `sock_impl_set_options` have been added to expose new socket features.
+See the `sock` section for more details.
+
 ### sock
 
 Added `spdk_sock_impl_get_opts` and `spdk_sock_impl_set_opts` functions to set/get socket layer configuration
@@ -119,6 +139,11 @@ New option is used only in posix implementation.
 Added `enable_zerocopy_send` socket layer option to allow disabling of zero copy flow on send.
 New option is used only in posix implementation.
 
+### util
+
+Some previously exposed CRC32 functions have been removed from the public API -
+`spdk_crc32_update`, `spdk_crc32_table_init`, and the `spdk_crc32_table` struct.
+
 ### vhost
 
 The function `spdk_vhost_blk_get_dev` has been removed.
@@ -173,6 +173,7 @@ if [ $SPDK_RUN_FUNCTIONAL_TEST -eq 1 ]; then
     if [ $SPDK_TEST_BLOCKDEV -eq 1 ]; then
         run_test "blockdev_general" test/bdev/blockdev.sh
         run_test "bdev_raid" test/bdev/bdev_raid.sh
+        run_test "bdevperf_config" test/bdev/bdevperf/test_config.sh
         if [[ $(uname -s) == Linux ]]; then
             run_test "spdk_dd" test/dd/dd.sh
         fi
@@ -803,6 +803,7 @@ INPUT += \
     accel_fw.md \
     applications.md \
     bdev.md \
+    bdevperf.md \
     bdev_module.md \
     bdev_pg.md \
     blob.md \
doc/bdevperf.md (new file, 86 lines)

@@ -0,0 +1,86 @@
+# Using bdevperf application {#bdevperf}
+
+## Introduction
+
+bdevperf is an SPDK application that is used for performance testing
+of block devices (bdevs) exposed by the SPDK bdev layer. It is an
+alternative to the SPDK bdev fio plugin for benchmarking SPDK bdevs.
+In some cases, bdevperf can provide much lower overhead than the fio
+plugin, resulting in much better performance for tests using a limited
+number of CPU cores.
+
+bdevperf exposes a command line interface that allows users to specify
+SPDK framework options as well as testing options.
+Since SPDK 20.07, bdevperf supports a configuration file that is similar
+to FIO's. It allows the user to create jobs parameterized by
+filename, cpumask, blocksize, queuesize, etc.
+
+## Config file
+
+bdevperf's config file is similar to FIO's config file format.
+
+Below is an example config file that uses all available parameters:
+
+~~~{.ini}
+[global]
+filename=Malloc0:Malloc1
+bs=1024
+iosize=256
+rw=randrw
+rwmixread=90
+
+[A]
+cpumask=0xff
+
+[B]
+cpumask=[0-128]
+filename=Malloc1
+
+[global]
+filename=Malloc0
+rw=write
+
+[C]
+bs=4096
+iosize=128
+offset=1000000
+length=1000000
+~~~
+
+Jobs `[A]`, `[B]`, and `[C]` inherit default values from the `[global]`
+section residing above them. So in the example, job `[A]` inherits the
+`filename` value and uses both `Malloc0` and `Malloc1` bdevs as targets,
+job `[B]` overrides its `filename` value and uses `Malloc1`, and
+job `[C]` inherits the value `Malloc0` for its `filename`.
+
+Interaction with CLI arguments is not the same as in FIO, however.
+If bdevperf receives a CLI argument, it overrides the values
+of the corresponding parameter for all `[global]` sections of the config file.
+So if the example config is used, specifying the `-q` argument
+will make jobs `[A]` and `[B]` use its value.
+
+Below is a full list of supported parameters with descriptions.
+
+Param     | Default           | Description
+--------- | ----------------- | -----------
+filename  |                   | Bdevs to use, separated by ":"
+cpumask   | Maximum available | CPU mask. Format is defined at @ref cpu_mask
+bs        |                   | Block size (io size)
+iodepth   |                   | Queue depth
+rwmixread | `50`              | Percentage of a mixed workload that should be reads
+offset    | `0`               | Start I/O at the provided offset on the bdev
+length    | 100% of bdev size | End I/O at `offset`+`length` on the bdev
+rw        |                   | Type of I/O pattern
+
+Available rw types:
+- read
+- randread
+- write
+- randwrite
+- verify
+- reset
+- unmap
+- write_zeroes
+- flush
+- rw
+- randrw
@@ -2,3 +2,4 @@
 
 - @subpage spdkcli
 - @subpage nvme-cli
+- @subpage bdevperf
@@ -51,10 +51,6 @@
 
 #ifdef SPDK_CONFIG_URING
 #include <liburing.h>
-
-#ifndef __NR_sys_io_uring_enter
-#define __NR_sys_io_uring_enter 426
-#endif
 #endif
 
 #if HAVE_LIBAIO
@@ -310,25 +306,19 @@ uring_check_io(struct ns_worker_ctx *ns_ctx)
     struct perf_task *task;
 
     to_submit = ns_ctx->u.uring.io_pending;
-    to_complete = ns_ctx->u.uring.io_inflight;
 
     if (to_submit > 0) {
         /* If there are I/O to submit, use io_uring_submit here.
          * It will automatically call spdk_io_uring_enter appropriately. */
         ret = io_uring_submit(&ns_ctx->u.uring.ring);
-        ns_ctx->u.uring.io_pending = 0;
-        ns_ctx->u.uring.io_inflight += to_submit;
-    } else if (to_complete > 0) {
-        /* If there are I/O in flight but none to submit, we need to
-         * call io_uring_enter ourselves. */
-        ret = syscall(__NR_sys_io_uring_enter, ns_ctx->u.uring.ring.ring_fd, 0,
-                      0, IORING_ENTER_GETEVENTS, NULL, 0);
-    }
-
         if (ret < 0) {
             return;
         }
+        ns_ctx->u.uring.io_pending = 0;
+        ns_ctx->u.uring.io_inflight += to_submit;
+    }
+
+    to_complete = ns_ctx->u.uring.io_inflight;
     if (to_complete > 0) {
         count = io_uring_peek_batch_cqe(&ns_ctx->u.uring.ring, ns_ctx->u.uring.cqes, to_complete);
         ns_ctx->u.uring.io_inflight -= count;
@@ -353,7 +343,7 @@ uring_verify_io(struct perf_task *task, struct ns_entry *entry)
 static int
 uring_init_ns_worker_ctx(struct ns_worker_ctx *ns_ctx)
 {
-    if (io_uring_queue_init(g_queue_depth, &ns_ctx->u.uring.ring, IORING_SETUP_IOPOLL) < 0) {
+    if (io_uring_queue_init(g_queue_depth, &ns_ctx->u.uring.ring, 0) < 0) {
         SPDK_ERRLOG("uring I/O context setup failure\n");
         return -1;
     }
@@ -166,6 +166,17 @@ struct spdk_accel_batch *spdk_accel_batch_create(struct spdk_io_channel *ch);
 int spdk_accel_batch_submit(struct spdk_io_channel *ch, struct spdk_accel_batch *batch,
                             spdk_accel_completion_cb cb_fn, void *cb_arg);
 
+/**
+ * Synchronous call to cancel a batch sequence. In some cases prepared commands will be
+ * processed if they cannot be cancelled.
+ *
+ * \param ch I/O channel associated with this call.
+ * \param batch Handle provided when the batch was started with spdk_accel_batch_create().
+ *
+ * \return 0 on success, negative errno on failure.
+ */
+int spdk_accel_batch_cancel(struct spdk_io_channel *ch, struct spdk_accel_batch *batch);
+
 /**
  * Synchronous call to prepare a copy request into a previously initialized batch
  * created with spdk_accel_batch_create(). The callback will be called when the copy
@@ -468,13 +468,6 @@ uint32_t spdk_env_get_core_count(void);
  */
 uint32_t spdk_env_get_current_core(void);
 
-/**
- * Get the index of the primary dedicated CPU core for this application.
- *
- * \return the index of the primary dedicated CPU core.
- */
-uint32_t spdk_env_get_primary_core(void);
-
 /**
  * Get the index of the first dedicated CPU core for this application.
  *
@@ -172,6 +172,16 @@ struct idxd_batch *spdk_idxd_batch_create(struct spdk_idxd_io_channel *chan);
 int spdk_idxd_batch_submit(struct spdk_idxd_io_channel *chan, struct idxd_batch *batch,
                            spdk_idxd_req_cb cb_fn, void *cb_arg);
 
+/**
+ * Cancel a batch sequence.
+ *
+ * \param chan IDXD channel to submit request.
+ * \param batch Handle provided when the batch was started with spdk_idxd_batch_create().
+ *
+ * \return 0 on success, negative errno on failure.
+ */
+int spdk_idxd_batch_cancel(struct spdk_idxd_io_channel *chan, struct idxd_batch *batch);
+
 /**
  * Synchronous call to prepare a copy request into a previously initialized batch
  * created with spdk_idxd_batch_create(). The callback will be called when the copy
@@ -54,7 +54,7 @@
  * Patch level is incremented on maintenance branch releases and reset to 0 for each
  * new major.minor release.
  */
-#define SPDK_VERSION_PATCH 0
+#define SPDK_VERSION_PATCH 1
 
 /**
  * Version string suffix.
@@ -67,6 +67,7 @@ struct spdk_accel_engine {
                     spdk_accel_completion_cb cb_fn, void *cb_arg);
     int (*batch_submit)(struct spdk_io_channel *ch, struct spdk_accel_batch *batch,
                         spdk_accel_completion_cb cb_fn, void *cb_arg);
+    int (*batch_cancel)(struct spdk_io_channel *ch, struct spdk_accel_batch *batch);
     int (*compare)(struct spdk_io_channel *ch, void *src1, void *src2,
                    uint64_t nbytes, spdk_accel_completion_cb cb_fn, void *cb_arg);
     int (*fill)(struct spdk_io_channel *ch, void *dst, uint8_t fill,
@@ -231,6 +231,17 @@ spdk_accel_batch_get_max(struct spdk_io_channel *ch)
     return accel_ch->engine->batch_get_max();
 }
 
+/* Accel framework public API for when an app is unable to complete a batch sequence;
+ * it cancels with this API.
+ */
+int
+spdk_accel_batch_cancel(struct spdk_io_channel *ch, struct spdk_accel_batch *batch)
+{
+    struct accel_io_channel *accel_ch = spdk_io_channel_get_ctx(ch);
+
+    return accel_ch->engine->batch_cancel(accel_ch->ch, batch);
+}
+
 /* Accel framework public API for batch prep_copy function. All engines are
  * required to implement this API.
  */
@@ -791,6 +802,27 @@ sw_accel_batch_prep_crc32c(struct spdk_io_channel *ch, struct spdk_accel_batch *
     return 0;
 }
 
+static int
+sw_accel_batch_cancel(struct spdk_io_channel *ch, struct spdk_accel_batch *batch)
+{
+    struct sw_accel_op *op;
+    struct sw_accel_io_channel *sw_ch = spdk_io_channel_get_ctx(ch);
+
+    if ((struct spdk_accel_batch *)&sw_ch->batch != batch) {
+        SPDK_ERRLOG("Invalid batch\n");
+        return -EINVAL;
+    }
+
+    /* Cancel the batch items by moving them back to the op_pool. */
+    while ((op = TAILQ_FIRST(&sw_ch->batch))) {
+        TAILQ_REMOVE(&sw_ch->batch, op, link);
+        TAILQ_INSERT_TAIL(&sw_ch->op_pool, op, link);
+    }
+
+    return 0;
+}
+
 static int
 sw_accel_batch_submit(struct spdk_io_channel *ch, struct spdk_accel_batch *batch,
                       spdk_accel_completion_cb cb_fn, void *cb_arg)
@@ -927,6 +959,7 @@ static struct spdk_accel_engine sw_accel_engine = {
     .dualcast = sw_accel_submit_dualcast,
     .batch_get_max = sw_accel_batch_get_max,
     .batch_create = sw_accel_batch_start,
+    .batch_cancel = sw_accel_batch_cancel,
     .batch_prep_copy = sw_accel_batch_prep_copy,
     .batch_prep_dualcast = sw_accel_batch_prep_dualcast,
     .batch_prep_compare = sw_accel_batch_prep_compare,
@@ -16,6 +16,7 @@
     spdk_accel_batch_prep_fill;
     spdk_accel_batch_prep_crc32c;
     spdk_accel_batch_submit;
+    spdk_accel_batch_cancel;
     spdk_accel_submit_copy;
     spdk_accel_submit_dualcast;
     spdk_accel_submit_compare;
@@ -33,7 +33,6 @@
     spdk_mempool_lookup;
     spdk_env_get_core_count;
     spdk_env_get_current_core;
-    spdk_env_get_primary_core;
     spdk_env_get_first_core;
     spdk_env_get_last_core;
     spdk_env_get_next_core;
@@ -48,12 +48,6 @@ spdk_env_get_current_core(void)
     return rte_lcore_id();
 }
 
-uint32_t
-spdk_env_get_primary_core(void)
-{
-    return rte_get_master_lcore();
-}
-
 uint32_t
 spdk_env_get_first_core(void)
 {
@@ -926,6 +926,26 @@ _does_batch_exist(struct idxd_batch *batch, struct spdk_idxd_io_channel *chan)
     return found;
 }
 
+int
+spdk_idxd_batch_cancel(struct spdk_idxd_io_channel *chan, struct idxd_batch *batch)
+{
+    if (_does_batch_exist(batch, chan) == false) {
+        SPDK_ERRLOG("Attempt to cancel a batch that doesn't exist.\n");
+        return -EINVAL;
+    }
+
+    if (batch->remaining > 0) {
+        SPDK_ERRLOG("Cannot cancel batch, already submitted to HW.\n");
+        return -EINVAL;
+    }
+
+    TAILQ_REMOVE(&chan->batches, batch, link);
+    spdk_bit_array_clear(chan->ring_ctrl.user_ring_slots, batch->batch_num);
+    TAILQ_INSERT_TAIL(&chan->batch_pool, batch, link);
+
+    return 0;
+}
+
 int
 spdk_idxd_batch_submit(struct spdk_idxd_io_channel *chan, struct idxd_batch *batch,
                        spdk_idxd_req_cb cb_fn, void *cb_arg)
@@ -13,6 +13,7 @@
     spdk_idxd_batch_prep_compare;
     spdk_idxd_batch_submit;
     spdk_idxd_batch_create;
+    spdk_idxd_batch_cancel;
     spdk_idxd_batch_get_max;
     spdk_idxd_set_config;
     spdk_idxd_submit_compare;
@@ -563,7 +563,16 @@ spdk_nvme_ctrlr_free_io_qpair(struct spdk_nvme_qpair *qpair)
 
     /* Do not retry. */
     nvme_qpair_set_state(qpair, NVME_QPAIR_DESTROYING);
 
+    /* In the multi-process case, a process may call this function on a foreign
+     * I/O qpair (i.e. one that this process did not create) when that qpair's process
+     * exits unexpectedly. In that case, we must not try to abort any reqs associated
+     * with that qpair, since the callbacks will also be foreign to this process.
+     */
+    if (qpair->active_proc == nvme_ctrlr_get_current_process(ctrlr)) {
         nvme_qpair_abort_reqs(qpair, 1);
+    }
 
     nvme_robust_mutex_lock(&ctrlr->ctrlr_lock);
 
     nvme_ctrlr_proc_remove_io_qpair(qpair);
|
|||||||
int
|
int
|
||||||
nvme_transport_ctrlr_delete_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
|
nvme_transport_ctrlr_delete_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
|
||||||
{
|
{
|
||||||
return qpair->transport->ops.ctrlr_delete_io_qpair(ctrlr, qpair);
|
const struct spdk_nvme_transport *transport = nvme_get_transport(ctrlr->trid.trstring);
|
||||||
|
|
||||||
|
assert(transport != NULL);
|
||||||
|
|
||||||
|
/* Do not rely on qpair->transport. For multi-process cases, a foreign process may delete
|
||||||
|
* the IO qpair, in which case the transport object would be invalid (each process has their
|
||||||
|
* own unique transport objects since they contain function pointers). So we look up the
|
||||||
|
* transport object in the delete_io_qpair case.
|
||||||
|
*/
|
||||||
|
return transport->ops.ctrlr_delete_io_qpair(ctrlr, qpair);
|
||||||
}
|
}
|
||||||
|
|
||||||
int
|
int
|
||||||
|
@@ -403,6 +403,11 @@ struct spdk_nvmf_rdma_qpair {
 
     struct spdk_poller *destruct_poller;
 
+    /*
+     * io_channel which is used to destroy qpair when it is removed from poll group
+     */
+    struct spdk_io_channel *destruct_channel;
+
     /* List of ibv async events */
     STAILQ_HEAD(, spdk_nvmf_rdma_ibv_event_ctx) ibv_events;
@@ -910,6 +915,11 @@ nvmf_rdma_qpair_destroy(struct spdk_nvmf_rdma_qpair *rqpair)
 
     nvmf_rdma_qpair_clean_ibv_events(rqpair);
 
+    if (rqpair->destruct_channel) {
+        spdk_put_io_channel(rqpair->destruct_channel);
+        rqpair->destruct_channel = NULL;
+    }
+
     free(rqpair);
 }
@@ -3076,22 +3086,36 @@ nvmf_rdma_send_qpair_async_event(struct spdk_nvmf_rdma_qpair *rqpair,
                                  spdk_nvmf_rdma_qpair_ibv_event fn)
 {
     struct spdk_nvmf_rdma_ibv_event_ctx *ctx;
+    struct spdk_thread *thr = NULL;
+    int rc;
 
-    if (!rqpair->qpair.group) {
-        return EINVAL;
+    if (rqpair->qpair.group) {
+        thr = rqpair->qpair.group->thread;
+    } else if (rqpair->destruct_channel) {
+        thr = spdk_io_channel_get_thread(rqpair->destruct_channel);
+    }
+
+    if (!thr) {
+        SPDK_DEBUGLOG(SPDK_LOG_RDMA, "rqpair %p has no thread\n", rqpair);
+        return -EINVAL;
     }
 
     ctx = calloc(1, sizeof(*ctx));
     if (!ctx) {
-        return ENOMEM;
+        return -ENOMEM;
     }
 
     ctx->rqpair = rqpair;
     ctx->cb_fn = fn;
     STAILQ_INSERT_TAIL(&rqpair->ibv_events, ctx, link);
 
-    return spdk_thread_send_msg(rqpair->qpair.group->thread, nvmf_rdma_qpair_process_ibv_event,
-                                ctx);
+    rc = spdk_thread_send_msg(thr, nvmf_rdma_qpair_process_ibv_event, ctx);
+    if (rc) {
+        STAILQ_REMOVE(&rqpair->ibv_events, ctx, spdk_nvmf_rdma_ibv_event_ctx, link);
+        free(ctx);
+    }
+
+    return rc;
 }
 
 static void
@@ -3115,8 +3139,9 @@ nvmf_process_ib_event(struct spdk_nvmf_rdma_device *device)
 		SPDK_ERRLOG("Fatal event received for rqpair %p\n", rqpair);
 		spdk_trace_record(TRACE_RDMA_IBV_ASYNC_EVENT, 0, 0,
 				  (uintptr_t)rqpair->cm_id, event.event_type);
-		if (nvmf_rdma_send_qpair_async_event(rqpair, nvmf_rdma_handle_qp_fatal)) {
-			SPDK_ERRLOG("Failed to send QP_FATAL event for rqpair %p\n", rqpair);
+		rc = nvmf_rdma_send_qpair_async_event(rqpair, nvmf_rdma_handle_qp_fatal);
+		if (rc) {
+			SPDK_WARNLOG("Failed to send QP_FATAL event. rqpair %p, err %d\n", rqpair, rc);
 			nvmf_rdma_handle_qp_fatal(rqpair);
 		}
 		break;
@@ -3124,8 +3149,9 @@ nvmf_process_ib_event(struct spdk_nvmf_rdma_device *device)
 		/* This event only occurs for shared receive queues. */
 		rqpair = event.element.qp->qp_context;
 		SPDK_DEBUGLOG(SPDK_LOG_RDMA, "Last WQE reached event received for rqpair %p\n", rqpair);
-		if (nvmf_rdma_send_qpair_async_event(rqpair, nvmf_rdma_handle_last_wqe_reached)) {
-			SPDK_ERRLOG("Failed to send LAST_WQE_REACHED event for rqpair %p\n", rqpair);
+		rc = nvmf_rdma_send_qpair_async_event(rqpair, nvmf_rdma_handle_last_wqe_reached);
+		if (rc) {
+			SPDK_WARNLOG("Failed to send LAST_WQE_REACHED event. rqpair %p, err %d\n", rqpair, rc);
 			rqpair->last_wqe_reached = true;
 		}
 		break;
@@ -3137,8 +3163,9 @@ nvmf_process_ib_event(struct spdk_nvmf_rdma_device *device)
 		spdk_trace_record(TRACE_RDMA_IBV_ASYNC_EVENT, 0, 0,
 				  (uintptr_t)rqpair->cm_id, event.event_type);
 		if (nvmf_rdma_update_ibv_state(rqpair) == IBV_QPS_ERR) {
-			if (nvmf_rdma_send_qpair_async_event(rqpair, nvmf_rdma_handle_sq_drained)) {
-				SPDK_ERRLOG("Failed to send SQ_DRAINED event for rqpair %p\n", rqpair);
+			rc = nvmf_rdma_send_qpair_async_event(rqpair, nvmf_rdma_handle_sq_drained);
+			if (rc) {
+				SPDK_WARNLOG("Failed to send SQ_DRAINED event. rqpair %p, err %d\n", rqpair, rc);
 				nvmf_rdma_handle_sq_drained(rqpair);
 			}
 		}
@@ -3510,12 +3537,53 @@ nvmf_rdma_poll_group_add(struct spdk_nvmf_transport_poll_group *group,
 	return 0;
 }
 
+static int
+nvmf_rdma_poll_group_remove(struct spdk_nvmf_transport_poll_group *group,
+			    struct spdk_nvmf_qpair *qpair)
+{
+	struct spdk_nvmf_rdma_qpair *rqpair;
+
+	rqpair = SPDK_CONTAINEROF(qpair, struct spdk_nvmf_rdma_qpair, qpair);
+	assert(group->transport->tgt != NULL);
+
+	rqpair->destruct_channel = spdk_get_io_channel(group->transport->tgt);
+
+	if (!rqpair->destruct_channel) {
+		SPDK_WARNLOG("failed to get io_channel, qpair %p\n", qpair);
+		return 0;
+	}
+
+	/* Sanity check that we get io_channel on the correct thread */
+	if (qpair->group) {
+		assert(qpair->group->thread == spdk_io_channel_get_thread(rqpair->destruct_channel));
+	}
+
+	return 0;
+}
+
 static int
 nvmf_rdma_request_free(struct spdk_nvmf_request *req)
 {
 	struct spdk_nvmf_rdma_request *rdma_req = SPDK_CONTAINEROF(req, struct spdk_nvmf_rdma_request, req);
 	struct spdk_nvmf_rdma_transport *rtransport = SPDK_CONTAINEROF(req->qpair->transport,
 			struct spdk_nvmf_rdma_transport, transport);
+	struct spdk_nvmf_rdma_qpair *rqpair = SPDK_CONTAINEROF(rdma_req->req.qpair,
+			struct spdk_nvmf_rdma_qpair, qpair);
+
+	/*
+	 * AER requests are freed when a qpair is destroyed. The recv corresponding to that request
+	 * needs to be returned to the shared receive queue or the poll group will eventually be
+	 * starved of RECV structures.
+	 */
+	if (rqpair->srq && rdma_req->recv) {
+		int rc;
+		struct ibv_recv_wr *bad_recv_wr;
+
+		rc = ibv_post_srq_recv(rqpair->srq, &rdma_req->recv->wr, &bad_recv_wr);
+		if (rc) {
+			SPDK_ERRLOG("Unable to re-post rx descriptor\n");
+		}
+	}
+
 	_nvmf_rdma_request_free(rdma_req, rtransport);
 	return 0;
@@ -4225,6 +4293,7 @@ const struct spdk_nvmf_transport_ops spdk_nvmf_transport_rdma = {
 	.get_optimal_poll_group = nvmf_rdma_get_optimal_poll_group,
 	.poll_group_destroy = nvmf_rdma_poll_group_destroy,
 	.poll_group_add = nvmf_rdma_poll_group_add,
+	.poll_group_remove = nvmf_rdma_poll_group_remove,
 	.poll_group_poll = nvmf_rdma_poll_group_poll,
 
 	.req_free = nvmf_rdma_request_free,
@@ -44,6 +44,7 @@
 #include "spdk/vhost.h"
 
 #include "vhost_internal.h"
+#include <rte_version.h>
 
 /* Minimal set of features supported by every SPDK VHOST-BLK device */
 #define SPDK_VHOST_BLK_FEATURES_BASE (SPDK_VHOST_FEATURES | \
@@ -801,6 +802,32 @@ to_blk_dev(struct spdk_vhost_dev *vdev)
 	return SPDK_CONTAINEROF(vdev, struct spdk_vhost_blk_dev, vdev);
 }
 
+static int
+vhost_session_bdev_resize_cb(struct spdk_vhost_dev *vdev,
+			     struct spdk_vhost_session *vsession,
+			     void *ctx)
+{
+#if RTE_VERSION >= RTE_VERSION_NUM(20, 02, 0, 0)
+	SPDK_NOTICELOG("bdev send slave msg to vid(%d)\n", vsession->vid);
+	rte_vhost_slave_config_change(vsession->vid, false);
+#else
+	SPDK_NOTICELOG("bdev does not support resize until DPDK submodule version >= 20.02\n");
+#endif
+
+	return 0;
+}
+
+static void
+blk_resize_cb(void *resize_ctx)
+{
+	struct spdk_vhost_blk_dev *bvdev = resize_ctx;
+
+	spdk_vhost_lock();
+	vhost_dev_foreach_session(&bvdev->vdev, vhost_session_bdev_resize_cb,
+				  NULL, NULL);
+	spdk_vhost_unlock();
+}
+
 static void
 vhost_dev_bdev_remove_cpl_cb(struct spdk_vhost_dev *vdev, void *ctx)
 {
@@ -845,6 +872,29 @@ bdev_remove_cb(void *remove_ctx)
 	spdk_vhost_unlock();
 }
 
+static void
+bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
+	      void *event_ctx)
+{
+	SPDK_DEBUGLOG(SPDK_LOG_VHOST_BLK, "Bdev event: type %d, name %s\n",
+		      type,
+		      bdev->name);
+
+	switch (type) {
+	case SPDK_BDEV_EVENT_REMOVE:
+		SPDK_NOTICELOG("bdev name (%s) received event(SPDK_BDEV_EVENT_REMOVE)\n", bdev->name);
+		bdev_remove_cb(event_ctx);
+		break;
+	case SPDK_BDEV_EVENT_RESIZE:
+		SPDK_NOTICELOG("bdev name (%s) received event(SPDK_BDEV_EVENT_RESIZE)\n", bdev->name);
+		blk_resize_cb(event_ctx);
+		break;
+	default:
+		SPDK_NOTICELOG("Unsupported bdev event: type %d\n", type);
+		break;
+	}
+}
+
 static void
 free_task_pool(struct spdk_vhost_blk_session *bvsession)
 {
@@ -1234,7 +1284,7 @@ spdk_vhost_blk_construct(const char *name, const char *cpumask, const char *dev_
 		vdev->virtio_features |= (1ULL << VIRTIO_BLK_F_FLUSH);
 	}
 
-	ret = spdk_bdev_open(bdev, true, bdev_remove_cb, bvdev, &bvdev->bdev_desc);
+	ret = spdk_bdev_open_ext(dev_name, true, bdev_event_cb, bvdev, &bvdev->bdev_desc);
 	if (ret != 0) {
 		SPDK_ERRLOG("%s: could not open bdev '%s', error=%d\n",
 			    name, dev_name, ret);
@@ -443,6 +443,15 @@ idxd_batch_start(struct spdk_io_channel *ch)
 	return (struct spdk_accel_batch *)spdk_idxd_batch_create(chan->chan);
 }
 
+static int
+idxd_batch_cancel(struct spdk_io_channel *ch, struct spdk_accel_batch *_batch)
+{
+	struct idxd_io_channel *chan = spdk_io_channel_get_ctx(ch);
+	struct idxd_batch *batch = (struct idxd_batch *)_batch;
+
+	return spdk_idxd_batch_cancel(chan->chan, batch);
+}
+
 static int
 idxd_batch_submit(struct spdk_io_channel *ch, struct spdk_accel_batch *_batch,
 		  spdk_accel_completion_cb cb_fn, void *cb_arg)
@@ -561,6 +570,7 @@ static struct spdk_accel_engine idxd_accel_engine = {
 	.copy = idxd_submit_copy,
 	.batch_get_max = idxd_batch_get_max,
 	.batch_create = idxd_batch_start,
+	.batch_cancel = idxd_batch_cancel,
 	.batch_prep_copy = idxd_batch_prep_copy,
 	.batch_prep_fill = idxd_batch_prep_fill,
 	.batch_prep_dualcast = idxd_batch_prep_dualcast,
@@ -390,6 +390,30 @@ ioat_batch_prep_crc32c(struct spdk_io_channel *ch,
 	return 0;
 }
 
+static int
+ioat_batch_cancel(struct spdk_io_channel *ch, struct spdk_accel_batch *batch)
+{
+	struct ioat_accel_op *op;
+	struct ioat_io_channel *ioat_ch = spdk_io_channel_get_ctx(ch);
+
+	if ((struct spdk_accel_batch *)&ioat_ch->hw_batch != batch) {
+		SPDK_ERRLOG("Invalid batch\n");
+		return -EINVAL;
+	}
+
+	/* Flush the batched HW items, there's no way to cancel these without resetting. */
+	spdk_ioat_flush(ioat_ch->ioat_ch);
+	ioat_ch->hw_batch = false;
+
+	/* Return batched software items to the pool. */
+	while ((op = TAILQ_FIRST(&ioat_ch->sw_batch))) {
+		TAILQ_REMOVE(&ioat_ch->sw_batch, op, link);
+		TAILQ_INSERT_TAIL(&ioat_ch->op_pool, op, link);
+	}
+
+	return 0;
+}
+
 static int
 ioat_batch_submit(struct spdk_io_channel *ch, struct spdk_accel_batch *batch,
 		  spdk_accel_completion_cb cb_fn, void *cb_arg)
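The `ioat_batch_cancel` added above flushes the hardware descriptors (which cannot be cancelled without a reset) and then walks the software batch list, returning each op to the channel's free pool. A self-contained sketch of that drain loop with generic `sys/queue.h` TAILQs follows; `op`, `op_list`, and `drain_batch` are hypothetical names, not SPDK API.

```c
#include <assert.h>
#include <stdlib.h>
#include <sys/queue.h>

/* Hypothetical stand-in for a batched software operation. */
struct op {
	TAILQ_ENTRY(op) link;
};
TAILQ_HEAD(op_list, op);

/* Move every batched op back to the free pool, as the cancel path does;
 * returns how many ops were recycled. */
static int drain_batch(struct op_list *batch, struct op_list *pool)
{
	struct op *op;
	int moved = 0;

	while ((op = TAILQ_FIRST(batch)) != NULL) {
		TAILQ_REMOVE(batch, op, link);
		TAILQ_INSERT_TAIL(pool, op, link);
		moved++;
	}
	return moved;
}
```

Recycling through a pool rather than freeing keeps cancel on the fast path allocation-free, which matters on an I/O submission channel.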
@@ -449,6 +473,7 @@ static struct spdk_accel_engine ioat_accel_engine = {
 	.fill = ioat_submit_fill,
 	.batch_get_max = ioat_batch_get_max,
 	.batch_create = ioat_batch_create,
+	.batch_cancel = ioat_batch_cancel,
 	.batch_prep_copy = ioat_batch_prep_copy,
 	.batch_prep_dualcast = ioat_batch_prep_dualcast,
 	.batch_prep_compare = ioat_batch_prep_compare,
@@ -188,7 +188,7 @@ static struct rte_comp_xform g_decomp_xform = {
 static void vbdev_compress_examine(struct spdk_bdev *bdev);
 static void vbdev_compress_claim(struct vbdev_compress *comp_bdev);
 static void vbdev_compress_queue_io(struct spdk_bdev_io *bdev_io);
-struct vbdev_compress *_prepare_for_load_init(struct spdk_bdev *bdev);
+struct vbdev_compress *_prepare_for_load_init(struct spdk_bdev *bdev, uint32_t lb_size);
 static void vbdev_compress_submit_request(struct spdk_io_channel *ch, struct spdk_bdev_io *bdev_io);
 static void comp_bdev_ch_destroy_cb(void *io_device, void *ctx_buf);
 static void vbdev_compress_delete_done(void *cb_arg, int bdeverrno);
@@ -1284,7 +1284,7 @@ vbdev_compress_base_bdev_hotremove_cb(void *ctx)
  * information for reducelib to init or load.
  */
 struct vbdev_compress *
-_prepare_for_load_init(struct spdk_bdev *bdev)
+_prepare_for_load_init(struct spdk_bdev *bdev, uint32_t lb_size)
 {
 	struct vbdev_compress *meta_ctx;
 
@@ -1306,7 +1306,12 @@ _prepare_for_load_init(struct spdk_bdev *bdev)
 	meta_ctx->backing_dev.blockcnt = bdev->blockcnt;
 
 	meta_ctx->params.chunk_size = CHUNK_SIZE;
-	meta_ctx->params.logical_block_size = bdev->blocklen;
+	if (lb_size == 0) {
+		meta_ctx->params.logical_block_size = bdev->blocklen;
+	} else {
+		meta_ctx->params.logical_block_size = lb_size;
+	}
+
 	meta_ctx->params.backing_io_unit_size = BACKING_IO_SZ;
 	return meta_ctx;
 }
@@ -1334,12 +1339,12 @@ _set_pmd(struct vbdev_compress *comp_dev)
 
 /* Call reducelib to initialize a new volume */
 static int
-vbdev_init_reduce(struct spdk_bdev *bdev, const char *pm_path)
+vbdev_init_reduce(struct spdk_bdev *bdev, const char *pm_path, uint32_t lb_size)
 {
 	struct vbdev_compress *meta_ctx;
 	int rc;
 
-	meta_ctx = _prepare_for_load_init(bdev);
+	meta_ctx = _prepare_for_load_init(bdev, lb_size);
 	if (meta_ctx == NULL) {
 		return -EINVAL;
 	}
@@ -1471,7 +1476,7 @@ comp_bdev_ch_destroy_cb(void *io_device, void *ctx_buf)
 
 /* RPC entry point for compression vbdev creation. */
 int
-create_compress_bdev(const char *bdev_name, const char *pm_path)
+create_compress_bdev(const char *bdev_name, const char *pm_path, uint32_t lb_size)
 {
 	struct spdk_bdev *bdev;
 
@@ -1480,7 +1485,12 @@ create_compress_bdev(const char *bdev_name, const char *pm_path)
 		return -ENODEV;
 	}
 
-	return vbdev_init_reduce(bdev, pm_path);;
+	if ((lb_size != 0) && (lb_size != LB_SIZE_4K) && (lb_size != LB_SIZE_512B)) {
+		SPDK_ERRLOG("Logical block size must be 512 or 4096\n");
+		return -EINVAL;
+	}
+
+	return vbdev_init_reduce(bdev, pm_path, lb_size);
 }
 
 /* On init, just init the compress drivers. All metadata is stored on disk. */
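The check added to `create_compress_bdev()` above accepts only 0 (inherit the base bdev's block size), 512, or 4096. A standalone sketch of that validation, where `validate_lb_size` is a hypothetical name and the constants mirror the `LB_SIZE_4K`/`LB_SIZE_512B` defines introduced in this change set:

```c
#include <assert.h>
#include <errno.h>

#define LB_SIZE_4K   0x1000UL /* 4096 bytes */
#define LB_SIZE_512B 0x200UL  /* 512 bytes */

/* 0 means "use the base bdev's block size"; otherwise only 512/4096 pass. */
static int validate_lb_size(unsigned long lb_size)
{
	if ((lb_size != 0) && (lb_size != LB_SIZE_4K) && (lb_size != LB_SIZE_512B)) {
		return -EINVAL;
	}
	return 0;
}
```

Treating 0 as "unset" lets the RPC parameter stay optional while still rejecting any explicitly supplied but unsupported size.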
@@ -1822,7 +1832,7 @@ vbdev_compress_examine(struct spdk_bdev *bdev)
 		return;
 	}
 
-	meta_ctx = _prepare_for_load_init(bdev);
+	meta_ctx = _prepare_for_load_init(bdev, 0);
 	if (meta_ctx == NULL) {
 		spdk_bdev_module_examine_done(&compress_if);
 		return;
@@ -38,6 +38,9 @@
 
 #include "spdk/bdev.h"
 
+#define LB_SIZE_4K	0x1000UL
+#define LB_SIZE_512B	0x200UL
+
 /**
  * Get the first compression bdev.
 *
|
|||||||
*
|
*
|
||||||
* \param bdev_name Bdev on which compression bdev will be created.
|
* \param bdev_name Bdev on which compression bdev will be created.
|
||||||
* \param pm_path Path to persistent memory.
|
* \param pm_path Path to persistent memory.
|
||||||
|
* \param lb_size Logical block size for the compressed volume in bytes. Must be 4K or 512.
|
||||||
* \return 0 on success, other on failure.
|
* \return 0 on success, other on failure.
|
||||||
*/
|
*/
|
||||||
int create_compress_bdev(const char *bdev_name, const char *pm_path);
|
int create_compress_bdev(const char *bdev_name, const char *pm_path, uint32_t lb_size);
|
||||||
|
|
||||||
/**
|
/**
|
||||||
* Delete compress bdev.
|
* Delete compress bdev.
|
||||||
|
@@ -149,6 +149,7 @@ SPDK_RPC_REGISTER_ALIAS_DEPRECATED(compress_set_pmd, set_compress_pmd)
 struct rpc_construct_compress {
 	char *base_bdev_name;
 	char *pm_path;
+	uint32_t lb_size;
 };
 
 /* Free the allocated memory resource after the RPC handling. */
@@ -163,6 +164,7 @@ free_rpc_construct_compress(struct rpc_construct_compress *r)
 static const struct spdk_json_object_decoder rpc_construct_compress_decoders[] = {
 	{"base_bdev_name", offsetof(struct rpc_construct_compress, base_bdev_name), spdk_json_decode_string},
 	{"pm_path", offsetof(struct rpc_construct_compress, pm_path), spdk_json_decode_string},
+	{"lb_size", offsetof(struct rpc_construct_compress, lb_size), spdk_json_decode_uint32},
 };
 
 /* Decode the parameters for this RPC method and properly construct the compress
|
|||||||
SPDK_COUNTOF(rpc_construct_compress_decoders),
|
SPDK_COUNTOF(rpc_construct_compress_decoders),
|
||||||
&req)) {
|
&req)) {
|
||||||
SPDK_DEBUGLOG(SPDK_LOG_VBDEV_COMPRESS, "spdk_json_decode_object failed\n");
|
SPDK_DEBUGLOG(SPDK_LOG_VBDEV_COMPRESS, "spdk_json_decode_object failed\n");
|
||||||
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INTERNAL_ERROR,
|
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_PARSE_ERROR,
|
||||||
"spdk_json_decode_object failed");
|
"spdk_json_decode_object failed");
|
||||||
goto cleanup;
|
goto cleanup;
|
||||||
}
|
}
|
||||||
|
|
||||||
rc = create_compress_bdev(req.base_bdev_name, req.pm_path);
|
rc = create_compress_bdev(req.base_bdev_name, req.pm_path, req.lb_size);
|
||||||
if (rc != 0) {
|
if (rc != 0) {
|
||||||
spdk_jsonrpc_send_error_response(request, rc, spdk_strerror(-rc));
|
spdk_jsonrpc_send_error_response(request, rc, spdk_strerror(-rc));
|
||||||
goto cleanup;
|
goto cleanup;
|
||||||
|
@@ -135,6 +135,15 @@ get_other_cache_base(struct vbdev_ocf_base *base)
 	return NULL;
 }
 
+static bool
+is_ocf_cache_running(struct vbdev_ocf *vbdev)
+{
+	if (vbdev->cache.attached && vbdev->ocf_cache) {
+		return ocf_cache_is_running(vbdev->ocf_cache);
+	}
+	return false;
+}
+
 /* Get existing OCF cache instance
  * that is started by other vbdev */
 static ocf_cache_t
@@ -149,7 +158,7 @@ get_other_cache_instance(struct vbdev_ocf *vbdev)
 		if (strcmp(cmp->cache.name, vbdev->cache.name)) {
 			continue;
 		}
-		if (cmp->ocf_cache) {
+		if (is_ocf_cache_running(cmp)) {
 			return cmp->ocf_cache;
 		}
 	}
@@ -190,6 +199,7 @@ static void
 unregister_finish(struct vbdev_ocf *vbdev)
 {
 	spdk_bdev_destruct_done(&vbdev->exp_bdev, vbdev->state.stop_status);
+	ocf_mngt_cache_put(vbdev->ocf_cache);
 	vbdev_ocf_cache_ctx_put(vbdev->cache_ctx);
 	vbdev_ocf_mngt_continue(vbdev, 0);
 }
@@ -230,7 +240,7 @@ remove_core_cache_lock_cmpl(ocf_cache_t cache, void *priv, int error)
 static void
 detach_core(struct vbdev_ocf *vbdev)
 {
-	if (vbdev->ocf_cache && ocf_cache_is_running(vbdev->ocf_cache)) {
+	if (is_ocf_cache_running(vbdev)) {
 		ocf_mngt_cache_lock(vbdev->ocf_cache, remove_core_cache_lock_cmpl, vbdev);
 	} else {
 		vbdev_ocf_mngt_continue(vbdev, 0);
@@ -291,7 +301,7 @@ stop_vbdev_cache_lock_cmpl(ocf_cache_t cache, void *priv, int error)
 static void
 stop_vbdev(struct vbdev_ocf *vbdev)
 {
-	if (!ocf_cache_is_running(vbdev->ocf_cache)) {
+	if (!is_ocf_cache_running(vbdev)) {
 		vbdev_ocf_mngt_continue(vbdev, 0);
 		return;
 	}
@@ -334,7 +344,7 @@ flush_vbdev_cache_lock_cmpl(ocf_cache_t cache, void *priv, int error)
 static void
 flush_vbdev(struct vbdev_ocf *vbdev)
 {
-	if (!ocf_cache_is_running(vbdev->ocf_cache)) {
+	if (!is_ocf_cache_running(vbdev)) {
 		vbdev_ocf_mngt_continue(vbdev, -EINVAL);
 		return;
 	}
@@ -1040,7 +1050,7 @@ start_cache(struct vbdev_ocf *vbdev)
 	ocf_cache_t existing;
 	int rc;
 
-	if (vbdev->ocf_cache) {
+	if (is_ocf_cache_running(vbdev)) {
 		vbdev_ocf_mngt_stop(vbdev, NULL, -EALREADY);
 		return;
 	}
@@ -1050,6 +1060,7 @@ start_cache(struct vbdev_ocf *vbdev)
 		SPDK_NOTICELOG("OCF bdev %s connects to existing cache device %s\n",
 			       vbdev->name, vbdev->cache.name);
 		vbdev->ocf_cache = existing;
+		ocf_mngt_cache_get(vbdev->ocf_cache);
 		vbdev->cache_ctx = ocf_cache_get_priv(existing);
 		vbdev_ocf_cache_ctx_get(vbdev->cache_ctx);
 		vbdev_ocf_mngt_continue(vbdev, 0);
@@ -1070,6 +1081,7 @@ start_cache(struct vbdev_ocf *vbdev)
 		vbdev_ocf_mngt_exit(vbdev, unregister_path_dirty, rc);
 		return;
 	}
+	ocf_mngt_cache_get(vbdev->ocf_cache);
 
 	ocf_cache_set_priv(vbdev->ocf_cache, vbdev->cache_ctx);
 
@@ -2,12 +2,12 @@
 %bcond_with doc
 
 Name: spdk
-Version: master
+Version: 20.07.x
 Release: 0%{?dist}
 Epoch: 0
 URL: http://spdk.io
 
-Source: https://github.com/spdk/spdk/archive/master.tar.gz
+Source: https://github.com/spdk/spdk/archive/v20.07.x.tar.gz
 Summary: Set of libraries and utilities for high performance user-mode storage
 
 %define package_version %{epoch}:%{version}-%{release}
@@ -27,6 +27,7 @@ function install_all_dependencies() {
 	INSTALL_FUSE=true
 	INSTALL_RDMA=true
 	INSTALL_DOCS=true
+	INSTALL_LIBURING=true
 }
 
 function install_liburing() {
@@ -177,12 +177,14 @@ if __name__ == "__main__":
     def bdev_compress_create(args):
         print_json(rpc.bdev.bdev_compress_create(args.client,
                                                  base_bdev_name=args.base_bdev_name,
-                                                 pm_path=args.pm_path))
+                                                 pm_path=args.pm_path,
+                                                 lb_size=args.lb_size))
 
     p = subparsers.add_parser('bdev_compress_create', aliases=['construct_compress_bdev'],
                               help='Add a compress vbdev')
     p.add_argument('-b', '--base_bdev_name', help="Name of the base bdev")
     p.add_argument('-p', '--pm_path', help="Path to persistent memory")
+    p.add_argument('-l', '--lb_size', help="Compressed vol logical block size (optional, if used must be 512 or 4096)", type=int, default=0)
     p.set_defaults(func=bdev_compress_create)
 
     def bdev_compress_delete(args):
@@ -23,17 +23,18 @@ def bdev_set_options(client, bdev_io_pool_size=None, bdev_io_cache_size=None, bd
 
 
 @deprecated_alias('construct_compress_bdev')
-def bdev_compress_create(client, base_bdev_name, pm_path):
+def bdev_compress_create(client, base_bdev_name, pm_path, lb_size):
     """Construct a compress virtual block device.
 
     Args:
         base_bdev_name: name of the underlying base bdev
        pm_path: path to persistent memory
+        lb_size: logical block size for the compressed vol in bytes. Must be 4K or 512.
 
     Returns:
         Name of created virtual block device.
     """
-    params = {'base_bdev_name': base_bdev_name, 'pm_path': pm_path}
+    params = {'base_bdev_name': base_bdev_name, 'pm_path': pm_path, 'lb_size': lb_size}
 
     return client.call('bdev_compress_create', params)
33	test/bdev/bdevperf/common.sh	Normal file
@@ -0,0 +1,33 @@
+bdevperf=$rootdir/test/bdev/bdevperf/bdevperf
+
+function create_job() {
+	local job_section=$1
+	local rw=$2
+	local filename=$3
+
+	if [[ $job_section == "global" ]]; then
+		cat <<- EOF >> "$testdir"/test.conf
+			[global]
+			filename=$filename
+		EOF
+	fi
+	job="[${job_section}]"
+	echo $global
+	cat <<- EOF >> "$testdir"/test.conf
+		${job}
+		filename=$filename
+		bs=1024
+		rwmixread=70
+		rw=${rw}
+		iodepth=256
+		cpumask=0xff
+	EOF
+}
+
+function get_num_jobs() {
+	echo "$1" | grep -oE "Using job config with [0-9]+ jobs" | grep -oE "[0-9]+"
+}
+
+function cleanup() {
+	rm -f $testdir/test.conf
+}
25	test/bdev/bdevperf/conf.json	Normal file
@@ -0,0 +1,25 @@
+{
+  "subsystems": [
+    {
+      "subsystem": "bdev",
+      "config": [
+        {
+          "method": "bdev_malloc_create",
+          "params": {
+            "name": "Malloc0",
+            "num_blocks": 102400,
+            "block_size": 512
+          }
+        },
+        {
+          "method": "bdev_malloc_create",
+          "params": {
+            "name": "Malloc1",
+            "num_blocks": 102400,
+            "block_size": 512
+          }
+        }
+      ]
+    }
+  ]
+}
41
test/bdev/bdevperf/test_config.sh
Executable file
41
test/bdev/bdevperf/test_config.sh
Executable file
@ -0,0 +1,41 @@
#!/usr/bin/env bash

testdir=$(readlink -f $(dirname $0))
rootdir=$(readlink -f $testdir/../../..)
source $rootdir/test/common/autotest_common.sh
source $testdir/common.sh

jsonconf=$testdir/conf.json
testconf=$testdir/test.conf

trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
# Test inheriting filename and rw_mode parameters from global section.
create_job "global" "read" "Malloc0"
create_job "job0"
create_job "job1"
create_job "job2"
create_job "job3"
bdevperf_output=$($bdevperf -t 2 --json $jsonconf -j $testconf 2>&1)
[[ $(get_num_jobs "$bdevperf_output") == "4" ]]

bdevperf_output=$($bdevperf -C -t 2 --json $jsonconf -j $testconf)

cleanup
# Test missing global section.
create_job "job0" "write" "Malloc0"
create_job "job1" "write" "Malloc0"
create_job "job2" "write" "Malloc0"
bdevperf_output=$($bdevperf -t 2 --json $jsonconf -j $testconf 2>&1)
[[ $(get_num_jobs "$bdevperf_output") == "3" ]]

cleanup
# Test inheriting multiple filenames and rw_mode parameters from global section.
create_job "global" "rw" "Malloc0:Malloc1"
create_job "job0"
create_job "job1"
create_job "job2"
create_job "job3"
bdevperf_output=$($bdevperf -t 2 --json $jsonconf -j $testconf 2>&1)
[[ $(get_num_jobs "$bdevperf_output") == "4" ]]
cleanup
trap - SIGINT SIGTERM EXIT
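The script above leans on the trap-then-clear idiom so that `test.conf` is removed even when a check fails mid-run; a self-contained sketch of the same pattern (the temp file and its contents are illustrative only):

```shell
#!/usr/bin/env bash
set -e

# Sketch of the cleanup-on-exit pattern: install a trap for the duration of
# the test so a failed check still removes the temp file, then clear it.
tmpconf=$(mktemp)
cleanup() { rm -f "$tmpconf"; }
trap 'cleanup; exit 1' SIGINT SIGTERM EXIT

echo "bs=1024" >> "$tmpconf"  # stand-in for the create_job calls
grep -q "bs=1024" "$tmpconf"  # stand-in for a check; with set -e a failure exits via the trap

trap - SIGINT SIGTERM EXIT
cleanup
echo "done"
# prints: done
```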
@ -84,7 +84,7 @@ function install_qat() {
 		sudo rm -rf "$GIT_REPOS/QAT"
 	fi
 
-	sudo mkdir "$GIT_REPOS/QAT"
+	mkdir "$GIT_REPOS/QAT"
 
 	tar -C "$GIT_REPOS/QAT" -xzof - < <(wget -O- "$DRIVER_LOCATION_QAT")
@ -33,7 +33,11 @@ function create_vols() {
 	waitforbdev lvs0/lv0
 
 	$rpc_py compress_set_pmd -p "$pmd"
+	if [ -z "$1" ]; then
 	$rpc_py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem
+	else
+		$rpc_py bdev_compress_create -b lvs0/lv0 -p /tmp/pmem -l $1
+	fi
 	waitforbdev COMP_lvs0/lv0
 }
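The new optional argument to `create_vols` follows the usual optional-positional-parameter idiom; a standalone sketch (the `build_args` helper is hypothetical, only the branching mirrors the diff):

```shell
# Hypothetical helper mirroring create_vols' handling of the optional
# "-l <lb_size>" flag: omitted when no argument is given, appended otherwise.
build_args() {
	if [ -z "$1" ]; then
		echo "-b lvs0/lv0 -p /tmp/pmem"
	else
		echo "-b lvs0/lv0 -p /tmp/pmem -l $1"
	fi
}
build_args       # prints: -b lvs0/lv0 -p /tmp/pmem
build_args 512   # prints: -b lvs0/lv0 -p /tmp/pmem -l 512
```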
@ -54,7 +58,7 @@ function run_bdevperf() {
 	bdevperf_pid=$!
 	trap 'killprocess $bdevperf_pid; error_cleanup; exit 1' SIGINT SIGTERM EXIT
 	waitforlisten $bdevperf_pid
-	create_vols
+	create_vols $4
 	$rootdir/test/bdev/bdevperf/bdevperf.py perform_tests
 	destroy_vols
 	trap - SIGINT SIGTERM EXIT
@ -78,7 +82,10 @@ esac
 mkdir -p /tmp/pmem
 
 # per patch bdevperf uses slightly different params than nightly
+# logical block size same as underlying device, then 512 then 4096
 run_bdevperf 32 4096 3
+run_bdevperf 32 4096 3 512
+run_bdevperf 32 4096 3 4096
 
 if [ $RUN_NIGHTLY -eq 1 ]; then
 	run_bdevio
@ -10,6 +10,5 @@ run_test "ocf_bdevperf_iotypes" "$testdir/integrity/bdevperf-iotypes.sh"
 run_test "ocf_stats" "$testdir/integrity/stats.sh"
 run_test "ocf_create_destruct" "$testdir/management/create-destruct.sh"
 run_test "ocf_multicore" "$testdir/management/multicore.sh"
-# Disabled due to issue #1498
-run_test "ocf_persistent_metadata" "$testdir/management/persistent-metadata.sh"
+# run_test "ocf_persistent_metadata" "$testdir/management/persistent-metadata.sh"
 run_test "ocf_remove" "$testdir/management/remove.sh"
@ -127,6 +127,8 @@ DEFINE_STUB_V(spdk_nvme_trid_populate_transport, (struct spdk_nvme_transport_id
 		enum spdk_nvme_transport_type trtype));
 DEFINE_STUB_V(spdk_nvmf_ctrlr_data_init, (struct spdk_nvmf_transport_opts *opts,
 		struct spdk_nvmf_ctrlr_data *cdata));
+DEFINE_STUB(spdk_nvmf_request_complete, int, (struct spdk_nvmf_request *req),
+	    -ENOSPC);
 
 const char *
 spdk_nvme_transport_id_trtype_str(enum spdk_nvme_transport_type trtype)