Compare commits

..

252 Commits

Konrad Sztyber
4a4c905b32 test/bdev: extend chaining test with bdev layer ENOMEM case
The test already checked ENOMEM handling, but it only used bdevs that
support chaining (crypto, malloc), so the bdev layer didn't need to
execute any accel operations.  To force the bdev layer to do that, a
passthru bdev was added, as it doesn't support chaining.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I322a65ccebb0f144c759692fff285cfd44bbab4b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17766
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-05-09 05:35:39 +00:00
Konrad Sztyber
dd06b35ed8 bdev: remove handle_no_mem from push/seq cb
The IOs are never completed with NOMEM from push/sequence callbacks and
NOMEM IOs are already retried in internal callbacks, so there's no point
in calling _bdev_io_handle_no_mem().

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Iecc2a41f2a394836f62d541e6235277f333f226b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17765
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-05-09 05:35:39 +00:00
Konrad Sztyber
b059b49bdf bdev: rename (pull|push)_done callbacks
The functions that were passed as callbacks for the memory domain
pull/push calls were prefixed with an underscore, which doesn't really
explain the difference between the corresponding functions without an
underscore.  So, they're now renamed to *_and_track() to emphasize that
they are additionally responsible for tracking IOs.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia9e56230fe244d2c64d729e97445fae105418a76
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17931
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-09 05:35:39 +00:00
Konrad Sztyber
f8a33650d2 bdev: retry IOs on ENOMEM when pushing bounce data/md
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia7634b570eb7d04c22003337a46630d152171157
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17764
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-09 05:35:39 +00:00
Konrad Sztyber
fafb7d4741 bdev: enqueue IOs on the memory domain queue only when pushing
The IOs don't need to be put onto the io_memory_domain queue if there's
no need for memory domain push.  This makes push_data consistent with
other memory domain operations (pull_data, pull_md, push_md).

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I85d95f6ce580a15b23f56ab5101e49236f341cb1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17763
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-09 05:35:39 +00:00
Jim Harris
6a0d4e5ed8 nvmf: use iterator APIs in nvmf_tgt_destroy_cb
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I27b1b851fc8f47150670636cb65ccba40d1a57d6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17961
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
2023-05-08 13:50:02 +00:00
Jim Harris
820e7c59bf nvmf: refactor nvmf_tgt_destroy_cb
This preps for some upcoming patches as well as
removing two levels of indentation.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I4f685c1e44ec4aa261e68af1786cfc110f451ed5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17960
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-05-08 13:50:02 +00:00
Jim Harris
516639cf37 nvmf: use iterator APIs in nvmf_tgt_create_poll_group()
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I4d9a5dd4655edb8315503e7551aec1926d1cc017
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17959
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-05-08 13:50:02 +00:00
Jim Harris
8d2e6b6711 nvmf: use iterator APIs to generate discovery log
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iedd1c0a92e8b5f839ad4905d8063a04ec47f3d9b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17938
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-05-08 13:50:02 +00:00
Jim Harris
5cf6cd5f1b examples/nvme: fix reconnect memory leaks
1) In submit_single_io(), if an I/O fails to submit,
   we need to free the associated task structure,
   otherwise it gets leaked.
2) When draining I/O, just always check_io() instead
   of only doing it when current_queue_depth > 0.
   This is the simplest way of ensuring that we clean up
   the ns_ctx (including freeing the IO qpairs and
   the qpair pointer array) if the current_queue_depth
   is already 0 when starting to drain.

Fixes issue #2995.
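The leak fix in (1) boils down to freeing the task whenever submission fails. A minimal self-contained sketch of that pattern (hypothetical simplified types, not the actual perf example code):

```c
#include <assert.h>
#include <stdlib.h>

struct perf_task { char iov_base[4096]; };

/* Simulated submission that can fail (e.g. with a full queue). */
static int
submit_io(int should_fail)
{
	return should_fail ? -1 : 0;
}

/* The pattern from fix (1): if submission fails, free the task instead
 * of leaking it; on success it is freed later by the completion path. */
static int
submit_single_io(int should_fail)
{
	struct perf_task *task = malloc(sizeof(*task));

	if (task == NULL) {
		return -1;
	}
	if (submit_io(should_fail) != 0) {
		free(task); /* would leak without this */
		return -1;
	}
	free(task); /* stands in for the completion callback's cleanup */
	return 0;
}
```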

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I53f336c6a11ff63782dc81c087a58feca0e8a5d7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17873
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
2023-05-08 13:38:35 +00:00
Jim Harris
3fefff7218 nvme: remove unnecessary initialization value
spdk_nvme_trid_populate_transport() inits
trstring to an empty string, but that value
is never used - it always gets overwritten
before it is read.

Found by scan-build.

Fixes issue #3003.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I2f5f9bedd39fc540df758ad3e6719ba992552896
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17872
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-05-08 13:38:35 +00:00
Jim Harris
7ad55b80fa nvme: remove deprecated spdk_nvme_ctrlr_prepare_for_reset()
Note that the prepare_for_reset flag in spdk_nvme_ctrlr is
still needed - it's just set now in the nvme_ctrlr_disconnect
path instead of this deprecated and now removed API.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I0a6aa1c72767eb67a84b8928a986e06cbac88240
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17936
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-05-08 13:31:09 +00:00
Jim Harris
3616be85f2 examples/nvme/perf: connect io qpairs asynchronously
This significantly speeds up testing with high connection
workloads (e.g. -P 64), especially with TCP.  We already
set async_mode=true all of the time for the bdev/nvme
module, so there's no reason we shouldn't do it in
perf too.

After allocating all of the IO qpairs, busy poll the
poll group, using the new spdk_nvme_poll_group_all_connected()
API to ensure the qpairs are all connected before proceeding
with I/O.
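The connect-then-busy-poll flow can be sketched with a toy model. The names below are hypothetical stand-ins; the real loop polls the group with spdk_nvme_poll_group_process_completions() and checks spdk_nvme_poll_group_all_connected():

```c
#include <assert.h>
#include <stdbool.h>

#define NUM_QPAIRS 4

/* Toy model of asynchronous qpair connection: each call to the process
 * function advances one pending connection. */
struct poll_group { int connected; };

static void
poll_group_process(struct poll_group *group)
{
	if (group->connected < NUM_QPAIRS) {
		group->connected++;
	}
}

static bool
poll_group_all_connected(const struct poll_group *group)
{
	return group->connected == NUM_QPAIRS;
}

/* Busy-poll until every qpair has finished connecting before starting
 * I/O; returns the number of polls it took. */
static int
wait_for_connect(struct poll_group *group)
{
	int polls = 0;

	while (!poll_group_all_connected(group)) {
		poll_group_process(group);
		polls++;
	}
	return polls;
}
```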

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: If0c3c944cd5f3d87170a5bbf7d766ac1a4dcef7c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17578
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-05-08 10:12:21 +00:00
Jim Harris
366aabdf69 nvme: add spdk_nvme_poll_group_all_connected
Performance tools such as nvme-perf may want to
create lots of qpairs to measure scaling, and then
want to set async_mode = true to amortize the
connection cost across the group of connections.

But we don't want connections to be connecting
in the background while we are doing I/O.  So add
a new API spdk_nvme_poll_group_all_connected to
check if all of the qpairs are connected.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I109f9ee96b6d6d3263e20dc2d3b3e11a475d246d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17637
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
2023-05-08 10:12:21 +00:00
Jim Harris
be79373a97 fio: set FIO_DISKLESSIO flag for spdk engines
This tells fio to not try to use POSIX calls on
"files" specified for an SPDK engine.

Note that w/o DISKLESSIO option set, fio would
figure out that "*" wasn't a real file.  With this
option set, we now need to explicitly set its
real_file_size to 0 to tell fio to ignore it.

Found by Karol Latecki - he noticed that when
specifying lvols in the form "lvs/lvol" that
fio would create an empty "lvs" directory.  Adding
this flag prevents things like this from happening.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5d457631b122ba5eb480813ab9d8aa6578f38277
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17937
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-05-05 23:11:57 +00:00
Amir Haroush
9c274912d0 bdev/ocf: fix possible memory leak in ctx_data_alloc
Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Change-Id: I8b33e62bd6e0f297e6fc325942c501100855fd6c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17939
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
2023-05-05 07:55:06 +00:00
Shuhei Matsumoto
559a97aa7c bdev/nvme: Change if->else to if->return for failover_trid()
This refactoring will reduce the size of the next patch significantly.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I2eb7ec62e6c559d9e69334e73de49e8bf97a35dd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17652
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-05-05 00:10:11 +00:00
Shuhei Matsumoto
681a5aa459 bdev/nvme: Reset I/O disables retry when destroying I/O qpairs
As the RBD bdev module does, the upper layer wants the reset command
to abort or complete all I/Os submitted before the reset command.

To satisfy this requirement, return all I/Os aborted by deleting I/O
qpairs to the upper layer without retry. To do so, enable DNR on those
I/O qpairs. Since the I/O qpairs are deleted and recreated, we do not
have to disable DNR afterwards.

No more I/O comes at a reset I/O because the generic bdev layer already
blocks I/O submission. However, some I/Os may be queued for retry even
after deleting I/O qpairs. Hence, abort all queued I/Os for the bdev
before completing the reset I/O.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I9830026ef5f2b9c28aee92e6ce4018ed8541c808
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16836
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-05-05 00:10:11 +00:00
Shuhei Matsumoto
49d3a5e47c nvme: The upper layer controls DNR dynamically for I/O aborts
When I/O error resiliency is supported, most DNR parameters for internal
APIs were cleared. However, for some cases, especially for the reset I/O
command, the upper layer wants the NVMe driver to return I/O errors
immediately without retry even if the upper layer enables I/O error retry.

To satisfy such a requirement, add an abort_dnr variable to the spdk_nvme_qpair
structure and internal abort APIs use the abort_dnr variable. A public API
spdk_nvme_qpair_set_abort_dnr() can change the abort_dnr variable dynamically.

The public spdk_nvme_transport_ops structure is not changed to avoid
premature changes.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I486a1b3ad8411f9fa261a2bf3a45aea9da292e9c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17099
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-05-05 00:10:11 +00:00
Shuhei Matsumoto
0ba9ba5c40 bdev/nvme: Reset I/O cancels reconnect timer and starts reconnection
Previously, if a reconnect timer was registered when a reset request
came, the reset request failed with -EBUSY. However, this means the
reset request was queued for a long time until the reconnect timer
expired.

When a reconnect timer is registered, reset is not actually in progress.
Hence, a new reset request can cancel the reconnect timer and can start
reconnection safely.

Add a unit test case to verify this change.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ied8dd0ad822d2fd6829d88cd56cb36bd4fad13f9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16823
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-05-05 00:10:11 +00:00
Amir Haroush
6b79f76769 bdev/ocf: add bdev_ocf_reset_stats RPC
Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Change-Id: Ife91df62099e14d328a767b1bbb3ddd3ded57264
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17916
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-05-04 14:05:43 +00:00
Kamil Godzwon
e236311e39 test/autobuild: Update llvm_precompile function to handle newer CLANG versions
The llvm_precompile function checks for the CLANG version available on the machine
using bash regex and searches for fuzzer libraries in a path based on the full CLANG
version number (e.g. /usr/lib64/clang/15.0.3/...).

However, on the newest Fedora distribution, the path has changed and the fuzzer
libraries couldn't be found. Currently, the CLANG libraries path contains only the
major version number (/usr/lib64/clang/16).

To address this issue, the function has been updated to search only for the major
CLANG version number instead of the full version number. Instead of using clang_version,
the function now uses clang_num, because in every Fedora distribution there is a
directory or symlink that points to the right CLANG version.

e.g. symlinks
/usr/lib64/clang/13 -> /usr/lib64/clang/13.0.1
/usr/lib64/clang/15 -> /usr/lib64/clang/15.0.3

or directory:
/usr/lib64/clang/16

Fixes #3000
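The major-version extraction amounts to parsing the leading integer out of the version string; a minimal sketch (the actual function does this with bash regex, not C):

```c
#include <assert.h>
#include <stdio.h>

/* Parse the major component out of a full clang version string, e.g.
 * "15.0.3" -> 15, mirroring the switch from full-version library paths
 * to major-only ones like /usr/lib64/clang/16. */
static int
clang_major(const char *version)
{
	int major = -1;

	if (sscanf(version, "%d", &major) != 1) {
		return -1;
	}
	return major;
}
```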

Signed-off-by: Kamil Godzwon <kamilx.godzwon@intel.com>
Change-Id: Iaf0dedc2bb3956cf06796e2eb60a5fa6f492b780
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17907
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-04 14:04:04 +00:00
Amir Haroush
72e058bba3 test/setup: Fix dm_mount test for slow hosts
On some hosts, it might take 1 or 2 seconds for the
mapper device to appear under /dev. In this case, the
test will fail because we check whether the device
exists immediately. Giving it a chance to come up
allows the test to pass.

Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Change-Id: I996d84861333d29d5c9370a2c5a471e7962d91b1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17912
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-05-04 14:02:08 +00:00
Mike Gerdts
6828ed1807 lvol: add spdk_lvol_is_degraded
This is mostly a wrapper around spdk_blob_is_degraded(), but it also
performs a NULL check on lvol->blob. Since an lvol without a blob cannot
perform IO, it returns true in that case.

The two callers of spdk_blob_is_degraded() in vbdev_lvol.c have been
updated to use spdk_lvol_is_degraded().
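The wrapper logic can be sketched as follows (hypothetical simplified types; the real spdk_lvol and spdk_blob are opaque SPDK structures):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the SPDK structures. */
struct blob { bool degraded; };
struct lvol { struct blob *blob; };

static bool
blob_is_degraded(const struct blob *blob)
{
	return blob->degraded;
}

/* The wrapper pattern: an lvol without a blob cannot perform IO, so a
 * NULL blob counts as degraded. */
static bool
lvol_is_degraded(const struct lvol *lvol)
{
	if (lvol->blob == NULL) {
		return true;
	}
	return blob_is_degraded(lvol->blob);
}
```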

Change-Id: I11dc682a26d971c8854aeab280c8199fced358c3
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17896
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-05-03 17:42:53 +00:00
Mike Gerdts
08650f8629 lvol: lvol destruction race leads to null deref
As an lvolstore is being destroyed, _vbdev_lvs_remove() starts an
iteration through the lvols to delete each one, ultimately leading to
the destruction of the lvolstore with a call to lvs_free(). The callback
passed to vbdev_lvs_destruct() is always called asynchronously via
spdk_io_device_unregister() in bs_free().

When the lvolstore resides on bdevs that perform async IO (i.e. most
bdevs other than malloc), this gives a small window when the lvol bdev
is not registered but a lookup with spdk_lvol_get_by_uuid() or
spdk_lvol_get_by_names() will succeed. If rpc_bdev_lvol_delete() runs
during this window, it can get a reference to an lvol that has just been
unregistered and lvol->blob may be NULL. This lvol is then passed to
vbdev_lvol_destroy().

Before this fix, vbdev_lvol_destroy() would call:

   spdk_blob_is_degraded(lvol->blob);

Which would then lead to a NULL pointer dereference, as
spdk_blob_is_degraded() assumes a valid blob is passed. While a NULL
check would avoid this particular problem, a NULL blob is not
necessarily caused by the condition described above. It would be better
to flag the lvolstore's destruction before returning from
vbdev_lvs_destruct() and use that flag to prevent operations on the
lvolstore that is being deleted. Such a flag already exists in the form
of 'lvs_bdev->req != NULL', but that is set too late to close this race.

This fix introduces lvs_bdev->removal_in_progress which is set prior to
returning from vbdev_lvs_unload() and vbdev_lvs_destruct(). It is
checked by vbdev_lvol_destroy() before trying to destroy the lvol.  Now,
any lvol destruction initiated by something other than
vbdev_lvs_destruct() while an lvolstore unload or destroy is in progress
will fail with -ENODEV.

Fixes issue: #2998
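The flag-based guard can be sketched as follows (hypothetical simplified type; the real lvs_bdev carries much more state):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Simplified stand-in for the lvolstore bdev context. */
struct lvs_bdev { bool removal_in_progress; };

/* Set the flag before returning from the (asynchronous) unload/destruct
 * path, closing the window in which a lookup can still find an lvol. */
static void
lvs_destruct_begin(struct lvs_bdev *lvs_bdev)
{
	lvs_bdev->removal_in_progress = true;
}

/* Destroy refuses to act once teardown is underway. */
static int
lvol_destroy(struct lvs_bdev *lvs_bdev)
{
	if (lvs_bdev->removal_in_progress) {
		return -ENODEV;
	}
	return 0;
}
```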

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I4d861879097703b0d8e3180e6de7ad6898f340fd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17891
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-05-03 17:42:53 +00:00
Mike Gerdts
aee609e17c test/lvol: unlink aio files at start of test
This automatically cleans up aio files left over from earlier aborted
runs. This helps streamline development of new tests and should have no
impact on CI.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Id65f60cdfc9969fda1dcdd17e60643ad87f45de7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17898
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-05-03 17:42:53 +00:00
Amir Haroush
8b05f7bea6 bdev/ocf: add missing name to bdev_ocf_get_stats example
Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Change-Id: I538d5de79529fff3567e9fe89eb6739bf3f21e8c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17917
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-05-03 17:42:39 +00:00
Mike Gerdts
f7cc6174ef blob: log blob ID as hex, again
This is a followup to commit f4dc558245
which strove to log blob IDs as hex to make small blob IDs more
recognizable. That commit missed a few cases where the blob ID is logged
as decimal.
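Formatting the ID in hex keeps small blob IDs recognizable at a glance, unlike their large decimal representations; a minimal sketch (the helper name here is illustrative):

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Render a 64-bit blob ID as 0x-prefixed hex. */
static int
format_blob_id(char *buf, size_t len, uint64_t blobid)
{
	return snprintf(buf, len, "0x%" PRIx64, blobid);
}
```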

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I75d1b5973ee7e812f7caf0e826d3edbcba126743
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17641
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
2023-05-03 17:42:02 +00:00
Amir Haroush
fdeb57c0a1 OCF: fix compilation dependencies
We don't have dependency files for OCF sources/headers.
For example, if someone runs 'touch metadata_collision.h',
nothing will be recompiled.
With this fix, all the relevant files will be recompiled.

Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Change-Id: I35b1c1f80a60f4be59cdca95f68bbafc7a212774
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17914
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-03 17:41:37 +00:00
Amir Haroush
04bc3962ad markdownlint: set indent 2 to rule MD007
The default indent is 3, so we must set it to 2,
as our md files are all indented with 2.

Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Change-Id: I76c501311b6a4443dc6fc655894487b762d67abb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17913
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-03 17:41:17 +00:00
Sebastian Brzezinka
d11222e239 app/fuzz: discard randoms of insufficient length
LLVMFuzzerRunDriver does not allow specifying a minimum input length,
so return immediately when the data is insufficient.

Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: I306e1774b17b04108f2454b2fdaadb4d912bd274
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17884
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-05-03 17:39:52 +00:00
Shuhei Matsumoto
479ad83ebe bdev: Use unified split logic for write_zeroes command fallback
The write_zeroes command fallback had used its own split logic, but
multiple writes were serialized.

Use the unified split logic also for the write_zeroes command fallback.

This not only improves the performance but also simplifies the code.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I955870947ae036482871453b4870f06f6f7f947b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17902
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
2023-05-03 16:01:48 +00:00
Shuhei Matsumoto
af92c28b9c bdev: Calculate max_write_zeroes once at bdev registration for fallback case
As with the copy command, the calculation of the max write_zeroes size
for the fallback case includes division and is costly. The result is
constant for each bdev. Hence, we can calculate it only once and store
it in bdev->max_write_zeroes at bdev registration. However, in unit
tests, bdev->blocklen and bdev->md_len can be changed dynamically.
Hence, adjust bdev->max_write_zeroes for such changes.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I16e4980e7a283caa6c995a7dc61f7e77585d464e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17911
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-03 16:01:48 +00:00
Shuhei Matsumoto
2dcaa3851f bdev: Fix max write_zeroes calculation for fallback case
ZERO_BUFFER_SIZE is in bytes, but it is easier to calculate the max
write_zeroes size in blocks first and then take the minimum of that and
remaining_num_blocks, rather than converting remaining_num_blocks to
num_bytes. This is helpful for storing the result into
bdev->max_write_zeroes for the fallback case.

We have one small fix in this patch. As we recently fixed in
bdev_io_get_max_buf_len(), spdk_bdev_get_buf_align() - 1 is the correct
value to get an aligned length.
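The blocks-first minimum can be sketched as follows (the ZERO_BUFFER_SIZE value here is illustrative, not the real constant from bdev.c):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative buffer size; the real ZERO_BUFFER_SIZE may differ. */
#define ZERO_BUFFER_SIZE (64 * 1024)

/* Compute the write_zeroes split size in blocks first, then clamp to
 * the remaining length, instead of converting remaining_num_blocks to
 * bytes. */
static uint64_t
max_write_zeroes_blocks(uint32_t block_size, uint64_t remaining_num_blocks)
{
	uint64_t max_blocks = ZERO_BUFFER_SIZE / block_size;

	return max_blocks < remaining_num_blocks ? max_blocks : remaining_num_blocks;
}
```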

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I104bc837c9eee1303664bfdb3559b0e840d6f0e5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17910
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-05-03 16:01:48 +00:00
Shuhei Matsumoto
0c1df53e7a bdev: Copy command fallback supports split to make copy size unlimited
The generic bdev layer has a fallback mechanism for the copy command
used when the backend bdev module does not support it. However, its max
size is limited. To remove the limitation, the fallback supports split by
using the unified split logic rather than following the write zeroes
command.

bdev_copy_should_split() and bdev_copy_split() use spdk_bdev_get_max_copy()
rather than referring to bdev->max_copy to include the fallback case.

Then, spdk_bdev_copy_blocks() does the following.

If the copy size is large and should be split, use the generic split
logic regardless of whether copy is supported or not.
If copy is supported, send the copy request, or if copy is not
supported, emulate it using regular read and write requests.

Add a unit test case to verify this addition.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Iaf51db56bb4b95f99a0ea7a0237d8fa8ae039a54
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17073
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-03 16:01:48 +00:00
Shuhei Matsumoto
bf8f5afa44 bdev: Small clean up for copy command fallback
As a name suffix, _done has been used more often than _complete for
fallback function names. 100 chars per line is implicitly suggested.

Do these small clean up in this patch.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Id14dd3f09be8fd49b947b7a8f8b87108fb56c346
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17900
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
2023-05-03 16:01:48 +00:00
Shuhei Matsumoto
1ce7786f42 bdev: Calculate max_copy once at bdev registration for fallback case
The calculation of the max copy size for the fallback case includes
division and is costly. The result is constant for each bdev. Hence, we
can calculate it only once and store it in bdev->max_copy at bdev
registration. This calculation is almost the same as the max
write_zeroes calculation for the fallback case. To reuse it, the helper
function is named bdev_get_max_write() and takes a num_bytes parameter.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Iac83a1f16b908d8b36b51d9c51782de40313b6c8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17909
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
2023-05-03 16:01:48 +00:00
Shuhei Matsumoto
cec70a601f ut/bdev: Configure bdev size and iobuf for all test cases
The following patches will change spdk_bdev_register() to access iobuf
and bdev's blocklen and blockcnt.
Hence, we have to configure these correctly for all test cases.

Move ut_init/fini_bdev() up in a file. Add missing ut_init/fini_bdev()
and allocate/free_bdev() calls for some test cases. Add blockcnt and
blocklen to allocate_vbdev().

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Iccbb1cfe4dcdc4496f15304b5362d76d5296607f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17908
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-05-03 16:01:48 +00:00
Shuhei Matsumoto
5bced73616 bdev: Fix spdk_bdev_get_max_copy() for fallback case
As we recently fixed in bdev_io_get_max_buf_len(), spdk_bdev_get_buf_align() - 1
is the correct value to get an aligned length.

_bdev_get_block_size_with_md() considers both interleaved metadata and
separate metadata cases. It is simpler to use
_bdev_get_block_size_with_md().

The copy command fallback uses the write command. As the write_zeroes
fallback does, bdev->write_unit_size should be considered.

Fix all in this patch.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I88fe1b250289f2bab7b541523e8be931eeb8150c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17899
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-05-03 16:01:48 +00:00
Mike Gerdts
c9f3613fcd thread: detect spinlocks that are not initialized
If spdk_spin_lock() is called on an uninitialized spinlock, it will
deadlock. This commit detects whether a lock is initialized and aborts
instead of deadlocking.
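One way to detect an uninitialized lock is to stamp it with a magic value at init time and check the stamp before locking; a minimal sketch (hypothetical layout, not the real struct spdk_spinlock):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SPINLOCK_MAGIC 0xdeadbeefcafef00dULL

struct sim_spinlock {
	uint64_t initialized;
	/* the actual lock member is omitted in this sketch */
};

static void
sim_spin_init(struct sim_spinlock *lock)
{
	lock->initialized = SPINLOCK_MAGIC;
}

/* The real code aborts when the check fails; the sketch just reports
 * it, so uninitialized memory is caught instead of deadlocking. */
static bool
sim_spin_lock_check(const struct sim_spinlock *lock)
{
	return lock->initialized == SPINLOCK_MAGIC;
}
```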

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ie7497633091edd4127c06ca0530e9a1dff530d1b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16002
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-05-02 22:32:01 +00:00
Mike Gerdts
3d9395c69e thread: spinlock aborts print stack traces
Debug builds have information about when each spinlock was initialized,
last locked and last unlocked. This commit logs that information when
a spinlock operation aborts.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I11232f4000f04d222dcaaed44c46303b7ea6cf6b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16001
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-02 22:32:01 +00:00
Mike Gerdts
adc2ca50e9 scripts: gdb needs a pretty printer for spinlocks
In debug builds, SPDK spinlocks will have stack traces that track where
they were allocated, last locked, and last unlocked. This adds gdb
pretty printers to make that information easily visible. See the updates
in doc/gdb_macros.md for details.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I4f903c588d9384c4005eec01348fa5c2d3cab5db
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16000
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-05-02 22:32:01 +00:00
Mike Gerdts
531258aa51 thread: get debug stack traces on spinlocks
To help debug spinlocks, capture stack traces as spinlocks are used.
Future commits in this series will make debugging with these stack
traces easier.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I597b730ca771ea3c5b831f5ba4058d359215f7f6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15998
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-05-02 22:32:01 +00:00
Amir Haroush
268078c128 CHANGELOG: OCF deprecation notice has been removed as Huawei takes ownership
Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Change-Id: I81a5445320d90e2ece1c8154508c2739a6a82444
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17895
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-05-02 22:31:16 +00:00
Amir Haroush
b4d441fd22 Revert "deprecation: remove Open CAS Framework"
This reverts commit 32908cbfc8.

The OCF deprecation notice has been removed, as
Huawei is picking up support for the OCF project.

Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Change-Id: I007e80bc74dc50cfa9b8cde97fc6fdc9608d7ebd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17894
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-05-02 22:31:16 +00:00
Amir Haroush
10db58ef77 Revert "ocf: clarify deprecation notice"
This reverts commit c5224a96ae.

The OCF deprecation notice has been removed, as
Huawei is picking up support for the OCF project.

Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Change-Id: I80ebfe75eaa1a9b96249ed578fcaff6e9576928f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17893
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
2023-05-02 22:31:16 +00:00
Mike Gerdts
ff12a5ed6a bdev_gpt: use unique partition GUID as bdev UUID
In releases of SPDK prior to v23.01, GPT bdevs had a random UUID. This
ended with commit a1c7ae2d3f, which is OK
because a non-persistent UUID is not all that useful.

Per Table 5.6 in Section 5.3.3 of UEFI Spec 2.3, each partition has a
16-byte UniquePartitionGUID:

  GUID that is unique for every partition entry. Every partition ever
  created will have a unique GUID. This GUID must be assigned when the
  GPT Partition Entry is created.  The GPT Partition Entry is created
  whenever the NumberOfPartitionEntries in the GPT Header is increased
  to include a larger range of addresses.

With this change, GPT bdevs use this unique partition GUID as the bdev's
UUID.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Id8e8aa9e7903d31f199e8cfdb487e45ce1524d7b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17351
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
2023-05-02 18:59:58 +00:00
Mike Gerdts
54db60cdb3 bdev_part: allow UUID to be specified
This introduces spdk_bdev_part_construct_ext(), which takes an options
structure as an optional parameter. The options structure has one
option: uuid.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I5e9fdc8e88b78b303e60a0e721d7a74854ac37a9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17835
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-05-02 18:59:58 +00:00
Alexey Marchuk
a347d3e747 accel/dpdk_cryptodev: Fix use of uninitialized variable
rc might not be initialized, so it was not correct to
use it in this place.

Fixes 6b7cca1542 accel/dpdk_cryptodev: Handle OP_STATUS_SUCCESS

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: Ifd2b3032afd6830bd851adb61f68ae4fa9621d33
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17656
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-05-02 18:50:41 +00:00
Alexey Marchuk
d7b2f5b96e bdev/crypto: Put accel buffer when write completes
The accel buffer is released when the encrypt operation
completes; however, that doesn't mean the base
bdev has finished writing the encrypted data. As a result,
the accel buffer might be reused in another IO, which
leads to data corruption.

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I1acf7c30da2f92989ecc44e96b00f7609058ec5a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17655
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
2023-05-02 18:50:41 +00:00
Konrad Sztyber
599aee6003 bdev: add extra function when pushing bounce data
This is done in preparation for retrying IOs on ENOMEM when pushing
bounce data.  Also, rename md_buffer to md_buf to keep the naming
consistent with other code which uses this abbreviation.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I014f178a45a2a751ecca40d119f45bf323f37d0c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17762
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-05-02 18:48:27 +00:00
Konrad Sztyber
28bcf6a760 bdev: retry IOs on ENOMEM from pull/append_copy
The IOs will now be retried after ENOMEM is received when doing memory
domain pull or appending an accel copy.  The retries are performed using
the mechanism that's already in place for IOs completed with
SPDK_BDEV_IO_STATUS_NOMEM.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I284643bf9971338094e14617974f7511f745f24e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17761
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-02 18:48:27 +00:00
Konrad Sztyber
7952ef88e0 bdev: count push/pull/seq_finish as io_outstanding
The IOs with an outstanding memory domain push/pull or accel sequence
finish operation are now added to the io_outstanding counter.  It'll be
necessary to correctly calculate nomem_threshold when handling ENOMEM
from those operations.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ice1fb94f1c9054a3a96312a0960ac5085d0b21bc
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17760
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-05-02 18:48:27 +00:00
Konrad Sztyber
6ed8bdf7d7 bdev: remove leading underscore from _bdev_io_(inc|dec)rement_outstanding
A leading underscore usually indicates a function providing the
actual implementation for something that's called from some other
wrapper function without the leading underscore.  That is not the case
for these functions, so this patch removes the leading underscores.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I6e1186b156116249ee53a3845ae99ba87db5122b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17868
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-05-02 18:48:27 +00:00
Konrad Sztyber
7cb6475ab1 bdev: add _bdev_io_increment_outstanding()
In the next patches we'll need to increment the io_outstanding from a
few more places, so it'll be good to have a dedicated function for that.
Also, move _bdev_io_decrement_outstanding() up, so that both functions
are near each other.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I1af5dbe288f7f701c8ba5e85406f02330ae21a39
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17759
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
2023-05-02 18:48:27 +00:00
Konrad Sztyber
7c528fdbe1 bdev: add common sequence finish callback
There are some common operations that need to be done each time a
sequence is executed (and more will be added in the following patches),
so it makes sense to have a common callback. data_transfer_cpl is used
for executing the user's callbacks since it's unused at this point.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I4570acbdbe158512d13c31c0ee0c7bb7bf62d18c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17678
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-02 18:48:27 +00:00
Konrad Sztyber
d704e6a025 bdev: keep IOs on the io_memory_domain queue during pull/push
The IOs are now kept on the io_memory_domain queue only if they have an
outstanding pull/push operation.  It'll make it easier to support
retrying pull/push in case of ENOMEM.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: If5a54fac532206ee8472bacf364a5ef6cde8edea
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17677
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-02 18:48:27 +00:00
Konrad Sztyber
168bc2673e bdev: allow different ways of handling nomem IOs
This is a preparation for reusing the code handling nomem_io for
other types of NOMEM errors (e.g. from pull/push/append_copy).  This
patch doesn't actually change anything functionally - only IOs completed
by a module with SPDK_BDEV_IO_STATUS_NOMEM status are retried.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I12ecb2efcf2d2cdf75b302f9f766b4c16ac99f3e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17676
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
2023-05-02 18:48:27 +00:00
Konrad Sztyber
252aea5fad bdev: move adding IOs to the nomem_io queue to functions
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I0da93c55371652c5725da6cf602fa40391670da3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17867
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-02 18:48:27 +00:00
Konrad Sztyber
f6339ffdb7 bdev: push bounce data only for successful IOs
The actual memory domain push already happened only for successfully
completed requests, but the code would still go through
_bdev_io_push_bounce_data_buffer(), which could cause issues for IOs
completed with NOMEM, because the bounce buffer would be released in
_bdev_io_complete_push_bounce_done().

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Id1af1e31cb416e91bf11101a5ce7919530245e1e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17866
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-02 18:48:27 +00:00
Konrad Sztyber
13b801bf37 bdev: use parent_io when executing sequence for split IOs
The sequence is associated with parent IO, so that's the IO that should
be used when executing a sequence.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ifcdb06094b38a5eaee1691e5aa8de1c8dc9d01a6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17865
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-02 18:48:27 +00:00
Konrad Sztyber
f20fbfe65b bdev: move pulling md_buf to a function
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I935983a14bedc386ffe31abacc8fa200cd79f750
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17675
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-05-02 18:48:27 +00:00
Konrad Sztyber
72a6fff8bb bdev: move pulling data to bounce buffer to a function
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Idbabcd5bd812cede6f5159ba0691b2dc28a4022a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17674
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-02 18:48:27 +00:00
Konrad Sztyber
eb8f9bbc99 bdev: move resubmitting nomem IOs to a function
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I9f91af30ee1dd5f2568d9f76a30f00497aff6bbc
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17673
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-05-02 18:48:27 +00:00
Jim Harris
62c399ffaf test: clarify GPT related comment in blockdev.sh
After we create the GPT, we change the partition type
GUID to the associated SPDK value.  The current
comment just says "change the GUID" which is
ambiguous because there are multiple GUIDs associated
with each partition.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Id821c5c5bbd7a72d84d5ddf4d91d633307f2235b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17855
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-05-02 18:46:00 +00:00
Jim Harris
2e56512236 nvmf: fix comparison in nvmf_stop_listen_disconnect_qpairs
This function disconnects any qpairs that match both
the listen trid and the subsystem pointer.  If the
specified subsystem is NULL, it will just disconnect
all qpairs matching the listen trid.

But there are cases where a qpair doesn't yet have an
associated subsystem - for example, before a CONNECT
is received.

Currently we would always disconnect such a qpair, even
if a subsystem pointer is passed.  Presumably this check
was added to ensure we don't dereference qpair->ctrlr
when it is NULL but it was added incorrectly.

Also while here, move and improve the comment about
skipping qpairs.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I8b7988b22799de2a069be692f4a5b4da59c2bad4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17854
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
2023-05-02 18:45:32 +00:00
Konrad Sztyber
4c7b504620 accel_perf: add shutdown callback
Otherwise, it's impossible to stop the app before its run time expires,
because the accel library waits until its IO channels are released which
would only happen at the end.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I7438b474f4f6d6bcb4bf6aad02ccae9f511f1b51
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17768
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-05-02 18:44:12 +00:00
Jim Harris
7c30df4ece usdt: add SPDK_DTRACE_PROBE variants that don't collect ticks
While userspace probes have a high overhead when enabled due
to the trap, it is still cleaner and slightly more efficient
to not have all of the SPDK_DTRACE_PROBE macros implicitly
capture the tsc counter as an argument.

So rename the existing SPDK_DTRACE_PROBE macros to
SPDK_DTRACE_PROBE_TICKS, and create new SPDK_DTRACE_PROBE
macros without the implicit ticks argument.

Note this does cause slight breakage if there is any
out-of-tree code that was using SPDK_DTRACE_PROBE previously;
programs written against those probes would need to
adjust their arguments.  But the likelihood of such code
existing is practically nil, so I'm just renaming the
macros to their ideal state.

All of the nvmf SPDK_DTRACE_PROBE calls are changed to
use the new _TICKS variants.  The event one is left
without _TICKS - we have no in-tree scripts that use
the tsc for that event.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Icb965b7b8f13c23d671263326029acb88c82d9df
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17669
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
2023-05-02 18:43:44 +00:00
Alexey Marchuk
4045068a32 lib/nvmf: Defer port removal while qpairs exist in poll group
The following heap-use-after-free may happen when RDMA listener
is removed:
1. At least 2 listeners exist, at least 1 qpair is created
on each listening port
2. Listener A is removed, in nvmf_stop_listen_disconnect_qpairs
we iterate all qpairs (let's say A1 and B1) and check if the qpair's
source trid matches the listener's trid by calling
nvmf_transport_qpair_get_listen_trid. The trid is retrieved from
qpair->listen_id, which points to the listener A cmid. Assume that
qpair A1's trid matches; A1 starts the disconnect process
3. After iterating all qpairs on step 2 we switch to the next
IO channel and then complete port removal on RDMA transport
layer where we destroy cmid of the listener A
4. Qpair A1 still has IO submitted to bdev, destruction is postponed
5. Listener B is removed, in nvmf_stop_listen_disconnect_qpairs
we iterate all qpairs (A1 and B1) and try to check A1's listen trid.
But listener A is already destroyed, so the RDMA qpair->listen_id points
to a freed memory chunk

To fix this issue, nvmf_stop_listen_disconnect_qpairs was modified
to ensure that no qpairs with listen_trid == removed_trid exist
before destroying the listener.

Fixes issue #2948

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: Iba263981ff02726f0c850bea90264118289e500c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17287
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-02 18:42:44 +00:00
Ankit Kumar
86bc8a5f13 nvme/fio_plugin: add fdp support to fio plugin
This adds support for FDP devices described by TP4146.

spdk_fio_fdp_fetch_ruhs() fetches the reclaim unit handle
descriptors, used by fio for placement identifiers. This function
also informs fio whether the device has FDP capability or not.

spdk_fio_queue() has been modified to submit writes with
extended IO arguments. This can only work if sgl is enabled.

Note, a guard FIO_HAS_FDP checks for the required io-engine ops
version.

Change-Id: I91d0d02d3147357a66a831ef9fb82e6b7250be3d
Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17605
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-02 18:39:34 +00:00
Mike Gerdts
76f4b77726 lvol: esnap clones must end on cluster boundary
When regular lvols are created, their size is rounded up to the next
cluster boundary. This is not acceptable for esnap clones, as this means
that the clone may be silently grown larger than the external snapshot. This
can cause a variety of problems for the consumer of an esnap clone lvol.

While the better long-term solution is to allow lvol sizes to fall on
any block boundary, the implementation of that needs to be surprisingly
complex to support creation and deletion of snapshots and clones of
esnap clones, inflation, and backward compatibility.

For now, it is best to put in a restriction on the esnap clone size
during creation so as to not hit problems long after creation. Since
lvols are generally expected to be large relative to the cluster size,
it is somewhat unlikely that this restriction will be a significant
limitation.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Id7a628f852a40c8ec2b7146504183943d723deba
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17607
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-02 18:32:19 +00:00
Mike Gerdts
54d4e7a631 vbdev_lvol: esnap memdomain support
Return the total number of memory domains supported by the blobstore and
any external snapshot bdev.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I2f8afba6b31e689b8f942e2cf36906a0a30f38c8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16430
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-05-02 18:32:19 +00:00
Mateusz Kozlowski
ca0c4dcde8 lib/ftl: Give correct type for seq_id variables/return types
Change-Id: I7d2fd31620481cf66f5f4400e6de4fc736ee3dad
Signed-off-by: Mateusz Kozlowski <mateusz.kozlowski@solidigm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17608
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-28 09:48:18 +00:00
Anil Veerabhadrappa
831773b220 nvmf/fc: delegate memory object free to LLD
The 'args' object in nvmf_fc_adm_evnt_i_t_delete() is actually allocated in
the FC LLD driver and passed to nvmf/fc in the nvmf_fc_main_enqueue_event() call.
So this object should be freed in the LLD's callback function.

Change-Id: I04eb0510ad7dd4bef53fc4e0f299f7226b303748
Signed-off-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17836
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-28 09:08:48 +00:00
Michal Berger
abf212ea6a test/spdkcli: Wait until spdkcli_clear_config has settled
Similarly to https://review.spdk.io/gerrit/c/spdk/spdk/+/17559, this
part may also be affected by:

Error in cmd: load_config /root/spdk/test/spdkcli/config.json (
load_config /root/spdk/test/spdkcli/config.json
request:
{
  "action_on_timeout": "none",
  "timeout_us": 0,
  "timeout_admin_us": 0,
  "keep_alive_timeout_ms": 10000,
  "transport_retry_count": 4,
  "arbitration_burst": 0,
  "low_priority_weight": 0,
  "medium_priority_weight": 0,
  "high_priority_weight": 0,
  "nvme_adminq_poll_period_us": 10000,
  "nvme_ioq_poll_period_us": 0,
  "io_queue_requests": 1024,
  "delay_cmd_submit": true,
  "bdev_retry_count": 3,
  "transport_ack_timeout": 0,
  "ctrlr_loss_timeout_sec": 0,
  "reconnect_delay_sec": 0,
  "fast_io_fail_timeout_sec": 0,
  "generate_uuids": false,
  "transport_tos": 0,
  "io_path_stat": false,
  "method": "bdev_nvme_set_options",
  "req_id": 31
}
Got JSON-RPC error response
response:
{
  "code": -1,
  "message": "RPC not permitted with nvme controllers already attached"
}

so make sure the nvme controller was fully detached.

Change-Id: Iaebed7b640d96fbabfc694dfebc5a725902caad2
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17850
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-28 09:05:36 +00:00
Michal Berger
18dfb63389 test/nvme: Lock FDP test to FDP-capable nvme only
Change-Id: I199394c54914c99448153134ceb6e67a9c003f94
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17773
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-28 09:03:36 +00:00
Michal Berger
258b7fbff2 test/nvme: Add helper functions to detect FDP-capable nvme
Change-Id: I817ffbfcb3bca154cad86ca70465a923610cbabb
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17772
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-28 09:03:36 +00:00
Michal Berger
29ba5b1b43 scripts/vagrant: Add support for configuring FDP per nvme
Change-Id: Id647a02b82f7ede25496bbbbc420ef7d13d8f9af
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17771
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-28 09:03:36 +00:00
Michal Berger
8b8a7a80f6 pkgdep/git: Bump QEMU to latest 8.0.0 release
Needed for testing latest FDP feature.

Change-Id: I83a0b46c716d6658efa4f2723c4c40e617f40cf7
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17770
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-28 09:03:36 +00:00
Michal Berger
e9d44aa39b scripts/vagrant: Remove support for spdk-5.0.0 fork ns config
But leave the shortcut for configuring nvme with a single namespace.

Change-Id: I0e5745db481b24ab813ec1e98426d709cde216fd
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17769
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
2023-04-28 09:03:36 +00:00
Michal Berger
fa3f818b4e tests: Skip block devices marked as hidden
These devices don't come with their major:minor dev numbers, hence they won't
pop up under /dev, i.e. they are not really usable.

Change-Id: I49b39ccbedcdd1bfe37964819e15b769af22cab6
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17774
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-28 09:03:36 +00:00
Ankit Kumar
c976353be8 test/nvme: add test application to verify fdp functionality
TP4146 introduced support for flexible data placement, which is
a data placement directive.

This application will test the new I/O management commands,
write with directives, log pages and set/get features.

Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com>
Change-Id: I2d68625d9a180afb5a6e85e59738c2713ce965a8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16521
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
2023-04-28 09:03:36 +00:00
Richael Zhuang
df4600f4c9 nvme_tcp: fix memory leak when resetting controller
In the failover test, a memory leak of tqpair->stats is reported when
detaching a TCP controller and failing over to the other controller.

During a controller reset, we disconnect the controller first and then
reconnect. When disconnecting, the adminq is not freed, which means the
corresponding tqpair and tqpair->stats are not freed either. But when
reconnecting, nvme_tcp_ctrlr_connect_qpair allocates memory for
tqpair->stats again, which causes the memory leak.

So this patch fixes the bug by not reallocating memory for
tqpair->stats if it's not NULL. We keep the old stats because, from the
user's perspective, the spdk_nvme_qpair is the same one.

Besides, when destroying a qpair, qpair->poll_group is set to NULL,
which means that if qpair->poll_group is not NULL, it must be a new
qpair. So there's no need to check whether stats is NULL when
qpair->poll_group is not NULL; adjust the if...else... in
_nvme_pcie_ctrlr_create_io_qpair accordingly.

Change-Id: I4108a980aeffe4797e5bca5b1a8ea89f7457162b
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17718
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-27 11:00:03 +00:00
Ziye Yang
cb97b86081 env_dpdk: optimizing spdk_call_unaffinitized
Purpose: Reduce unnecessary affinity setting.

For some use cases, the app will not use the SPDK
framework and already calls spdk_unaffinitize_thread
after calling spdk_env_init().

Change-Id: I5fa8349913c4567ab63c5a01271e7b2755e53257
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17720
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
2023-04-27 09:38:49 +00:00
Ziye Yang
c441e27023 bdev/rbd: Do not submit IOs through thread sending.
Currently, we send IOs to the main_td thread.
This is not needed, because all the read/write functions
provided by librbd are thread safe, so we can eliminate the
thread send messaging policy for the read/write related functions.

And with this patch, users can observe the load-balance distribution
of I/Os on each CPU core owned by SPDK applications through the
spdk_top tool.

In this patch, we did the following work:

1. Move rbd_open to bdev creation, since it only needs to be done once.
2. Simplify the channel management.
3. Do not use thread send messaging for the read/write I/Os.

According to the experiment results shown in
https://github.com/spdk/spdk/issues/2204, there is more than a 15%
IOPS improvement for different write I/O patterns, and the patch also
addresses the I/O load-balance issues.

Fixes issue: #2204

Change-Id: I9d2851c3d772261c131f9678f4b1bf722328aabb
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17644
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-27 09:35:04 +00:00
Anil Veerabhadrappa
3a5ebfb06d ut/fc: Cleanup transport cleanup tests
This change fixes the following assert in FC UT,

   transport.c:329: spdk_nvmf_transport_create: Assertion `nvmf_get_transport_ops(transport_name) && nvmf_get_transport_ops(transport_name)->create' failed.

Change-Id: I57a9c6e83f07656b207d74bbadeb82e5efb5fa32
Signed-off-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17717
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-27 09:33:39 +00:00
Anil Veerabhadrappa
665e3805f8 ut/fc: add missing spdk_mempool_lookup stub
This patch fixes the following error:
        fc_ut.o: In function `nvmf_transport_create_async_done':
        spdk/lib/nvmf/transport.c:203: undefined reference to `spdk_mempool_lookup'
        collect2: error: ld returned 1 exit status

Change-Id: I6e81a8d62cfcc70bed6efe6ac807739d77ef89aa
Signed-off-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17716
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-27 09:33:39 +00:00
Krzysztof Karas
3edc534216 vhost_blk: make sure to_blk_dev() return value is not NULL
Assert that the pointer returned by to_blk_dev() is not NULL
before dereferencing it.

Change-Id: I15adeac0926f23f84fdb3af88fc15ac07c580d91
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17536
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
2023-04-27 09:30:42 +00:00
Krzysztof Karas
50e3b7bf31 nvme_transport: return NULL if transport does not exist
spdk_nvme_ctrlr_get_registers() calls nvme_get_transport() to get a
reference to the transport whose registers should be returned, but
nvme_get_transport() explicitly returns NULL if the transport does
not exist. This would result in dereferencing a NULL pointer on
line 862.

To remedy that, return NULL if no transport was found.

Additionally, change "THis" to "This" on line 46.
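The shape of the fix, as a minimal standalone sketch (simplified, hypothetical types rather than the real nvme_transport structures):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for the transport; illustrative only. */
struct transport { const void *registers; };

/* Mirrors the fixed lookup path: if the transport lookup returned
 * NULL, propagate NULL to the caller instead of dereferencing it. */
static const void *get_registers(const struct transport *t)
{
	if (t == NULL) {
		return NULL; /* transport does not exist */
	}
	return t->registers;
}
```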

Change-Id: I3944925659991e9424e2177b5c940b2e2626d1f4
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17532
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-27 09:29:59 +00:00
Jim Harris
4fafd3fe65 test/nvmf: fully re-enable host/timeout.sh
Now that the bug with the remove_listener path has been
fixed, we can re-enable this part of the test.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I249011b20ffe468ed499766e4333e7bf9007a962
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17797
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: <sebastian.brzezinka@intel.com>
2023-04-27 09:28:38 +00:00
Jim Harris
baf250e5e4 nvmf: initialize trid param in get_***_trid paths
When removing a listener, for example with
nvmf_subsystem_remove_listener RPC, we use the concept of a
"listen trid" to determine which existing connections
should be disconnected.

This listen trid has the trtype, adrfam, traddr and trsvcid
defined, but *not* the subnqn.  We use the subsystem pointer
itself to match the subsystem.

nvmf_stop_listen_disconnect_qpairs gets the listen trid
for each qpair, compares it to the trid passed by the
RPC, and if it matches, then it compares the subsystem
pointers and will disconnect the qpair if it matches.

The problem is that the spdk_nvmf_qpair_get_listen_trid
path does not initialize the subnqn to an empty string,
and in this case the caller does not initialize it either.
So sometimes the subnqn on the stack used to get the
qpair's listen trid ends up with some garbage as the subnqn
string, which causes the transport_id_compare to fail, and
then the qpair won't get disconnected even if the other
trid fields and subsystem pointers match.

For the failover.sh test, this means that the qpair doesn't
get disconnected, so we never go down the reset path
on the initiator side and don't see the "Resetting" strings
expected in the log.

This similarly impacts the host/timeout.sh test, which is
also fixed by this patch.  There were multiple failing
signatures, all related to remove_listener not working
correctly due to this bug.

While the get_listen_trid path is the one that caused
these bugs, the get_local_trid and get_peer_trid paths
have similar problems, so they are similarly fixed in
this patch.
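The essence of the fix can be sketched like this (a simplified stand-in for spdk_nvme_transport_id with hypothetical field sizes):

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-in for spdk_nvme_transport_id. */
struct trid {
	char traddr[64];
	char subnqn[64];
};

/* Fixed get_*_trid path: zero the caller's trid first, so fields the
 * transport doesn't fill in (subnqn for a listen trid) compare as
 * empty strings rather than as leftover stack garbage. */
static void get_listen_trid(struct trid *out)
{
	memset(out, 0, sizeof(*out));
	strncpy(out->traddr, "192.168.0.1", sizeof(out->traddr) - 1);
	/* subnqn intentionally left empty for a listen trid */
}

/* Returns 0 when the trids match, non-zero otherwise. */
static int trid_compare(const struct trid *a, const struct trid *b)
{
	return strcmp(a->traddr, b->traddr) || strcmp(a->subnqn, b->subnqn);
}
```

Without the memset, a caller passing an uninitialized stack trid could see the compare fail on garbage in subnqn even when every filled-in field matches, which is the bug described above.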

Fixes issue #2862.
Fixes issue #2595.
Fixes issue #2865.
Fixes issue #2864.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I36eb519cd1f434d50eebf724ecd6dbc2528288c3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17788
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: <sebastian.brzezinka@intel.com>
2023-04-27 09:24:18 +00:00
Mike Gerdts
c0ea96cf5e vbdev_lvol: allow degraded lvols to be deleted
An esnap clone is now deletable when its external snapshot is missing.
Likewise, the tree of degraded lvols rooted at a degraded esnap clone
can also be deleted, subject to the normal restrictions.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I711ae25d57f5625a955d1f4cdb2839dd0a6cb095
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17549
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
5b250c0836 vbdev_lvol: load esnaps via examine_config
This introduces an examine_config callback that triggers hotplug of
missing esnap devices.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I5ced2ff26bfd393d2df4fd4718700be30eb48063
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16626
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
5e79e84e78 include: add libgen.h to stdinc.h
A subsequent patch will need to use dirname(3), declared in libgen.h.
Because libgen.h is a POSIX header, the SPDK build requires that it be
included via spdk/stdinc.h, not in the file that needs it.

libgen.h also declares basename() which has a conflicting declaration in
string.h. A small change is required in bdev_uring_read_sysfs_attr() to
accommodate this.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ib4ded2097881668aabdfd9f1683f933ce418db2e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17557
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
d453aaa360 vbdev_lvol: degraded open of esnap clones
If an esnap clone is missing its snapshot, the lvol should still open in
degraded mode. A degraded lvol will not have a bdev registered and as
such cannot perform any IO.

Change-Id: I736194650dfcf1eb78214c8896c31acc7a946b54
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16425
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
a045d8d2fc vbdev_lvol: early return in _vbdev_lvs_remove
This replaces nested if statements with equivalent logic that uses
early returns. Now the code fits in 100 columns and will allow the next
patch in this series to avoid adding a fifth level of indentation.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ief74d9fd166b2fe1042c78e12fe79d5f325aa502
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17548
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
f3c14b8dee vbdev_lvol: add bdev_lvol_get_lvols RPC
This provides information about logical volumes without providing
information about the bdevs. It is useful for listing the lvols
associated with specific lvol stores and for listing lvols that are
degraded and have no associated bdev.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I795161ac88d9707831d9fcd2079635c7e46ecc42
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17547
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
a67e0eb37e vbdev_lvol: external snapshot rpc interface
Add RPC interfaces for creation of esnap clone lvols. This also
exercises esnap clone creation and various operations involving
snapshots and clones of esnap clones to ensure that bdev_get_bdevs
reports state correctly.

Change-Id: Ib87d01026ef6e45203c4d9451759885a7be02d87
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14978
Reviewed-by: Michal Berger <michal.berger@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
3f52d2659e test/common: allow tests to use set -u
Now autotest_common.sh is tolerant of tests that use "set -u" so that
they quickly generate useful errors when variables are used but not set.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I5d7709f3029fa8f52affecf68a4b9da97a84589d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16703
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
9b8f2ef354 test/lvol: test esnap clones with real bdevs
This adds test/lvol/esnap for functionally testing lvol esnap clone
bdevs without RPCs or reactors.

Change-Id: If62b1bde2b19343af51ba4c11599623556484b0d
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16705
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
54b4f4dd4b vbdev_lvol: allow creation of esnap clones
This adds the ability to create esnap clone lvol bdevs.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ifeef983430153d84d896d282fe914c6671283762
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16590
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
0c31b86a6f vbdev_lvol: create esnap blobstore device
Register an spdk_bs_esnap_dev_create callback when initializing or
loading an lvstore. This is the first of several commits required to
enable lvol bdevs to support external snapshots and esnap clones.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I35c4e61fdbe5b93d65b9374e0ad91cb7fb94d1f4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16589
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
0cea6b57f6 lvol: add spdk_lvol_get_by_* API
spdk_lvol_get_by_uuid() allows lookup of lvols by the lvol's uuid.

spdk_lvol_get_by_names() allows lookup of lvols by the lvol's lvstore
name and lvol name.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Id165a3d17b76e5dde0616091dee5dff8327f44d0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17546
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
b7d84562cb lvol: add spdk_lvol_iter_immediate_clones()
Add an iterator that calls a callback for each clone of a snapshot
volume. This follows the typical pattern of stopping iteration when the
callback returns non-zero.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: If88ad769b72a19ba0993303e89da107db8a6adfc
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17545
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
30399f312c lvol_ut: test esnap hotplug
This exercises spdk_lvs_esnap_notify_hotplug() under a variety of happy
and not-so-happy paths.

Change-Id: I1f4101a082b113dacc7d03f81ca16069acfb458d
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17602
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
712f9aa452 lvol: hotplug of missing esnaps
This introduces spdk_lvs_notify_hotplug() to trigger the lvstore to call
the appropriate lvstore's esnap_bs_dev_create() callback for each esnap
clone lvol that is missing the device identified by esnap_id.

Change-Id: I0e2eb26375c62043b0f895197b24d6e056905aa2
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16428
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
f2dbb50516 lvol: keep track of missing external snapshots
If an lvol is opened in degraded mode, keep track of the missing esnap
IDs and which lvols need them. A future commit will make use of this
information to bring lvols out of degraded mode when their external
snapshot device appears.

Change-Id: I55c16ad042a73e46e225369bfff2631958a2ed46
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16427
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
87666f5286 blob: esnap clones are not clones
spdk_blob_is_clone() should return true only for normal clones. To
detect esnap clones, use spdk_blob_is_esnap_clone(). This also clarifies
documentation of spdk_blob_is_esnap_clone() to match the implementation.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I9993ab60c1a097531a46fb6760124a632f6857cd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17544
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
8b3dcd6191 blob: add is_degraded() to spdk_blob_bs_dev
The health of clones of esnap clones depends on the health of the esnap
clone. This allows recursion through a chain of clones so that degraded
state propagates up from any back_bs_dev that is degraded.
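The propagation rule reads roughly like this (a sketch with simplified stand-in types, written iteratively; not the real spdk_blob_bs_dev API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-in: each device points at the back_bs_dev of the
 * blob it was cloned from; NULL marks the bottom of the chain. */
struct bs_dev {
	bool degraded;
	struct bs_dev *back;
};

/* A device is degraded if it, or anything beneath it in the chain of
 * clones, is degraded. */
static bool is_degraded(const struct bs_dev *dev)
{
	for (; dev != NULL; dev = dev->back) {
		if (dev->degraded) {
			return true;
		}
	}
	return false;
}
```

So a clone of a clone of a degraded esnap clone reports degraded, and recovers once the esnap at the bottom does.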

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Iadd879d589f6ce4d0b654945db065d304b0c8357
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17517
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
09bf2b2092 blob: add spdk_blob_is_degraded()
In preparation for supporting degraded lvols, spdk_blob_is_degraded() is
added. To support this, bs_dev gains an optional is_degraded() callback.
spdk_blob_is_degraded() returns false so long as no bs_dev that the blob
depends on is degraded. Depended upon bs_devs include the blobstore's
device and the blob's back_bs_dev.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ib02227f5735b00038ed30923813e1d5b57deb1ab
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17516
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-26 17:32:13 +00:00
Mike Gerdts
1db33a8f74 blob: add spdk_blob_get_esnap_bs_dev()
While getting memory domains, vbdev_lvol will need to be able to access
the bdev that acts as the lvol's external snapshot. The introduction of
spdk_blob_get_esnap_bs_dev() facilitates this access.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I604c957a468392d40b824c3d2afb00cbfe89cd21
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16429
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-26 17:32:13 +00:00
Konrad Sztyber
e3babb2be1 accel_perf: use accel stats when dumping results
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Iae1128ce01c16731bced8f97c08f44e1b0bc83f2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17626
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-26 11:15:40 +00:00
Konrad Sztyber
55d6cc0eae accel: add method for getting per-channel opcode stats
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ic3cc0ddc5907e113b6d9d752c9bff0f526458a11
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17625
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
2023-04-26 11:15:40 +00:00
Konrad Sztyber
d7b29fb9d5 accel: collect stats on the number of processed bytes
For operations that have differently sized input/output buffers (e.g.
compress, decompress), the size of the src buffer is recorded.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I1ee47a2e678ac1b5172ad3d8da6ab548e1aa3631
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17624
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-26 11:15:40 +00:00
Konrad Sztyber
7c621ff206 accel: specify number of events when updating stats
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I5b611c8978b581ac504b033e1f335a2e10a9315b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17623
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-26 11:15:40 +00:00
Konrad Sztyber
0de931dc6b accel: move accel_get_iovlen() up
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I6117057a1e3812386a0fb7a10e07978415a48261
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17622
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
2023-04-26 11:15:40 +00:00
Konrad Sztyber
9a377ecb22 accel: append support for crc32c
It is now possible to append an operation calculating crc32c to an accel
sequence.  A crc32c operation needs special care when it's part of a
sequence, because it doesn't have a destination buffer.  It means that
we can remove copy operations following crc32c only when it's possible
to change the dst buffer of the operation preceding crc32c.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I29204ce52d635162d2202136609f8f8f33db312d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17427
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-26 11:15:40 +00:00
Konrad Sztyber
2b1ad70c4c accel: check operation type in accel_task_set_dstbuf()
This will reduce the amount of changes in the following patch which
makes this function recursive.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: If8da6ae52d78358b66b2d9303413a9723687a767
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17568
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-26 11:15:40 +00:00
Mike Gerdts
b0c93eb3fb accel: destroy g_stats_lock during finish
g_stats_lock is an spdk_spin_lock that is initialized as the module is
loading. With this change, it is destroyed as the module finishes.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I5263547f6d0e8981765d59665bd826cf07a6f83e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17681
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-26 11:06:02 +00:00
Konrad Sztyber
bade2d8db5 accel: delay finish until all IO channels are released
This ensures that there are no more outstanding operations, so we can
safely free any global resources.

Fixes #2987

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Iac423b4f2a1183278d1db20f96c1a3b1bb657f85
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17767
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-26 11:06:02 +00:00
Jim Harris
e407385e03 env_dpdk: add ERRLOGs to help debug issue #2983
Issue #2983 shows a case where we seem to get a
device remove notification from DPDK (via vfio
path) after the device has already been explicitly detached
by SPDK.

This issue has proven difficult to reproduce
outside of the one observed failure so far, so
this patch adds a couple of ERRLOGs in this path to help
confirm this theory should it happen again.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I0fda4229fe150ca17417b227e8587cd7fbda6692
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17631
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-25 16:54:59 +00:00
Michal Berger
aadd13f444 scripts/pkgdep: Add support for rocky|centos 9
Also, shuffle the DAOS pieces a bit to keep repo handling in one
place, and switch the ceph repo to an actively supported release,
common and available for both centos|rocky 8|9 (i.e. pacific).

Change-Id: Idb19e4a5ff80770c7d6f9e6db85f983e163958e6
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17661
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
2023-04-25 11:26:35 +00:00
Pawel Piatek
64c27c8dcc scripts/vagrant: upload optional directories
Sometimes we need to copy additional directories with
sources into the VM. Currently, two cases are known:
- spdk-abi
- dpdk (for CI vs-dpdk jobs)

Signed-off-by: Pawel Piatek <pawelx.piatek@intel.com>
Change-Id: I242838364d649b29a5a9dc720c6920493b061fa7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17645
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-25 10:13:18 +00:00
Denis Barakhtanov
b16a4c22c4 bdev/daos: using SPDK_CONTAINEROF instead of container_of
The DAOS bdev was implicitly expecting `container_of` to be in
daos_event.h, but with the upcoming DAOS release the location of
`container_of` has changed. `SPDK_CONTAINEROF` is now used in the
module instead.

Signed-off-by: Denis Barakhtanov <denis.barahtanov@croit.io>
Change-Id: Ia88365322fef378af6b1708b8704827bca1b828d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17719
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-25 10:10:48 +00:00
Karol Latecki
4870695014 test/vhost: increase memory in virtio tests
Increase the memory for SPDK virtio initiator
processes using the "-s" option.

See https://review.spdk.io/gerrit/c/spdk/spdk/+/17371
22fa84f77a

Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Change-Id: I2f425cb547e72e1ac6748e777158427dcf57b9f0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17662
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-24 09:33:32 +00:00
Richael Zhuang
953b74b9b0 bdev_nvme: fix heap-use-after-free when detaching controller
There is a heap-use-after-free error when detaching a controller with
the "io_path_stat" option set to true. (If SPDK is built without
ASan/UBSan, the error is free(): corrupted unsorted chunks.)

It's because io_path is accessed in bdev_nvme_io_complete_nvme_status
after the io_path is freed.

io_path is freed when we detach the controller in
_bdev_nvme_delete_io_path, which executes steps 1 and 2 below.
Before step 4 is executed, step 3 may run, which accesses io_path.

1. spdk_put_io_channel() is called; bdev_nvme_destroy_ctrlr_channel_cb
   has not been called yet.
2. free(io_path->stat); free(io_path);
3. bdev_nvme_poll runs; nbdev_io1 succeeds;
   bdev_nvme_io_complete_nvme_status() accesses nbdev_io1->io_path.
4. bdev_nvme_destroy_ctrlr_channel_cb disconnects the qpair and aborts
   nbdev_io1.

This patch fixes the issue by moving step 2 after step 4: io_path is
no longer freed in _bdev_nvme_delete_io_path, but just removed from
nbdev_ch->io_path_list.
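The unlink-now, free-later ordering can be sketched like this (a standalone sketch with simplified, hypothetical types rather than the real bdev_nvme structures):

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-in for an io_path list entry. */
struct io_path {
	struct io_path *next;
	int in_list;
};

/* Delete path: only unlink the io_path, so in-flight completions can
 * still dereference it safely. Freeing here was the bug. */
static void delete_io_path(struct io_path **head, struct io_path *path)
{
	for (struct io_path **pp = head; *pp != NULL; pp = &(*pp)->next) {
		if (*pp == path) {
			*pp = path->next;
			path->in_list = 0;
			break;
		}
	}
}

/* Channel-destroy callback: runs after the qpair is disconnected and
 * all outstanding IOs are aborted, so freeing is now safe. */
static void destroy_channel_cb(struct io_path *path)
{
	free(path);
}
```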

The steps to reproduce the error:
target: run nvmf_tgt
initiator: (build spdk with asan,ubsan enabled)
sudo ./build/examples/bdevperf --json bdevperf-multipath-rdma-active-active.json  -r tmp.sock -q 128 -o 4096  -w randrw -M 50 -t 120
sudo ./scripts/rpc.py -s tmp.sock  bdev_nvme_detach_controller -t rdma -a 10.10.10.10 -f IPv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 NVMe0

========
bdevperf-multipath-rdma-active-active.json

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "NVMe0",
            "trtype": "tcp",
            "traddr": "10.169.204.201",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:init",
            "adrfam": "IPv4"
          }
        },
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "NVMe0",
            "trtype": "rdma",
            "traddr": "10.10.10.10",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:init",
            "adrfam": "IPv4",
            "multipath": "multipath"
          }
        },
        {
          "method": "bdev_nvme_set_multipath_policy",
          "params": {
            "name": "NVMe0n1",
            "policy": "active_active"
          }
        },
        {
          "method": "bdev_nvme_set_options",
          "params": {
            "io_path_stat": true
          }
        }
      ]
    }
  ]
}
======

Change-Id: I8f4f9dc7195f49992a5ba9798613b64d44266e5e
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17581
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-24 09:20:33 +00:00
Ben Walker
e351b19055 sock/posix: Fix sendmsg_idx rollover for zcopy
If the idx gets to UINT32_MAX, we need to ensure it doesn't wrap
around before we check if we're done iterating.
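A wrap-safe way to measure such a sequence range, as a sketch (the helper name and framing are hypothetical, not the actual posix sock code):

```c
#include <assert.h>
#include <stdint.h>

/* Count how many sendmsg completions a MSG_ZEROCOPY notification
 * covers when the inclusive [lo, hi] sequence range may straddle the
 * UINT32_MAX -> 0 rollover. Using the unsigned distance (hi - lo),
 * which is well-defined modulo 2^32, avoids comparing idx <= hi,
 * which breaks at the wraparound. */
static uint32_t zcopy_range_count(uint32_t lo, uint32_t hi)
{
	return hi - lo + 1;
}
```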

Fixes #2892

Change-Id: I2c57ed2a6f6eda16e2d1faa63e587dca0b380a17
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17687
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-24 09:00:35 +00:00
Jim Harris
1922700ea7 test/unit: disable sock unit tests on FreeBSD
There are several failing signatures observed as
part of issue #2943.  So disable the unit tests for
now until they are debugged.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iae54f8bfcd7883c02152abee37410a998da81dd7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17573
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
2023-04-24 08:30:10 +00:00
Ben Walker
fb37b8d941 idxd: In perf tool, correctly pass fill pattern as a uint64_t
The pattern is 64 bits but we were only passing in 8.

Fixes #2821

Change-Id: I4a4c3f7c18bcb610df9c37edee549255f93f2632
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17686
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-24 08:29:32 +00:00
Sebastian Brzezinka
737667e155 lib/env_ocf: place allocator variable on hugepages
When using `__lsan_do_recoverable_leak_check` (e.g. when fuzzing) to
check for leaks at runtime, LeakSanitizer cannot follow references to
memory that is allocated on the heap (e.g. via calloc) and then stored
on a hugepage, causing lsan to incorrectly report a direct leak.

Fixes #2967

Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: I3511e117a07ca8daa96f19bf1437c0d788b64cb1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17682
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Amir Haroush <amir.haroush@huawei.com>
2023-04-21 23:49:28 +00:00
Shuhei Matsumoto
26b9be752b bdev/nvme: Add max_bdevs parameter for attach_controller RPC
The target subsystem may expose more than 128 namespaces. To support
such a subsystem, add a new parameter, max_bdevs, for the
bdev_nvme_attach_controller RPC.

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I8fab20b9c4d52818205e05de6a31dbe0d31a10fe
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17651
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-20 11:33:14 +00:00
Shuhei Matsumoto
f0a2538c04 bdev/nvme: Alloc bdev name array dynamically for attach_controller RPC
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I9c1822421563210f6a656553355e29e75c8b0c21
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17650
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-20 11:33:14 +00:00
Shuhei Matsumoto
d33d418742 bdev/nvme: Aggregate req and ctx free for attach_controller RPC
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Iba2091f67a97a59ecad7f0c853491d9cfcad736d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17649
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-20 11:33:14 +00:00
Jim Harris
0ca5304550 examples/nvme/perf: increase opts.num_io_queues when needed
By default we specify 1024 max_io_queues per controller.
But it's possible we need more for high connection count
use cases (i.e. -c 0xFF -P 512 which is 8 * 512 = 4096).
So dynamically configure opts.num_io_queues based on
the corresponding values.

Note: we have to change a couple of globals from int to
uint32_t to avoid signed v. unsigned comparison warnings.
Let's just do that in this patch instead of a separate
one.
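The warning class being avoided: in a mixed comparison the int operand is converted to uint32_t, so a negative value compares as a huge positive one. A minimal sketch with a hypothetical helper, not the perf tool's code:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Why the globals changed type: comparing int against uint32_t
 * converts the int to unsigned first (and -Wsign-compare warns). */
static bool
less_than_mixed(int a, uint32_t b)
{
	return (uint32_t)a < b;	/* with a == -1, (uint32_t)a == UINT32_MAX */
}
```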

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iba2d670c224a91e50377e622b154ce43eed94002
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17621
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
2023-04-20 10:56:42 +00:00
Jim Harris
982ae8f46c examples/nvme/perf: pick num_requests based on qpairs per ns
If we want to test something like 512 qpairs, with qd = 8 for
each, we need to specify -q 4096 -P 512.  Then those 4096
I/O are spread across the 512 qpairs, to get qd = 8
for each qpair.

But currently it ends up also allocating 4096 num_io_requests
for each qpair, which is a huge waste.  We need to instead
base the num_io_requests on the effective queue depth for
each of the qpairs.
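The sizing described above can be sketched as a round-up division (illustrative names, not the tool's actual variables):

```c
#include <assert.h>
#include <stdint.h>

/* Spread the total queue depth (-q) across the qpairs (-P) and size
 * num_io_requests per qpair from that share. Hypothetical helper. */
static uint32_t
io_requests_per_qpair(uint32_t total_queue_depth, uint32_t num_qpairs)
{
	/* round up so every qpair can sustain its share of the depth */
	return (total_queue_depth + num_qpairs - 1) / num_qpairs;
}
```

For the -q 4096 -P 512 example above this yields 8 requests per qpair instead of 4096.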

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I3ec0f4d9ab94388bf980c0b0439790847161ec12
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17620
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-20 10:56:42 +00:00
Jim Harris
672710c8fc nvme/tcp: increase timeout for async icreq response
This was arbitrarily picked as 2 seconds in commit
0e3dbd. But for extremely high connection count
use cases, such as nvme-perf with several cores
and high connection count per core, this 2 second
time window can get exceeded.

So increase this to 10 seconds, but only for qpairs
that are being connected asynchronously.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I906ca9e6561b778613c80b739a20bd72c807216c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17619
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
2023-04-20 10:56:42 +00:00
Jim Harris
46cfc0484f nvme: fix async_mode comment
async_mode is now supported on PCIe, RDMA and TCP
transports.  So remove the comment about it only
being supported on PCIe transport.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I059e226aa98e702c9caa2886a10ec1212b6f1ada
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17577
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-20 10:56:42 +00:00
Marcin Spiewak
73293d73eb ./configure: add 'detect' value to --max-lcores
This patch adds support for the 'detect' option in SPDK's
./configure, allowing DPDK to be configured to detect the
current number of cores during SPDK compilation.
This is done by providing --max-lcores=detect as
a parameter to ./configure, which triggers setting
of '-Dmax_lcores=detect' in DPDK_OPTS passed to
dpdkbuild/Makefile.
DPDK then detects the number of cores in the system
during compilation and sets RTE_MAX_LCORE to that
value. The Meson build system also prints a message
with the number of cores detected. E.g. for my system:
"
Message: Found 72 cores
"

Example usages:
1) use default value for RTE_MAX_LCORE:
	./configure
2) detect the core number:
	./configure --max-lcores=detect
3) Set RTE_MAX_LCORE to 256:
	./configure --max-lcores=256

Change-Id: I2103c2d917f210aee4d1ef43584b1bd40dbfe43b
Signed-off-by: Marcin Spiewak <marcin.spiewak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17555
Community-CI: Mellanox Build Bot
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-19 09:59:45 +00:00
Michal Berger
962671f711 test/vhost: Create wrapper around wipefs
Call sync each time, as an extra step, to make sure all the writes
on the underlying device completed. This is needed, as on occasion
parted (called right after wipefs) fails to create a partition
table, complaining that the target device (and its partitions) are
still in use.

Change-Id: I959d9b36a1588ec3754335995e3e8bc5057bfeb7
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17498
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
2023-04-19 09:52:25 +00:00
Marcin Spiewak
9ab5c8b67a lvol_ut: add test for invalid options
Add unit test for calling spdk_lvs_load_ext()/lvs_load()
with invalid options (opts_size is 0).

Change-Id: I9c48b972066cf977304e3efa936827d1ef1b5250
Signed-off-by: Marcin Spiewak <marcin.spiewak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17584
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-19 06:37:29 +00:00
Marcin Spiewak
324e3261e6 lib/lvol: lvs_load() shall return if options are invalid
The lvs_load() function verifies the options passed to it, but it
didn't return when they were invalid: only an error was logged and
the callback was called with -EINVAL. Now the function returns
after the error is reported.
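The bug class can be sketched as follows, with illustrative names rather than lvol's actual API - the error path reported -EINVAL through the callback but fell through into the load logic instead of returning:

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>

static int g_loads_started;	/* counts how often loading proceeded */
static int g_last_rc = 1;	/* last status delivered to the callback */

static void
record_rc(int rc)
{
	g_last_rc = rc;
}

/* Hypothetical load function: validate options, report errors through
 * the callback, and (the fix) stop executing after reporting one. */
static void
lvs_load_sketch(size_t opts_size, void (*cb_fn)(int rc))
{
	if (opts_size == 0) {
		cb_fn(-EINVAL);
		return;		/* previously missing: execution fell through */
	}
	g_loads_started++;	/* continue loading with valid options */
	cb_fn(0);
}
```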

Change-Id: I19b0b22466b6980345477f62084d27ef13414752
Signed-off-by: Marcin Spiewak <marcin.spiewak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17582
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
2023-04-19 06:37:29 +00:00
Konrad Sztyber
c5efdd55c2 accel: move merging dst buffer to a function
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I62b73f1802a9de35767b72c2cc4ee115e895c538
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17426
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
3e0e077939 accel: copy memory domain context when merging tasks
When changing src/dst buffers, we copied memory domain pointers, but we
didn't copy memory domain context, which is obviously incorrect.  It was
probably missed, because we never append a copy with non-NULL memory
domain.  Added a unit test case to verify this behavior.
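The fix can be sketched as follows; the struct and field names are illustrative, not accel's actual layout:

```c
#include <assert.h>
#include <stddef.h>

/* When a task's src/dst buffer is replaced during merging, both the
 * memory domain pointer AND its context must follow the buffer. */
struct buf_ref {
	void *addr;
	void *memory_domain;
	void *memory_domain_ctx;
};

static void
merge_copy_buf(struct buf_ref *dst, const struct buf_ref *src)
{
	dst->addr = src->addr;
	dst->memory_domain = src->memory_domain;
	dst->memory_domain_ctx = src->memory_domain_ctx;	/* previously missed */
}
```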

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ic174e0e72c33d3f437f0faddd3405638049f0c74
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17425
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
5d2d59be8d accel: move accel_module.h to include/spdk
This file should be external to enable out-of-tree accel modules.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I2e973d0e88d7145d0fc9714f56db48486b00f3b7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17419
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jacek Kalwas <jacek.kalwas@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
3824f6e39b bdev/crypto: complete IOs on ENOMEM from accel
spdk_bdev_queue_io_wait() can only be used when one of bdev submission
functions returns ENOMEM (i.e. there are no more spdk_bdev_ios on that
IO channel).  Using it in any other case, e.g. on spdk_accel_append_*()
returning ENOMEM, will most likely result in failure.  Therefore, to
avoid that, the IOs are completed with NOMEM status relying on the bdev
layer to retry them.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ie0f03496e5d3180c481815b3f1b021e74ae2f46d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17319
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
31bfcb45b7 accel: make number of tasks/seqs/bufs configurable
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I07ebf37ff31ddb888e68e98cf7b9b425c7a4d128
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17318
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
97ce07c261 bdev/malloc: report accel sequence support
This actually allows malloc bdev to chain multiple accel operations
together.  And, since the last operation will always be a copy, accel
should remove that copy by modifying previous operation's dst/src.

On my system, it improved bdevperf performance (single core, qd=4,
bs=128k, bdev_crypto on top of bdev_malloc, crypto_sw):

randread: 5668M/s -> 8201M/s
randwrite: 5148M/s -> 7856M/s

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I5b9173fa70a42ee56f56c496a34037d46d2f420f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17202
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
9276019077 bdev/malloc: report memory domain support
Because the copying is handled by accel, which will do push/pull when
necessary, we can report support for each registered memory domain.

Also, since verifying PI information would require doing a push/pull, we
don't report support for memory domains if bdev has DIF enabled.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Id80f82aaac68e9dec2a6cae81d96a460105161d6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17201
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
06fd87e4e9 bdev/malloc: use appends for write requests
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ief6c873a5f65274a25b67bc3f2811d8f3e4a33b3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17200
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
ad154aaef2 bdev/malloc: pass bdev_io to bdev_malloc_writev()
Same reason for the change as in bdev_malloc_readv().

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Id52d8639df6a488342346283c90f12a2ba6f5736
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17199
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
560164d8da bdev/malloc: use appends for read requests
This only changes the interface bdev_malloc uses for scheduling the copy
to appends, but it won't chain those copies to an existing sequence, as
bdev_malloc doesn't report support for accel sequences yet.  That will
be changed in one of the subsequent patches.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I6db2c79b15cb96a1b07c6cf5514004c76b9d2e92
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17198
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
7ce4205ab0 bdev/malloc: pass bdev_io to bdev_malloc_readv()
It reduces the size of the parameter list, which was already pretty
long, and will make it easier to use other bdev_io's fields (e.g. memory
domain, accel sequence).

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I43a9d3a7cbb77915c00879c43540c9ec725c52d2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17197
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
5d25e3fcc8 bdev/malloc: don't retry failed requests
If a request was marked as failed, we don't want to retry it, so we
shouldn't override its status with NOMEM.
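The rule can be sketched with illustrative enum values (not SPDK's actual status codes): only a still-successful status may be replaced with NOMEM, so an already-failed request keeps its failure and is not retried.

```c
#include <assert.h>

enum task_status { TASK_OK = 0, TASK_FAILED = 1, TASK_NOMEM = 2 };

/* Apply an ENOMEM submission result without clobbering real failures. */
static enum task_status
apply_enomem(enum task_status current)
{
	return (current == TASK_OK) ? TASK_NOMEM : current;
}
```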

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I85a522a7934d2d6f415620b9a323effefb91f299
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17196
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
d8f63f392a bdev/malloc: declare malloc task/disk variables
It gets rid of lots of casts to malloc_task/malloc_disk and makes the
code more readable.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Id50f0cbfa18adf5e7baafd58da03d290d6ba62c6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17195
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-19 06:36:20 +00:00
Konrad Sztyber
63524340a3 accel: make spdk_accel_sequence_finish() void
It always returns 0 and any errors are reported in the callback.  Making
it void simplifies error handling.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I0d4299a2789a688eae38d76de46d1baf27cbbd8f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17194
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
2023-04-19 06:36:20 +00:00
Konrad Sztyber
6060669e1a test/bdev: accel chaining test
This test sends several read/write requests and verifies the expected
number of accel operations have been executed.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Idda46ef00dc5bcc0a176d3dfb39f3f3861964741
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17193
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-19 06:36:20 +00:00
Michal Berger
fc0214d539 test/packaging: Export LD_LIBRARY_PATH, PKG_CONFIG_PATH setup
095f40630e missed the autobuild dependencies while enabling the rpm
test against the external DPDK build. Without it, DPDK is not able
to properly configure itself against ipsec and isa-l libs.

Change-Id: Ia4307f0d0f9c1f82f6f80ca06113a5289c2916ed
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17576
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Kamil Godzwon <kamilx.godzwon@intel.com>
2023-04-18 08:42:27 +00:00
Michal Berger
6732946d0a test/spdkcli: Include errors from a failed command
Dump it to stdout to make the debugging easier.

Change-Id: I9b13d0a77e45aa84ec2a55b7b982225592f2566d
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17560
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-18 08:42:09 +00:00
Michal Berger
162bf435cb test/spdkcli: Wait long enough for the nvme ctrl to be gone
Some nvmes need more time to attach|detach to|from, hence a
static sleep is not ideal depending on what type of nvme was
picked up for the test. Instead, simply wait until the list of
nvme ctrls is empty after the cleanup.

Change-Id: I2fc2630020436d0e1f6b01a5ce60aea56e7bf8ec
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17559
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
2023-04-18 08:42:09 +00:00
Jim Harris
0fa85da9c4 test: wait_for_examine and delay in reap_unregistered_poller.sh
We need to give the thread library some time to reap
the unregistered poller - it is a bit of a delayed
process.  We have to wait for examine to finish on
the aio bdev, then the poll group gets destroyed and
the pollers unregistered.  This marks the pollers as
UNREGISTERED, but they aren't actually reaped until
next time the thread is polled.

Fixes issue #2980.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I1e32c50ea9b28ea2d5560ddc9b2f68fa81e708d9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17575
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-18 04:57:02 +00:00
Artur Paszkiewicz
72672d4982 module/raid: specify memory domain support per raid module
Not all raid modules may support memory domains - raid5f currently does
not. Add a parameter to struct raid_bdev_module to specify that.

Change-Id: I3285c118db846d290837606b3f85ac4b5277de97
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17601
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
2023-04-17 09:36:34 +00:00
Marcin Spiewak
1a526000d0 libreduce: removing deprecation messages for pmem
The deprecation notice for pmem was removed, as libreduce will
still use it for as long as pmem is supported.

Change-Id: I7555dbf20a408a67fac8a6e7b2eaa23edf985eec
Signed-off-by: Marcin Spiewak <marcin.spiewak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17538
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
2023-04-17 09:36:07 +00:00
Konrad Sztyber
ee06693c3d accel: keep track of destroyed channels' statistics
To make sure we don't lose statistics of destroyed channels, they're now
added to a global stats structure when a channel is destroyed.
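The approach can be sketched as follows, with illustrative names rather than accel's actual fields: fold a channel's counters into a global structure at destroy time so the totals outlive the channel.

```c
#include <assert.h>
#include <stdint.h>

struct accel_stats {
	uint64_t executed;
	uint64_t failed;
};

static struct accel_stats g_total_stats;	/* survives channel teardown */

/* On channel destruction, accumulate its counters into the globals. */
static void
channel_destroy_sketch(const struct accel_stats *ch)
{
	g_total_stats.executed += ch->executed;
	g_total_stats.failed += ch->failed;
}
```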

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ic3b4d285b83267ac06fad1e83721c1b15cc8ec8a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17567
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-14 17:05:40 +00:00
Konrad Sztyber
f2459a2f26 accel: add accel_get_stats
The RPC allows the user to retrieve accel framework's statistics.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I5cd1b45686504c08eda50513ad1dae2f8d65013b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17191
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-14 17:05:40 +00:00
Konrad Sztyber
f135f5ff7f accel: collect statistics
This patch adds support for collecting statistics in accel framework.
Currently, it counts the following events:
 1. The number and the type of executed/failed operations.
 2. The number of executed/failed accel sequences.

For now, these statistics are only collected and there's no way of
retrieving (or resetting) them - that will be added in the following
patches.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Id211067eb810e7b7d30c756a01b35eb5019c57e7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17190
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-14 17:05:40 +00:00
Konrad Sztyber
f61e421b05 accel: extract submitting task to a function
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I7d24ab571fb3217917aee53276ccd3d13e1e76c4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17189
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-14 17:05:40 +00:00
Konrad Sztyber
688f7fb810 accel: add accel_set_options
It'll allow for setting accel-specific options.  For now, it makes the
size of iobuf caches configurable.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Iaf505cc5e98dc6411453d9964250a4ba22267d79
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17188
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-14 17:05:40 +00:00
Michal Berger
4f19ab4d4b scripts/qat_setup: Add support for dh895xcc devices
These can be found under CYP platform.

To that end, refactor qat_setup.sh so it can support devices based on
their dedicated driver rather than the specific device ID - this will
allow for easier addition of new devices in the future.

Also, configure the number of VFs based on the total number a given
ctrl supports - this is exactly what qat_service does while enabling
VFs.

Drop the warning about an old bug in the uio_pci_generic driver - in
practice we haven't hit it under CI for quite a long time.

Slap errexit on top to make sure we exit immediately when writing
to sysfs fails.

Last but not least, run blockdev_crypto_qat test without any top
condition - if qat_setup.sh succeeds, the test should be able to pass.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I37c4fd319ad7002017f9baf9fdcf3890429aac75
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17086
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
2023-04-14 17:04:37 +00:00
Michal Berger
0945b976df test/check_so_deps: Adjust printout when $SPDK_ABI_DIR is not set
There's no default defined so it doesn't have to be available in the
env at all. Adjust the echo so we don't include an empty string.

Change-Id: Icaa75915544f9da1adcdcdeafce29f5ae97149ab
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17428
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-14 17:03:42 +00:00
Michal Berger
8ad609857f autopackage: Move packaging test to autobuild
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: Ifbe4d98f3d7a4b9970f923acd6d299d9cc02d350
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17206
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-14 17:03:42 +00:00
Michal Berger
095f40630e test/packaging: Move tests out of nightly
Packaging tests will be done under a separate docker job, hence there
will be plenty of time to run them together. Keep DPDK-related builds
in nightly as they are quite sensitive to any changes (especially API
related), hence not very fit for per-patch testing.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: Ia1af5b0e86a503f540c32d2e030088d8a24f8847
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16046
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-14 17:03:42 +00:00
Michal Berger
6a4b678402 test/autobuild: Source $spdk_conf before autotest_common.sh
This is to make sure we export all SPDK_* with proper values.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I2f01af1a051edcec6a75f99b25b765080abf2a5d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17212
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-14 17:03:42 +00:00
Michal Berger
535543d4f1 lib/env_dpdk: Make sure linker finds $DPDK_LIB_DIR
In case SPDK is built with shared libraries and there's no
LD_LIBRARY_PATH around, linker will complain about missing .sos
similar to:

/usr/bin/ld.bfd: warning: librte_meter.so.23, needed by
/root/spdk/dpdk/build/lib/librte_ethdev.so, not found (try using -rpath
or -rpath-link)

We can't see that under CI since autotest_common.sh always makes sure
the LD_LIBRARY_PATH is properly set.

Add the -rpath to make the build less spammy.

Change-Id: I1d9d1775b2aa24e65cc4b776c2549457b0d7aac3
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17492
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-13 21:41:09 +00:00
Michal Berger
54a78f41d2 mk/spdk.common: Use -flto=auto for the LTO builds
This tells lto-wrapper to either use make's jobserver or fallback to
auto guessing number of cpu threads used for the build. Mainly, this
should silence the following warning:

lto-wrapper: warning: using serial compilation of N LTRANS jobs
lto-wrapper: note: see the ‘-flto’ option documentation for more
information

Change-Id: Ib848319c858f4371b94f9264d22449535d25d6da
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17491
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-13 21:41:09 +00:00
Michal Berger
8ad5671fbb test/common: Silence vhost/common.sh during cleanup
It's too verbose and may print messages that are confusing in the
context of the actual cleanup.

Change-Id: I9e86e20afcf567fb54fec3a6cfb9008ad2080a12
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17485
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-13 21:39:59 +00:00
Changpeng Liu
544e2a273b lib/vhost: register VQ interrupt handler when enable VQ
In commit 23baa67, we will start virtio device only once,
and update the VQ's information in SET_VRING_KICK message
context, so when multi-queues are enabled, SPDK doesn't
register VQ's interrupt handler, here we add it when enable
VQ.

Fix issue #2940.

Change-Id: I29dbd7bf0b81b23c2e47e37c467952cc5887b5bf
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17354
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-13 21:39:33 +00:00
Jim Harris
89188e84f1 bdev: assert that internal status is PENDING for completed IO
bdev modules should never call spdk_bdev_io_complete twice
for the same IO.  We can help find cases where this happens
by adding an assert in spdk_bdev_io_complete - confirming
that the current status is still PENDING, before changing
it to the status passed by the caller.
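A sketch of the new check (illustrative types, not the actual bdev structures) - the assert trips if the same IO is completed a second time:

```c
#include <assert.h>

enum io_status { IO_PENDING = 0, IO_SUCCESS = 1, IO_FAILED = 2 };

/* Completion must find the IO still PENDING before applying the
 * caller's status; a double completion would fail the assert. */
static void
io_complete_sketch(enum io_status *status, enum io_status new_status)
{
	assert(*status == IO_PENDING);	/* catches double completion */
	*status = new_status;
}
```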

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Id8a044a94113f1ac5e3c8d86e426654bfa8d5c5a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17330
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-13 21:38:10 +00:00
Jim Harris
42567ba294 bdev: reset status immediately to PENDING for nomem_io queue
Reset the status for a bdev_io that fails with NOMEM status
back to PENDING immediately when it is put on the nomem_io
list, instead of waiting until it gets submitted again.

This helps keep the bdev_io states consistent, so that if
we need to complete these IO for abort reasons later, we
aren't completing IO that already have a non-PENDING
state.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I9532095141209ed6f7af362b52c689da62e755ce
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17335
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
2023-04-13 21:38:10 +00:00
Alexey Marchuk
79e1c3f298 test/nvmf: Add more Nvidia NIC IDs
Even though these NICs are not used by the Community CI,
all tests fail when run on a system with a
CX6 Dx, CX7, BF2, or BF3.

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I23aaf8ddbc5b165f0a4372108d1f4b34f0b2ccf7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17166
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-13 21:35:30 +00:00
Alexey Marchuk
0de1c21570 lib/nvmf: Deprecate cb_fn in spdk_nvmf_qpair_disconnect
Handling this callback is quite complex and may lead to
various problems. In most places, the actual moment when a
qpair is disconnected is not important for the
app logic. Only in the shutdown path do we need to be sure
that all qpairs are disconnected, and that can be achieved
by checking the poll_group::qpairs list.

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I453961299f67342c1193dc622685aefb46bfceb6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17165
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-13 21:35:30 +00:00
Alexey Marchuk
d478b20ddf lib/nvmf: Update spdk_nvmf_qpair_disconnect return value
If the qpair is already in the process of disconnecting,
the spdk_nvmf_qpair_disconnect API now returns -EINPROGRESS
and doesn't call the callback passed by the user.
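The new contract can be sketched like this (an illustrative Python model only; the real API is C and takes more parameters):

```python
import errno

class Qpair:
    def __init__(self):
        self.disconnect_started = False

def qpair_disconnect(qpair):
    # If a disconnect is already in progress, report -EINPROGRESS
    # and never invoke a user callback (avoiding recursion).
    if qpair.disconnect_started:
        return -errno.EINPROGRESS
    qpair.disconnect_started = True
    return 0

q = Qpair()
first = qpair_disconnect(q)    # starts the disconnect, returns 0
second = qpair_disconnect(q)   # already in progress, -EINPROGRESS
```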

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: If996b0496bf15729654d18771756b736e41812ae
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17164
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-13 21:35:30 +00:00
Alexey Marchuk
0496a2af3b lib/nvmf: Do not use cb_fn in spdk_nvmf_qpair_disconnect
The current implementation of spdk_nvmf_qpair_disconnect
saves and calls the user's callback correctly only on
the first call. If this function is called when the
qpair is already in the process of disconnecting, the
cb_fn is called immediately, which may lead to stack
overflow.

In most places this function is called with
cb_fn = NULL, which means that the moment of the real qpair
disconnect is not important for the app logic. Only in several
places (the nvmf tgt shutdown flow) is it important to
wait for all qpairs to be disconnected.

Taking into account the complexity related to possible stack
overflow, do not pass the cb_fn to spdk_nvmf_qpair_disconnect.
Instead, wait until the list of qpairs is empty in the shutdown path.

Next patches will change spdk_nvmf_qpair_disconnect behaviour
when disconnect is in progress and deprecate cb_fn and ctx
parameters.

Fixes issue #2765

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: Ie8d49c88cc009b774b45adab3e37c4dde4395549
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17163
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-13 21:35:30 +00:00
Alexey Marchuk
bcd0ea8c1c nvmf/vfio_user: Post SQ delete cpl when qpair is destroyed
This patch removes the usage of the cb_fn argument of the
spdk_nvmf_qpair_disconnect API. Instead of relying
on the callback, post a completion for the delete SQ
command when the transport's qpair_fini is called.

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I68dec97ea94e89f48a8667da82f88b5e24fc0d88
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17168
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-13 21:35:30 +00:00
Michal Berger
6693862f9e autotest: Consider processes from deleted workspaces during cleanup
A common practice is to purge the workspace on the Jenkins side when
the job is done. When that happens, stuck processes may still linger,
but readlink -f will fail to resolve the exe link as the target binary
won't exist anymore. Instead, just see what the link points at
and include it in the list.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I437a720e12e43e33fbf04345a6b77987167864fe
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17050
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-13 21:27:08 +00:00
Alexey Marchuk
6b7cca1542 accel/dpdk_cryptodev: Handle OP_STATUS_SUCCESS
A SW PMD might process a crypto operation but fail
to submit it to the completions ring.
Such an operation can't be retried if the crypto
operation is in-place, so handle it as completed.
Verified by integrating the rte openssl driver and
adding additional logs to check that the SUCCESS
status is received and completed as expected.

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: Ida161cec045167af752ebd5b57f41b2bbfe8b97c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16995
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-13 21:26:20 +00:00
Sebastian Brzezinka
56eced4280 fuzz/llvm: move coverage data to llvm/coverage
There is no access to the fuzzer logs if `index.html` is in the same
dir, so move coverage to `$output_dir/llvm/coverage`.

Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: I139a6d780754aaf5b1333a2e5b0183bd24488bfa
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16341
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-13 21:23:07 +00:00
Sebastian Brzezinka
7cc7d52830 fuzz/llvm: provide a prefix to use when saving artifacts
Save crash files and other artifacts in `$output_dir/llvm/`

Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: I2ff82b414592cc492b79c9178b7257b2e87440b5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15827
Reviewed-by: Michal Berger <michal.berger@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-13 21:23:07 +00:00
Sebastian Brzezinka
1fa3b4f72d llvm_nvme_fuzz: enable running llvm nvmf test in parallel
Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: Iad129c1bc62116a93701a5f68c78351f01a4c878
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16249
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-13 21:23:07 +00:00
Sebastian Brzezinka
4f7ab50650 llvm_vfio_fuzz: start fuzzer tests in parallel
With corpus files persisting between weekend fuzzer runs, it may be
better to run all tests for a fraction of the time instead of a
different test every week.

Remove `poll_groups_mask` from the config; this patch runs every test
on a single core, so there is no need to specify another mask.

Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: I4448724801bdf1a3c496f829fd168b840c2efa67
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15384
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
2023-04-13 21:23:07 +00:00
Jaylyn Ren
6b6101c1e7 spdk_top: fix the cpu usage display issue in thread tab
Fix the issue where the CPU usage in the thread tab shows empty when the CPUMASK does not start from zero.

Signed-off-by: Jaylyn Ren <jaylyn.ren@arm.com>
Change-Id: Ifd22feefd22a5dd0f87b20ff6c47bd196eb1a39a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17289
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-13 12:19:37 +00:00
Krzysztof Karas
ed1b4c926c bdev: delete UUID generation from ephemeral bdevs
Ensure no ephemeral bdev will generate its own UUID
unless the value has been specified via RPC.
Generation is now done by the bdev layer itself.

Change-Id: I11efe819a28a137b738959a96a7bdf8c79cfaf64
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17109
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-13 12:12:58 +00:00
Krzysztof Karas
1db41324f7 bdev/raid: add RPC option to provide UUID
Make sure UUID can be passed to raid bdev type during
its creation.

Change-Id: I5fa9ca2d18d435fa882e1cb388b2e1918d821540
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17136
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-13 12:12:58 +00:00
Krzysztof Karas
91ea8102b6 bdev/error: add option to provide UUID for error bdev
Make sure UUID can be passed to error bdev type during
its creation.

Change-Id: I80b9c1b938a464c0cc8c61f871ae2044d8e09dfd
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17107
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
2023-04-13 12:12:58 +00:00
Krzysztof Karas
11dc297c1b bdev: always generate UUIDs
Make sure a UUID is present for every bdev, even ephemeral ones.
Furthermore, this change removes the assumption that a bdev UUID
may remain empty.

Change-Id: I924c1ba9dedfe88a05044bb1073f28085735b1c1
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17106
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-13 12:12:58 +00:00
Sebastian Brzezinka
0cd5af7143 fuzz/llvm: add common.sh for llvm fuzzers
`common.sh` - add common functions to start fuzzers in
parallel and for a quick sequential run

add `get_testn` - get the number of tests to run in parallel

Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: I7c70b5221887c29b495a1632545877ca7cca0945
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16323
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-13 12:09:53 +00:00
Sebastian Brzezinka
c019eb4d67 llvm_vfio_fuzz: handle thread create failure
If `pthread_create` or `spdk_thread_create` fails, stop the
SPDK app with a `-1` error code.

Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: Id5d0f6716917f42e06fbda7db9285deb320e309a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16338
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-13 12:09:53 +00:00
Jim Harris
943806efab init: rewrite subsystem_sort
Commit aaba5d introduced a build warning with some
compilers. While fixing it, I realized the function was
difficult to immediately understand. So in addition to fixing
the build warning, I also made the following changes:

* Improved names for local variables
* Use TAILQ_INIT for local TAILQ instead of TAILQ_HEAD_INITIALIZER.
* Add comments explaining more clearly what the nested loops are
  doing.
* Use TAILQ_SWAP instead of a FOREACH + REMOVE + INSERT.

Fixes: aaba5d ("subsystem: Gather list changed conditions.")
Fixes issue #2978.
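The nested loops the comments describe implement a dependency-ordered sort: repeatedly move every subsystem whose dependencies are already in the sorted list. A minimal Python sketch of that idea (illustrative only; the real code uses TAILQs in C, and the subsystem names below are hypothetical):

```python
def subsystem_sort(subsystems, depends_on):
    """Move subsystems whose dependencies are all already sorted,
    repeating until the unsorted list is empty (Kahn-style sort)."""
    unsorted_list = list(subsystems)
    sorted_list = []
    while unsorted_list:
        moved = False
        # iterate over a snapshot so we can remove while looping
        for sub in list(unsorted_list):
            deps = depends_on.get(sub, [])
            if all(dep in sorted_list for dep in deps):
                unsorted_list.remove(sub)
                sorted_list.append(sub)
                moved = True
        if not moved:
            raise RuntimeError("dependency cycle among subsystems")
    return sorted_list

deps = {"bdev": ["accel"], "nvmf": ["bdev"]}
order = subsystem_sort(["nvmf", "bdev", "accel"], deps)
```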

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic8740b5706537938d62a0acfac62625b2424b85f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17496
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-13 04:58:05 +00:00
Jacek Kalwas
de2609a241 env_dpdk: put rte_rcu on DPDK_LIB_LIST unconditionally
rte_rcu is available on all versions of DPDK supported by SPDK. It is
also required by quite a few DPDK libraries. So just include
it always; it's a small library, so let's not over-complicate
things by trying to figure out exactly when it's needed.

This change fixes linking issue when crypto enabled (and vhost not).

Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: Ibdd6acb5a25c401b462022bbd94bd380690640d0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17514
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-12 16:54:05 +00:00
Jacek Kalwas
13a2c5589c bdev: fix return value of bdev_io_get_max_buf_len
The fixed function is used to determine whether it is possible to get
an iobuf from the pool. To make sure that the buffer size alignment
requirement is satisfied, the returned value must include the
alignment value minus one.

e.g.
transaction size length = 64k
buffer alignment = 1 byte (no alignment requirement)
metadata length = 0

Without the fix the function returned 64k + 1; now it returns 64k,
which is the correct behavior and allows further command
processing to proceed (if the max buffer size limit is set to 64k).
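The worked example above corresponds to this arithmetic (a toy Python model of the fixed formula, not SPDK's actual C code):

```python
def bdev_io_get_max_buf_len(length, alignment, md_len):
    # Worst-case buffer size: payload + metadata, plus up to
    # (alignment - 1) extra bytes to reach an aligned start address.
    # The pre-fix code effectively added `alignment` instead of
    # `alignment - 1`, overshooting by one byte.
    return length + md_len + (alignment - 1)

# 64k transfer, no alignment requirement (alignment == 1), no metadata:
no_align = bdev_io_get_max_buf_len(64 * 1024, 1, 0)

# 64k transfer with a 512-byte alignment requirement:
with_align = bdev_io_get_max_buf_len(64 * 1024, 512, 0)
```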

Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: I09104ad21b3652ba1aa5c3805a04b1c6549d04ac
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17513
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-12 16:54:05 +00:00
Karol Latecki
9754119ac9 test/vfio-user: reduce spdk_tgt memory allocation
Limit spdk_tgt app to 512MB of memory. This should
be sufficient for tests in this suite provided we
also reduce the size of created malloc bdevs.

Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Change-Id: Iaaba1e13899d37232f7acf842b7deed05935f78f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17365
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jaroslaw Chachulski <jaroslawx.chachulski@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-12 16:48:13 +00:00
Karol Latecki
22fa84f77a test/vhost: increase memory in virtio tests
Increase the memory for SPDK processes using the
"-s" option. When built with additional options
(like --with-ocf), processes have higher memory
requirements.

See:
https://review.spdk.io/gerrit/c/spdk/spdk/+/17265
https://github.com/spdk/spdk/issues/2951
for details.

Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Change-Id: Ia4fc37787861e2aef28392eaddf389f27bdf7200
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17371
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-12 16:48:13 +00:00
Krzysztof Karas
7d5f0ade61 spdk_top: move core_busy_period and core_idle_period
Move these two variables below check for core_num boundary.
This ensures core_num's value can be used as index for g_cores_info
array.

Change-Id: I118a4b3a3ec61c9ccd818f3f3bd2ff013d2d7b14
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17175
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-12 16:45:03 +00:00
Krzysztof Karas
6d7b690584 bdevperf: avoid writing outside "out" array boundary
Currently the variables "i" and "k" in the config_filename_next()
function may increase at the same rate. When iterating in the
"for" loop at line 1862, both "i" and "k" are incremented:
 + i by the for loop,
 + k by the "out[k++]" instruction.
This means there may be a case where the for loop ends on the
"i < BDEVPERF_CONFIG_MAX_FILENAME" condition, with the value of "i"
equal to BDEVPERF_CONFIG_MAX_FILENAME, and at the same time the
value of "k" also equal to BDEVPERF_CONFIG_MAX_FILENAME,
because after writing to the out[BDEVPERF_CONFIG_MAX_FILENAME - 1]
element we increment it one last time.
This results in writing a "0" value at line 1873 to memory outside
the "out" array boundary.
To fix this problem, compare k against
BDEVPERF_CONFIG_MAX_FILENAME instead of i.
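The essence of the fix is bounding the loop on the write index rather than the read index, so the terminator always lands inside the array. A simplified Python sketch (illustrative only; the real function is C and parses fio-style filename lists):

```python
MAX = 8  # stand-in for BDEVPERF_CONFIG_MAX_FILENAME

def config_filename_next(config):
    """Copy one comma-separated token of `config` into `out`,
    bounding the write index k (the fix), not the read index i."""
    out = []
    i = 0
    k = 0
    # fixed condition: k < MAX - 1 leaves room for the terminator
    while i < len(config) and config[i] != ',' and k < MAX - 1:
        out.append(config[i])
        k += 1
        i += 1
    out.append('\0')  # terminator now always fits inside out[MAX]
    return ''.join(out)

token = config_filename_next("abcdefghijk")
```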

Change-Id: Ia45778c1f267d2b9dcd676cd9b6c662d09f6f94e
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17176
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-12 16:44:42 +00:00
Michal Berger
982a1bb7ed test/vhost: Make sure $disk_map exists
Also, simplify the way it's read. As a benefit, this gets rid of
xargs complaining about a NUL being sent to its input, which
was quite verbose.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: Iaeb09298c2255404273bb3fc6910bc6b93c2d7eb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16892
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-12 16:44:21 +00:00
Michal Berger
47c2ee6435 test/vhost: Remove cpuset limit enforced upon vhost cpus
This limit didn't do much in the first place. It was creating a
separate cgroup with mem nodes already set to nodes 0-1; in practice
these are all the NUMA nodes available for memory allocation by
default. Regarding cpus, vhost is already limited by its own cpumask's
affinity, hence there's no need to enforce this limit via a dedicated
cgroup. Lastly, this was not taking into consideration the fact that
other processes may still be scheduled on the vhost cpus, as the
cgroups they belong to were not modified in any way (as in the case
of the scheduler tests, for instance). That said, the amount of jitter
coming from these 3rd-party processes would not have much bearing on
vhost anyway; the only processes that could be noisier are QEMU's, but
each VM instance is already put into a separate cgroup (see
test/vhost/common.sh).

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I1de24bfc9e24f8f6391207e579cc599ea5c82094
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16890
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-12 16:44:21 +00:00
Michal Berger
98d98ecb57 test/vhost: Switch from msdos to gpt
Disks used under the vhost benchmarks can be > 2TB, so the msdos
partition table is not very suitable here. Use something more robust
like gpt.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I3e98bcb655c2f55a515f4000b0668b26d71c8fca
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16889
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
2023-04-12 16:44:21 +00:00
Michal Berger
905c4dcf6f test/vhost: Make sure all TCP ports allocated for QEMU are available
This may become problematic with a bigger number of VMs. In
particular, it was noticed that the vnc port may overlap with ssh's
X forwarding ports (starting at 6010). To make sure QEMU does not
fail while attempting to bind to an already taken port, we first
check whether the target port is in use.
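The availability check can be sketched like this (an illustrative Python model; the test scripts themselves are shell, and the port range below is hypothetical):

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    """A port is considered taken if binding to it fails."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True

def next_free_port(start):
    """Walk upward from the preferred port until a free one is found,
    instead of letting QEMU fail on a bind conflict."""
    port = start
    while port_in_use(port):
        port += 1
    return port

port = next_free_port(49152)
```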

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I525aa2a1cc52c6aa1d8d4ade8924ad684fe8af50
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16337
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-12 16:44:21 +00:00
Michal Berger
ce6550a935 test/vhost: Add VM's id to fio config's description
Since we are sending the fio configuration to potentially dozens of
VMs, a proper description allows identifying the final results on a
per-VM basis - this is helpful during debugging.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: Ifc38d9cb60879f8b7f6e178f23e3f451a73765f0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15895
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-12 16:44:21 +00:00
Michal Berger
c1d2bdfeb3 test/vhost: Add perf collection support
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I059658c477be4122e7b04f33a796f732746b7c90
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15603
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-12 16:44:21 +00:00
Michal Berger
7c764edf85 test/vhost: Gather IRQ stats from the VM
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I4351d812b9b9da127b6daf46b0f44ce237e33ee9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15460
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-12 16:44:21 +00:00
Michal Berger
f8a085a2d5 test/vhost: Add helper functions for extracting IRQ data
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: If75aeca0c44667ef02b72f2e4a9141da4057d291
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15459
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Kamil Godzwon <kamilx.godzwon@intel.com>
2023-04-12 16:44:21 +00:00
Ben Walker
78df9be449 nvmf/tcp: Wait for PDUs to release when closing a qpair
In the presence of hardware offload (for data digest) we may not be
able to immediately release all PDUs to free a connection. Add a
state to wait for them to finish.

Fixes #2862

Change-Id: I5ecbdad394c0296af6f5c2310d7867dd9de154cb
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16637
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-10 17:58:48 +00:00
Marcin Spiewak
8247bd4041 configure: added --max-lcores option to ./configure
This patch adds support for the --max-lcores configuration
option in the ./configure script. This option can be
used to change the value of DPDK's RTE_MAX_LCORE
(which is by default set to 128 for the x86 architecture).
If specified, DPDK will be configured to use
the value provided by the user instead of
the default one. The option can be useful
in systems where the number of physical CPUs is
larger than 128.
When RTE_MAX_LCORE is increased, it is possible
to specify cores with identifiers larger than
128 in SPDK's CPU mask.
If the option is not specified, DPDK will use
the default value of RTE_MAX_LCORE.
The --max-lcores range is [1..1024].
Example usage:
./configure --max-lcores=256
./configure --max-lcores=16

Change-Id: I47d321ba394c9acf27eaa91619aeaad28db6de34
Signed-off-by: Marcin Spiewak <marcin.spiewak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17453
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
2023-04-10 17:58:20 +00:00
Marcin Spiewak
b3a5763436 app: use --lcores to map ids greater than 128
Fixes #2812

This patch adds support for the '--lcores <map_list>'
parameter in SPDK.
This parameter allows mapping of lcores
to CPU IDs if the system contains CPUs with IDs
greater than or equal to 128 (RTE_MAX_LCORE). Such CPUs
cannot be directly included in the core mask
specified in the '-m <mask>' parameter, as DPDK
rejects cores with IDs greater than 127.
The only way to use them in SPDK is to map an lcore
to a CPU using the --lcores parameter specified
on the command line.
The --lcores and -m parameters are mutually
exclusive; please use only one of them.
Examples:
build/bin/nvmf_tgt --lcores 0@130
build/bin/nvmf_tgt --lcores 0@150,1@151
build/bin/nvmf_tgt --lcores "(5-7)@(10-12)"
build/bin/nvmf_tgt --lcores "(5-7)@(136,138,140)"

Change-Id: Ia92be4499c8daaa936b1a4357d52ae303d6f3eb1
Signed-off-by: Marcin Spiewak <marcin.spiewak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17403
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
2023-04-10 17:58:20 +00:00
Marcin Spiewak
cc54140080 env: added support for lcore map
This patch adds support for an lcore mapping list, which
is needed by SPDK if someone wants to use CPUs with IDs
greater than or equal to RTE_MAX_LCORE (128). For such CPUs
it is impossible to include them in the core mask (passed
to DPDK as '-c <mask>'), as DPDK doesn't allow
IDs greater than or equal to RTE_MAX_LCORE. Therefore they
must be mapped to lower lcore values using
'--lcores <mapping_list>' passed to DPDK.

Change-Id: If68f15cef2bca9e42a3457bf35477793b58ec53d
Signed-off-by: Marcin Spiewak <marcin.spiewak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17399
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-10 17:58:20 +00:00
Jim Harris
b1a2319686 nvmf: retry QID check if duplicate detected
A host will consider a QID as reusable once it disconnects
from the target.  But our target does not immediately
free the QID's bit from the ctrlr->qpair_mask - it waits
until after a message is sent to the ctrlr's thread.

So this opens up a small window where the host makes
a valid connection with a recently free QID, but the
target rejects it.

When this happens, we will now start a 100us poller, and
recheck again.  This will give those messages time to
execute in this case, and avoid unnecessarily rejecting
the CONNECT command.

Tested with local patch that injects 10us delay before
clearing bit in qpair_mask, along with fused_ordering
test that allocates and frees qpair in quick succession.
Also tested with unit tests added in this patch.

Fixes issue #2955.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I850b895c29d86be9c5070a0e6126657e7a0578fe
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17362
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-07 17:47:13 +00:00
yidong0635
aaba5d9c9e subsystem: Gather list changed conditions.
Just remove the duplicated code and gather the
conditions for the g_subsystems and subsystems_list
lists together.

Signed-off-by: yidong0635 <dongx.yi@intel.com>
Change-Id: I011b550b83d32580bfd25130dab9e44bcbdc1daf
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13753
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-07 17:35:08 +00:00
Karol Latecki
4bf72c9921 doc: add NVMe-oF TCP CVL 23.01 performance report link
Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Change-Id: I9e6fe094bfb139a7030ed625e9dcd0e6320e4289
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17488
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: <sebastian.brzezinka@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-07 16:32:52 +00:00
Alexey Marchuk
9c636a02f9 accel/dpdk_cryptodev: Remove queued_cry_ops
If we were not able to submit all of the configured
crypto ops, then we can just release the crypto_ops
and mbuf objects of those crypto ops and save
the actual number of submitted operations in
the accel task. Once all submitted operations
complete, the poller will call the
accel_dpdk_cryptodev_process_task func to submit
crypto operations for the remaining data blocks.
If no crypto ops were submitted, the task
will be placed in the channel's queued_tasks
and the poller will try to resubmit the task.
This in theory should increase performance,
since we previously attempted to resubmit queued ops
with a burst size == 1, which is not efficient.

Fixes issue #2907

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I4d17e8ed1ad5383848e4d09c46009c6cb2834360
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16784
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-06 23:30:49 +00:00
Alexey Marchuk
8f4d98bb40 accel/dpdk_cryptodev: Fix sgl init with offset
When an accel task is processed in
several iterations (submit part of the cryops, wait
for completion, and submit the next part of the cryops),
the sgl is initialized with an offset to exclude previously
processed blocks. However, there is a bug: since
spdk_iov_sgl_init doesn't advance the iovs, when
we compute sgl->iov->iov_len - sgl->iov_offset,
we may get an unsigned int underflow.
The fix is to init the sgl with a 0 offset and then
advance it by the offset.
Modified the unit test and added an assert in the code to
verify this fix.

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: Ib53ff30f0c90d521f2cf6b3ec847b0d06869c2b5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17456
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-06 23:30:49 +00:00
Alexey Marchuk
a6545ae311 test/blockdev: Use regular RPC socket for mlx5 config
When the RPC server is used for configuration,
the rpc_cmd function waits 15 seconds to read all
replies. If the mlx5 DPDK driver is used on slow
machines or in a container, the RPC
framework_start_init may take more than 15
seconds to execute. As a result, rpc_cmd exits
earlier and the output of some commands remains
in the pipe. The next call of rpc_cmd may read
wrong data, which leads to a malformed JSON
config. To avoid this problem, redirect RPCs to
a regular RPC socket.

Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: Ibfcf56bb0a7f84f69394846d83746c91a4024b9e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16389
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-06 23:30:23 +00:00
Jim Harris
db6297b501 env_dpdk: omit huge-related options when --no-huge specified
If the user passes --no-huge as part of env_context, do
not add other huge-related options to the EAL command
line. Instead, emit an error message and return failure if
any of them were specified explicitly.

Fixes c833f6aa ("env_dpdk: unlink hugepages if shm_id is not specified")
Fixes issue #2973.

Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I7aa49e4af5f3c333fa1e7dec4e3f5b4b92e7d414
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17483
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-06 21:32:03 +00:00
Shuhei Matsumoto
c2e288a625 iscsi: Return if conn->sock is NULL when updating connection params
iSCSI connection closes its socket when it is terminated. After the
socket is closed, the connection cannot access to it. However, the iSCSI
fuzz test terminated a connection while processing a text command. The
connection aborted the text command and the corresponding completion
callback accessed the closed socket. This unexpected access caused a
NULL pointer access.

Add a check if conn->sock is not NULL to iscsi_conn_params_update()
to avoid such NULL pointer access. The return type of the most iSCSI
library functions are void. Here, it is enough not to return 0. Hence,
use -ENXIO simply to indicate there is no available socket.

Fixes issue #2958

Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I2c1f58a63ee0a40561a17f81d4b4264061f411f6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17353
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
2023-04-06 21:30:38 +00:00
Mike Gerdts
712ab983df blob: set rc to -EINVAL when esnap len too long
When bs_create_blob() is creating the internal xattr for the esnap ID,
it errors out if the ID is too long. This error path neglected to set
the return value. It now returns -EINVAL in this case.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I6d756da47f41fb554cd6782add63378e81735118
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17292
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-06 21:30:03 +00:00
Mike Gerdts
6a55d0dbfa blob_bdev: fix doc for spdk_bs_bdev_claim
The documentation for spdk_bs_bdev_claim() errantly referred to
spdk_bdev_create_bs_dev_ro() when it should refer to
spdk_bdev_create_bs_dev(). This has been corrected.

Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I1b19bedb93aa553e6cc319ebba64e62f2b80d2c1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17291
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-06 21:30:03 +00:00
Michal Berger
53c5691fdf test/common: Merge pkgdep/dnf into pkgdep/yum
There's no point in keeping these separate, as dnf-aware distros
also support yum and there are no plans to drop it anytime soon.
In fact, since the actual list of packages between dnf and yum
was different, centos7 was not provisioned to the full extent.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: Ieec6796bf457d37b2618a1c2756d281f4af0c5b4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16931
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-06 21:27:49 +00:00
Michal Berger
64aa7a5c16 test/common: Rename vm_setup.sh to autotest_setup.sh
The new name puts more emphasis on the main purpose of the
script, as it does not really touch anything VM-related.

The vm_setup.sh is preserved as a symlink available for a transition
period.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I968a52cc069706f4c5e1b8a871988809e701a3fe
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16928
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-06 21:27:49 +00:00
Michal Berger
e951b7689e test/common: Simplify README
Most of the information gathered there is outdated and generic and
falls out of scope of what vm_setup.sh/pkgdep is actually doing.
Simply mention the main purpose of the script, leaving actual
configuration to the user.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I126515dea019e7f1cd76c8be1339aea080d2a2b0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16927
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-06 21:27:49 +00:00
Michal Berger
ed1571eece pkgdep: Remove unsupported pieces
We no longer support Ubuntu's Xenial and Bionic flavors, so they
can be removed.

swupd, Clear Linux's package manager, is also no longer supported.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I526a89f4d3b3078949f235e46f8bb3a39b2a24b6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16926
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-06 21:27:49 +00:00
Michal Berger
5b03ae4ec0 test/common: vm_setup.sh cleanup
Some minor code shuffling, plus removal of the autorun-spdk.conf
creation. Creating this config makes little sense, as some of
these flags cannot be used together anyway - it basically serves
as a dump of all supported flags, which we usually have a hard
time keeping up to date. That said, autotest_common.sh (and
get_config_params()) gives a better view of what flags are
actually supported and how they are used in practice.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: Ib223ec90be58e68ecab69176d213c353df530498
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16925
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-06 21:27:49 +00:00
Tomasz Zawadzki
66e0ed8e8d deprecation: add the md to documentation Makefile
Originally deprecation.md was pulled verbatim to
the built documentation. This resulted in very
weird paths on the spdk.io:
https://spdk.io/doc/md__home_sys_sgsw_oss_spdk_github_io_spdk_deprecation.html#deprecation

Use the way that changelog does it, by copying
the file and adding appropriate section links.

Now only the Doxygen version will contain the
section links. Meanwhile deprecation.md in
project root will not. This improves readability.

Change-Id: Ic5c1caf7603b847b3c7445bde76e277ba1ccb740
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16574
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-06 21:16:39 +00:00
Swapnil Ingle
94e395631e nvmf/vfio_user: move cq_is_full() closer to caller
Move cq_is_full() closer to its caller post_completion(), along
with fixing its comments.

Signed-off-by: Swapnil Ingle <swapnil.ingle@nutanix.com>
Change-Id: I93262d1805f0f9f075c6946ed97cd3006ffba130
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16415
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-06 21:14:38 +00:00
Tim Zhang
edfd7ff2d0 bdev_nvme: add hdgst and ddgst in nvme_ctrlr_config_json
This adds output when executing the save_config function.

Signed-off-by: Tim Zhang <hgmz371@gmail.com>
Change-Id: Ib465dc424beb691e86425878588bb732574fc9b4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16097
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-06 21:08:38 +00:00
KanKuo
b17e84e51e UT/vhost/vhost.c:add the test of spdk_blk_construct
Signed-off-by: KanKuo <kuox.kan@intel.com>
Change-Id: Ib5b132020845c3f3e961b65590c100ad4f1567c3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15873
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-06 21:05:52 +00:00
KanKuo
f46d3a2476 UT/bdev/bdev.c:add bdev_compare test
Signed-off-by: Kuo Kan <kuox.kan@intel.com>
Change-Id: Ib3d33cefc78f543e157ea552ee88f0514e305054
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15795
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-06 21:05:30 +00:00
Michal Berger
53f57b5dff test/nvmf: Reload irdma driver only when e810 test was requested
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I313d65b01c9214a6bde5775488fb32c70cefa4d6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15357
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-06 21:02:06 +00:00
Michal Berger
bb9cd3b467 test/make: Check for config.mk instead of spdk.common.mk
make actually depends on this file to perform cleanup and that's
the file that's actually created by configure.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: Ic16d4f6268241e5e3cd845a579cd4b7ff885bbb8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15355
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-06 20:59:49 +00:00
Michal Berger
099bdaf3b5 scripts/vagrant: Replace lsb_release with os-release check
lsb_release is not shipped with the latest Fedora distros, hence
this check fails. Use /etc/os-release instead.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: Id74095ff5dd5d43f7a97e4c5d026ac13da26d815
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15107
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Kamil Godzwon <kamilx.godzwon@intel.com>
2023-04-06 20:56:08 +00:00
Michal Berger
a66276d9b7 scripts/bash-completion: Adjustments for older Bash versions
Older versions of Bash don't handle -v option in array context very
well. Also, some of the compopt options are missing in older versions
so make sure stderr stays silent.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I81989940e8b25e2dbeed91f97fed5aa65e7df656
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14130
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
2023-04-06 20:54:43 +00:00
Michal Berger
75bfce75c2 scripts/bash-completion: Extract all rpc methods
Currently we extract these methods from rpc.py's --help or from
rpc_get_methods() in case there's a SPDK application running
in the background. This, however, results in a list missing some
basic methods that rpc_get_methods() simply doesn't include, e.g.
save_subsystem_config().

To make sure we always have a complete list, use both --help
and rpc_get_methods() together.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: Ie73917b74860cac13056bea9babc7f7b57e39b3a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14115
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-06 20:54:43 +00:00
Michal Berger
ae3ae309db module/scheduler: Silence warning about rte_power under clean target
This warning is returned regardless of whether the rte_power
libs were present or not, as the clean target always removes
them prior to this check.

Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I45bd350d434ec1fbb6504c7df05c4d27946d4f9b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13562
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-06 20:53:01 +00:00
Michal Berger
86ad46db55 autobuild: Put DPDK's kernel drivers at proper location
This is the location freebsd_update_contigmem_mod() looks up to
copy the modules into the right /boot directories.

Signed-off-by: Michal Berger <michallinuxstuff@gmail.com>
Change-Id: Ic5919cc6382433c641c4c7a8b1100a50abfc246a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12925
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-06 20:52:07 +00:00
Krishna Kanth Reddy
92141ccf21 bdev/uring: Unset write_cache
Unset the write_cache as the uring bdev does not support Flush I/O.

Signed-off-by: Krishna Kanth Reddy <krish.reddy@samsung.com>
Change-Id: I8e6fce26b12176a7c77c40a1c9102be5cb72e358
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12900
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-06 20:50:48 +00:00
yidong0635
ce3ffc233b blobstore: Add assert in blob_id_cmp.
From the issue report in #2507, the blob being compared may be
NULL. So add an assert, so that CI may catch this issue. Add
this to other functions also.

Signed-off-by: yidong0635 <dongx.yi@intel.com>
Change-Id: I98179ec76f2b6785b6921c37373204021c0669b6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12737
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2023-04-06 20:49:00 +00:00
Ben Walker
cb9e0db853 sock: Do aligned allocations for the pipes
Use posix_memalign to ensure aligned allocations. In reality,
we'd get 64-byte alignment even with calloc, but this makes
sure of it.

Change-Id: I6066e57c95b0f42cff439d452e4aed853189a523
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17508
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-06 20:20:19 +00:00
Ben Walker
30f52282f4 util/pipe: Simplify some null checks
Several null checks are not actually necessary.

Change-Id: I6827e3d4147ed0b9fb22b2148656cba87be5e18c
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17507
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
2023-04-06 20:20:19 +00:00
Ben Walker
4c0b2d1684 util/pipe: Fix documentation on spdk_pipe_create
The pipe can now be entirely filled

Change-Id: Ib3ec7057224c9239800c1f2877f0441d29c64374
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17506
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-06 20:20:19 +00:00
Ben Walker
4bb9dcdb7d test: Add test_iobuf.c to mock the iobuf library
Use it in all of the places that were previously hooking
spdk_mempool_get.

Change-Id: I311f75fb9601b4f987b106160eb0a0014d3327cd
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16329
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2023-04-06 20:16:49 +00:00
Ben Walker
9bc7d6b34f thread: Move get/put calls into .c file
This will make it much easier to mock this library for use in unit
tests.

Change-Id: I7dc835865f75f9e29e8b709a634d30053ada2055
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16296
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2023-04-06 20:16:49 +00:00
Ben Walker
a9bcb7f261 thread: Move iobuf code to a separate compilation unit.
This makes it much easier to mock this code in unit tests without having
to mock up the entire thread library.

Change-Id: Ic3d9cb826ae71af780a06f88669c37cef2c9a4ae
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16173
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-06 20:16:49 +00:00
Konrad Sztyber
297182a083 env_dpdk: add support for DPDK main branch
Now that DPDK v23.03.0 has been released, the version on the main branch
points to the next release, v23.07.0-rc0, so we need to adjust the
version check to enable testing against the main branch.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I37d165111c446612d573c19199e4ace6aa24d191
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17480
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
2023-04-06 20:14:27 +00:00
Konrad Sztyber
4282294b8a env_dpdk: add support for DPDK v23.03.0
Since there were no ABI changes in the interfaces used by SPDK, the
v22.11 functions are reused.

Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Iff73405eec197f7ed1752366b6b38c28710a73ec
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17479
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
2023-04-06 20:14:27 +00:00
69 changed files with 683 additions and 7110 deletions


@ -67,11 +67,6 @@ New API `spdk_bdev_part_construct_ext` is added and allows the bdev's UUID to be
the existing worker and namespace association logic to access every namespace from each worker.
This replicates behavior of bdevperf application when `-C` option is provided.
### util
New APIs `spdk_uuid_is_null` and `spdk_uuid_set_null` were added to check for
and set the NULL UUID value.
## v23.01
### accel


@ -536,11 +536,10 @@ Example commands
## RAID {#bdev_ug_raid}
RAID virtual bdev module provides functionality to combine any SPDK bdevs into
one RAID bdev. Currently SPDK supports only RAID 0. RAID metadata may be stored
on member disks if enabled when creating the RAID bdev, so user does not have to
recreate the RAID volume when restarting application. It is not enabled by
default for backward compatibility. User may specify member disks to create
RAID volume even if they do not exist yet - as the member disks are registered at
one RAID bdev. Currently SPDK supports only RAID 0. RAID functionality does not
store on-disk metadata on the member disks, so user must recreate the RAID
volume when restarting application. User may specify member disks to create RAID
volume even if they do not exist yet - as the member disks are registered at
a later time, the RAID module will claim them and will surface the RAID volume
after all of the member disks are available. It is allowed to use disks of
different sizes - the smallest disk size will be the amount of space used on


@ -495,10 +495,6 @@ Example response:
"bdev_lvol_delete_lvstore",
"bdev_lvol_rename_lvstore",
"bdev_lvol_create_lvstore",
"bdev_lvol_shallow_copy",
"bdev_lvol_set_xattr",
"bdev_lvol_get_xattr",
"bdev_lvol_get_fragmap",
"bdev_daos_delete",
"bdev_daos_create",
"bdev_daos_resize"
@ -5986,7 +5982,6 @@ Name | Optional | Type | Description
----------------------- | -------- | ----------- | -----------
name | Required | string | Bdev name
base_bdev_name | Required | string | Base bdev name
uuid | Optional | string | UUID of new bdev
#### Result
@ -10002,160 +9997,6 @@ Example response:
]
~~~
### bdev_lvol_shallow_copy {#rpc_bdev_lvol_shallow_copy}
Make a shallow copy of an lvol over a given bdev. Only clusters allocated to the lvol will be written on the bdev.
Must have:
* lvol read only
* lvol size smaller than bdev size
* lvstore block size a multiple of bdev block size
#### Parameters
Name | Optional | Type | Description
----------------------- | -------- | ----------- | -----------
src_lvol_name | Required | string | UUID or alias of lvol to create a copy from
dst_bdev_name | Required | string | Name of the bdev that acts as destination for the copy
### bdev_lvol_set_xattr {#rpc_bdev_lvol_set_xattr}
Set xattr for lvol bdev
#### Parameters
Name | Optional | Type | Description
----------------------- | -------- | ----------- | -----------
name | Required | string | UUID or alias of lvol
xattr_name | Required | string | Name of the xattr
xattr_value | Required | string | Value of the xattr
### bdev_lvol_get_xattr {#rpc_bdev_lvol_get_xattr}
Get xattr for lvol bdev
#### Parameters
Name | Optional | Type | Description
----------------------- | -------- | ----------- | -----------
name | Required | string | UUID or alias of lvol
xattr_name | Required | string | Name of the xattr
#### Example
Example request:
~~~json
{
"jsonrpc": "2.0",
"method": "bdev_lvol_shallow_copy",
"id": 1,
"params": {
"src_lvol_name": "8a47421a-20cf-444f-845c-d97ad0b0bd8e",
"dst_bdev_name": "Nvme1n1"
}
}
~~~
Example response:
~~~json
{
"jsonrpc": "2.0",
"id": 1,
"result": true
}
~~~
### bdev_lvol_shallow_copy_status {#rpc_bdev_lvol_shallow_copy_status}
Get shallow copy status
#### Result
This RPC reports the state of a shallow copy operation, a description in case of error, and
the operation's progress in the format _number_of_copied_clusters/total_clusters_to_copy_.
#### Parameters
Name | Optional | Type | Description
----------------------- | -------- | ----------- | -----------
src_lvol_name | Required | string | UUID or alias of source lvol
#### Example
Example request:
~~~json
{
"jsonrpc": "2.0",
"method": "bdev_lvol_shallow_copy_status",
"id": 1,
"params": {
"src_lvol_name": "8a47421a-20cf-444f-845c-d97ad0b0bd8e"
}
}
~~~
Example response:
~~~json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"state": "in progress",
"progress": "2/4"
}
}
~~~
### bdev_lvol_get_fragmap {#bdev_lvol_get_fragmap}
Get a fragmap for a specific segment of a logical volume using the provided offset and size.
A fragmap is a bitmap that records the allocation status of clusters. A value of "1" indicates
that a cluster is allocated, whereas "0" signifies that a cluster is unallocated.
#### Parameters
Name | Optional | Type | Description
----------------------- | -------- | ----------- | -----------
name | Required | string | UUID or alias of the logical volume
offset | Optional | number | Offset in bytes of the specific segment of the logical volume (Default: 0)
size | Optional | number | Size in bytes of the specific segment of the logical volume (Default: 0 for representing the entire file)
#### Example
Example request:
~~~json
{
"jsonrpc": "2.0",
"method": "bdev_lvol_get_fragmap",
"id": 1,
"params": {
"name": "8a47421a-20cf-444f-845c-d97ad0b0bd8e",
"offset": 0,
"size": 41943040
}
}
~~~
Example response:
~~~json
{
"jsonrpc": "2.0",
"id": 1,
"result": {
"cluster_size": 4194304,
"num_clusters": 10,
"num_allocated_clusters": 0,
"fragmap": "AAA="
}
}
~~~
## RAID
### bdev_raid_get_bdevs {#rpc_bdev_raid_get_bdevs}
@ -10197,54 +10038,26 @@ Example response:
"result": [
{
"name": "RaidBdev0",
"uuid": "a0bf80ba-96c1-4a81-a008-ad2d1b4b814c",
"strip_size_kb": 128,
"state": "online",
"raid_level": "raid0",
"num_base_bdevs": 2,
"num_base_bdevs_discovered": 2,
"num_base_bdevs_operational": 2,
"base_bdevs_list": [
{
"name": "malloc0",
"uuid": "d2788884-5b3e-4fd7-87ff-6c78177e14ab",
"is_configured": true,
"data_offset": 256,
"data_size": 261888
},
{
"name": "malloc1",
"uuid": "a81bb1f8-5865-488a-8758-10152017e7d1",
"is_configured": true,
"data_offset": 256,
"data_size": 261888
}
"malloc0",
"malloc1"
]
},
{
"name": "RaidBdev1",
"uuid": "f7cb71ed-2d0e-4240-979e-27b0b7735f36",
"strip_size_kb": 128,
"state": "configuring",
"raid_level": "raid0",
"num_base_bdevs": 2,
"num_base_bdevs_discovered": 1,
"num_base_bdevs_operational": 2,
"base_bdevs_list": [
{
"name": "malloc2",
"uuid": "f60c20e1-3439-4f89-ae55-965a70333f86",
"is_configured": true,
"data_offset": 256,
"data_size": 261888
}
{
"name": "malloc3",
"uuid": "00000000-0000-0000-0000-000000000000",
"is_configured": false,
"data_offset": 0,
"data_size": 0
}
"malloc2",
null
]
}
]
@ -10332,78 +10145,6 @@ Example response:
}
~~~
### bdev_raid_remove_base_bdev {#rpc_bdev_raid_remove_base_bdev}
Remove base bdev from existing raid bdev.
#### Parameters
Name | Optional | Type | Description
----------------------- | -------- | ----------- | -----------
name | Required | string | Base bdev name in RAID
#### Example
Example request:
~~~json
{
"jsonrpc": "2.0",
"method": "bdev_raid_remove_base_bdev",
"id": 1,
"params": {
"name": "Raid0"
}
}
~~~
Example response:
~~~json
{
"jsonrpc": "2.0",
"id": 1,
"result": true
}
~~~
### bdev_raid_grow_base_bdev {#rpc_bdev_raid_grow_base_bdev}
Add a base bdev to a raid bdev, growing the raid's size if needed
#### Parameters
Name | Optional | Type | Description
----------------------- | -------- | ----------- | -----------
raid_name | Required | string | Raid bdev name
base_name | Required | string | Base bdev name
#### Example
Example request:
~~~json
{
"jsonrpc": "2.0",
"method": "bdev_raid_grow_base_bdev",
"id": 1,
"params": {
"raid_name": "Raid1",
"base_name": "Nvme1n1",
}
}
~~~
Example response:
~~~json
{
"jsonrpc": "2.0",
"id": 1,
"result": true
}
~~~
## SPLIT
### bdev_split_create {#rpc_bdev_split_create}


@ -197,12 +197,4 @@ bdev_lvol_decouple_parent [-h] name
Decouple parent of a logical volume
optional arguments:
-h, --help show help
bdev_lvol_set_xattr [-h] name xattr_name xattr_value
Set xattr for lvol bdev
optional arguments:
-h, --help show help
bdev_lvol_get_xattr [-h] name xattr_name
Get xattr for lvol bdev
optional arguments:
-h, --help show help
```


@ -497,14 +497,6 @@ const char *spdk_bdev_get_name(const struct spdk_bdev *bdev);
*/
const char *spdk_bdev_get_product_name(const struct spdk_bdev *bdev);
/**
* Get block device creation time.
*
* \param bdev Block device to query.
* \return Creation time of bdev as a null-terminated string, or NULL if not present.
*/
const char *spdk_bdev_get_creation_time(const struct spdk_bdev *bdev);
/**
* Get block device logical block size.
*


@ -513,13 +513,6 @@ struct spdk_bdev {
*/
struct spdk_uuid uuid;
/**
* Creation time for this bdev.
*
* If not provided, it will be NULL.
*/
const char *creation_time;
/** Size in bytes of a metadata for the backend */
uint32_t md_len;


@ -168,14 +168,6 @@ void spdk_bit_array_load_mask(struct spdk_bit_array *ba, const void *mask);
*/
void spdk_bit_array_clear_mask(struct spdk_bit_array *ba);
/**
* Encode a bit array into a base64 string.
*
* @param array Bit array to encode.
* @return base64 string.
*/
char *spdk_bit_array_to_base64_string(const struct spdk_bit_array *array);
#ifdef __cplusplus
}
#endif


@ -515,42 +515,6 @@ uint64_t spdk_blob_get_next_allocated_io_unit(struct spdk_blob *blob, uint64_t o
*/
uint64_t spdk_blob_get_next_unallocated_io_unit(struct spdk_blob *blob, uint64_t offset);
/**
* Get the number of copied clusters of a shallow copy operation
If a shallow copy of the blob is in progress or has ended, this function returns
* the number of copied clusters.
*
* \param blob Blob struct to query.
*
* \return number of copied clusters.
*/
uint64_t spdk_blob_get_shallow_copy_copied_clusters(struct spdk_blob *blob);
/**
* Get the total number of clusters to be copied in a shallow copy operation
If a shallow copy of the blob is in progress or has ended, this function returns
* the total number of clusters to be copied.
*
* \param blob Blob struct to query.
*
* \return total number of clusters.
*/
uint64_t spdk_blob_get_shallow_copy_total_clusters(struct spdk_blob *blob);
/**
* Get the result of last shallow copy operation
If a shallow copy of the blob is in progress or has ended, this function returns
* the result of the operation.
*
* \param blob Blob struct to query.
*
* \return 0 on success, negative errno on failure.
*/
int spdk_blob_get_shallow_copy_result(struct spdk_blob *blob);
struct spdk_blob_xattr_opts {
/* Number of attributes */
size_t count;
@ -797,26 +761,6 @@ void spdk_bs_inflate_blob(struct spdk_blob_store *bs, struct spdk_io_channel *ch
void spdk_bs_blob_decouple_parent(struct spdk_blob_store *bs, struct spdk_io_channel *channel,
spdk_blob_id blobid, spdk_blob_op_complete cb_fn, void *cb_arg);
/**
* Perform a shallow copy onto a device.
*
* This call makes a shallow copy of a blob onto an external blobstore block device.
* Only clusters allocated to the blob will be written to the device.
* The blob size must not exceed the device size.
* The blobstore block size must be a multiple of the device block size.
* \param bs Blobstore
* \param channel IO channel used to copy the blob.
* \param blobid The id of the blob.
* \param ext_dev The device to copy onto.
* \param cb_fn Called when the operation is complete.
* \param cb_arg Argument passed to function cb_fn.
*/
void spdk_bs_blob_shallow_copy(struct spdk_blob_store *bs, struct spdk_io_channel *channel,
spdk_blob_id blobid, struct spdk_bs_dev *ext_dev,
spdk_blob_op_complete cb_fn, void *cb_arg);
struct spdk_blob_open_opts {
enum blob_clear_method clear_method;

View File

@ -22,7 +22,6 @@ extern "C" {
struct spdk_bs_dev;
struct spdk_lvol_store;
struct spdk_lvol;
struct spdk_fragmap;
enum lvol_clear_method {
LVOL_CLEAR_WITH_DEFAULT = BLOB_CLEAR_WITH_DEFAULT,
@ -38,8 +37,8 @@ enum lvs_clear_method {
};
/* Must include null terminator. */
#define SPDK_LVS_NAME_MAX 256
#define SPDK_LVOL_NAME_MAX 256
#define SPDK_LVS_NAME_MAX 64
#define SPDK_LVOL_NAME_MAX 64
/**
* Parameters for lvolstore initialization.
@ -71,7 +70,7 @@ struct spdk_lvs_opts {
*/
spdk_bs_esnap_dev_create esnap_bs_dev_create;
} __attribute__((packed));
SPDK_STATIC_ASSERT(sizeof(struct spdk_lvs_opts) == 280, "Incorrect size");
SPDK_STATIC_ASSERT(sizeof(struct spdk_lvs_opts) == 88, "Incorrect size");
/**
* Initialize an spdk_lvs_opts structure to the defaults.
@ -117,16 +116,6 @@ typedef void (*spdk_lvol_op_with_handle_complete)(void *cb_arg, struct spdk_lvol
*/
typedef void (*spdk_lvol_op_complete)(void *cb_arg, int lvolerrno);
/**
* Callback definition for lvol operations that return a handle to a fragmap.
*
* @param cb_arg Custom arguments
* @param fragmap Handle to the fragmap, or NULL when lvolerrno is set
* @param lvolerrno Error code
*/
typedef void (*spdk_lvol_op_with_fragmap_handle_complete)(void *cb_arg,
struct spdk_fragmap *fragmap, int lvolerrno);
/**
* Callback definition for spdk_lvol_iter_clones.
*
@ -261,31 +250,6 @@ void
spdk_lvol_rename(struct spdk_lvol *lvol, const char *new_name,
spdk_lvol_op_complete cb_fn, void *cb_arg);
/**
* Set lvol's xattr.
*
* \param lvol Handle to lvol.
* \param name xattr name.
* \param value xattr value.
* \param cb_fn Completion callback.
* \param cb_arg Completion callback custom arguments.
*/
void
spdk_lvol_set_xattr(struct spdk_lvol *lvol, const char *name, const char *value,
spdk_lvol_op_complete cb_fn, void *cb_arg);
/**
* Get lvol's xattr.
*
* \param lvol Handle to lvol.
* \param name Xattr name.
* \param value Xattr value.
* \param value_len Xattr value length.
*
* \return 0 on success, negative errno on failure.
*/
int
spdk_lvol_get_xattr(struct spdk_lvol *lvol, const char *name,
const void **value, size_t *value_len);
/**
* \brief Returns whether it is possible to delete an lvol (i.e. the lvol is not a snapshot that has at least one clone).
* \param lvol Handle to lvol
@ -417,20 +381,6 @@ void spdk_lvol_decouple_parent(struct spdk_lvol *lvol, spdk_lvol_op_complete cb_
*/
bool spdk_lvol_is_degraded(const struct spdk_lvol *lvol);
/**
* Make a shallow copy of an lvol onto the given bs_dev.
*
* The lvol must be read-only and its size must not exceed the bs_dev size.
*
* \param lvol Handle to lvol
* \param ext_dev The bs_dev to copy on. This is created on the given bdev by using
* spdk_bdev_create_bs_dev_ext() beforehand
* \param cb_fn Completion callback
* \param cb_arg Completion callback custom arguments
*/
void spdk_lvol_shallow_copy(struct spdk_lvol *lvol, struct spdk_bs_dev *ext_dev,
spdk_lvol_op_complete cb_fn, void *cb_arg);
#ifdef __cplusplus
}
#endif

View File

@ -308,40 +308,6 @@ spdk_memset_s(void *data, size_t data_size, int ch, size_t count)
#endif
}
/**
* @brief Check if \b dividend is divisible by \b divisor
*
* @param dividend Dividend
* @param divisor Divisor which is a power of 2
* @return true if \b dividend is divisible by \b divisor, false otherwise
*/
static inline bool
spdk_is_divisible_by(uint64_t dividend, uint64_t divisor)
{
return (dividend & (divisor - 1)) == 0;
}
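The helper above uses the classic power-of-two mask trick rather than the modulo operator. A minimal standalone sketch of the same idea (hypothetical name, not part of the SPDK API):

```c
#include <stdbool.h>
#include <stdint.h>

/* Power-of-two divisibility via masking: for divisor == 2^k,
 * (divisor - 1) is a mask of the k low-order bits, and a value is a
 * multiple of 2^k exactly when those bits are all zero. Only valid
 * when divisor is a power of two, as the doc comment above requires. */
static inline bool
is_divisible_by_pow2(uint64_t dividend, uint64_t divisor)
{
	return (dividend & (divisor - 1)) == 0;
}
```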
/*
* Get the current UTC time as a string in RFC 3339 format.
*
* \param buf Buffer to store the UTC time string.
* \param buf_size Size of the buffer.
*/
static inline void
spdk_current_utc_time_rfc3339(char *buf, size_t buf_size)
{
struct tm *utc;
time_t rawtime;
time(&rawtime);
utc = gmtime(&rawtime);
strftime(buf, buf_size, "%Y-%m-%dT%H:%M:%SZ", utc);
}
#ifdef __cplusplus
}
#endif
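The removed helper above calls gmtime(), which returns a pointer to shared static storage. A reentrant sketch of the same formatting, using POSIX gmtime_r() (hypothetical name, assumptions noted in comments):

```c
#include <time.h>

/* Format a given UNIX timestamp as UTC in RFC 3339 form
 * "YYYY-MM-DDTHH:MM:SSZ": 20 characters plus a null terminator,
 * which is why the companion SPDK_CREATION_TIME_MAX constant is 21. */
static void
utc_time_rfc3339(time_t rawtime, char *buf, size_t buf_size)
{
	struct tm utc;

	gmtime_r(&rawtime, &utc);	/* reentrant, unlike gmtime() */
	strftime(buf, buf_size, "%Y-%m-%dT%H:%M:%SZ", &utc);
}
```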

View File

@ -86,22 +86,6 @@ int spdk_uuid_generate_sha1(struct spdk_uuid *uuid, struct spdk_uuid *ns_uuid, c
*/
void spdk_uuid_copy(struct spdk_uuid *dst, const struct spdk_uuid *src);
/**
* Compare the UUID to the NULL value (all bits equal to zero).
*
* \param uuid The UUID to test.
*
* \return true if uuid is equal to the NULL value, false if not.
*/
bool spdk_uuid_is_null(const struct spdk_uuid *uuid);
/**
* Set the value of UUID to the NULL value.
*
* \param uuid The UUID to set.
*/
void spdk_uuid_set_null(struct spdk_uuid *uuid);
#ifdef __cplusplus
}
#endif

View File

@ -16,9 +16,6 @@
/* Default size of blobstore cluster */
#define SPDK_LVS_OPTS_CLUSTER_SZ (4 * 1024 * 1024)
/* Creation time string in RFC 3339 format */
#define SPDK_CREATION_TIME_MAX 21 /* 20 characters + null terminator */
/* UUID + '_' + blobid (20 characters for uint64_t).
* Null terminator is already included in SPDK_UUID_STRING_LEN. */
#define SPDK_LVOL_UNIQUE_ID_MAX (SPDK_UUID_STRING_LEN + 1 + 20)
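The sizing comment above can be checked directly: a UUID string is 36 characters (37 with the null terminator, which SPDK_UUID_STRING_LEN already counts), and a uint64_t blobid prints in at most 20 decimal digits. A self-contained sketch with local constants standing in for the SPDK macros:

```c
#include <stdio.h>

/* Stand-ins for the SPDK macros: 36 UUID characters + NUL, then
 * '_' and up to 20 digits for a uint64_t blobid. */
#define UUID_STRING_LEN  37
#define UNIQUE_ID_MAX    (UUID_STRING_LEN + 1 + 20)

/* Build "<uuid>_<blobid>" into buf; returns the formatted length.
 * The UUID literal here is an arbitrary example value. */
static int
format_unique_id(char *buf, unsigned long long blobid)
{
	return snprintf(buf, UNIQUE_ID_MAX,
			"%s_%llu", "123e4567-e89b-12d3-a456-426614174000", blobid);
}
```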
@ -49,13 +46,6 @@ struct spdk_lvol_req {
char name[SPDK_LVOL_NAME_MAX];
};
struct spdk_lvol_copy_req {
spdk_lvol_op_complete cb_fn;
void *cb_arg;
struct spdk_lvol *lvol;
struct spdk_bs_dev *ext_dev;
};
struct spdk_lvs_with_handle_req {
spdk_lvs_op_with_handle_complete cb_fn;
void *cb_arg;
@ -109,7 +99,6 @@ struct spdk_lvol {
char name[SPDK_LVOL_NAME_MAX];
struct spdk_uuid uuid;
char uuid_str[SPDK_UUID_STRING_LEN];
char creation_time[SPDK_CREATION_TIME_MAX];
struct spdk_bdev *bdev;
int ref_count;
bool action_in_progress;
@ -119,30 +108,6 @@ struct spdk_lvol {
TAILQ_ENTRY(spdk_lvol) degraded_link;
};
struct spdk_fragmap {
struct spdk_bit_array *map;
uint64_t cluster_size;
uint64_t block_size;
uint64_t num_clusters;
uint64_t num_allocated_clusters;
};
struct spdk_fragmap_req {
struct spdk_bdev *bdev;
struct spdk_bdev_desc *bdev_desc;
struct spdk_io_channel *bdev_io_channel;
struct spdk_fragmap fragmap;
uint64_t offset;
uint64_t size;
uint64_t current_offset;
spdk_lvol_op_with_fragmap_handle_complete cb_fn;
void *cb_arg;
};
struct lvol_store_bdev *vbdev_lvol_store_first(void);
struct lvol_store_bdev *vbdev_lvol_store_next(struct lvol_store_bdev *prev);

View File

@ -4569,12 +4569,6 @@ spdk_bdev_get_aliases(const struct spdk_bdev *bdev)
return &bdev->aliases;
}
const char *
spdk_bdev_get_creation_time(const struct spdk_bdev *bdev)
{
return bdev->creation_time;
}
uint32_t
spdk_bdev_get_block_size(const struct spdk_bdev *bdev)
{
@ -7433,7 +7427,7 @@ bdev_register(struct spdk_bdev *bdev)
/* UUID may be specified by the user or defined by bdev itself.
* Otherwise it will be generated here, so this field will never be empty. */
if (spdk_uuid_is_null(&bdev->uuid)) {
if (spdk_mem_all_zero(&bdev->uuid, sizeof(bdev->uuid))) {
spdk_uuid_generate(&bdev->uuid);
}

View File

@ -664,7 +664,6 @@ rpc_dump_bdev_info(void *ctx, struct spdk_bdev *bdev)
uint64_t qos_limits[SPDK_BDEV_QOS_NUM_RATE_LIMIT_TYPES];
struct spdk_memory_domain **domains;
char uuid_str[SPDK_UUID_STRING_LEN];
const char *creation_time_str;
int i, rc;
spdk_json_write_object_begin(w);
@ -688,12 +687,6 @@ rpc_dump_bdev_info(void *ctx, struct spdk_bdev *bdev)
spdk_uuid_fmt_lower(uuid_str, sizeof(uuid_str), &bdev->uuid);
spdk_json_write_named_string(w, "uuid", uuid_str);
creation_time_str = spdk_bdev_get_creation_time(bdev);
if (creation_time_str == NULL) {
creation_time_str = "";
}
spdk_json_write_named_string(w, "creation_time", creation_time_str);
if (spdk_bdev_get_md_size(bdev) != 0) {
spdk_json_write_named_uint32(w, "md_size", spdk_bdev_get_md_size(bdev));
spdk_json_write_named_bool(w, "md_interleave", spdk_bdev_is_md_interleaved(bdev));

View File

@ -40,8 +40,6 @@ static int blob_remove_xattr(struct spdk_blob *blob, const char *name, bool inte
static void blob_write_extent_page(struct spdk_blob *blob, uint32_t extent, uint64_t cluster_num,
struct spdk_blob_md_page *page, spdk_blob_op_complete cb_fn, void *cb_arg);
static void bs_shallow_copy_cluster_find_next(void *cb_arg, int bserrno);
/*
* External snapshots require a channel per thread per esnap bdev. The tree
* is populated lazily as blob IOs are handled by the back_bs_dev. When this
@ -304,7 +302,6 @@ blob_alloc(struct spdk_blob_store *bs, spdk_blob_id id)
blob->parent_id = SPDK_BLOBID_INVALID;
blob->state = SPDK_BLOB_STATE_DIRTY;
blob->u.shallow_copy.bserrno = 1;
blob->extent_rle_found = false;
blob->extent_table_found = false;
blob->active.num_pages = 1;
@ -5882,30 +5879,6 @@ spdk_blob_get_next_unallocated_io_unit(struct spdk_blob *blob, uint64_t offset)
return blob_find_io_unit(blob, offset, false);
}
uint64_t
spdk_blob_get_shallow_copy_copied_clusters(struct spdk_blob *blob)
{
assert(blob != NULL);
return blob->u.shallow_copy.copied_clusters_number;
}
uint64_t
spdk_blob_get_shallow_copy_total_clusters(struct spdk_blob *blob)
{
assert(blob != NULL);
return blob->u.shallow_copy.num_clusters_to_copy;
}
int
spdk_blob_get_shallow_copy_result(struct spdk_blob *blob)
{
assert(blob != NULL);
return blob->u.shallow_copy.bserrno;
}
/* START spdk_bs_create_blob */
static void
@ -6946,234 +6919,6 @@ spdk_bs_blob_decouple_parent(struct spdk_blob_store *bs, struct spdk_io_channel
}
/* END spdk_bs_inflate_blob */
/* START spdk_bs_blob_shallow_copy */
struct shallow_copy_ctx {
struct spdk_bs_cpl cpl;
int bserrno;
/* Blob source for copy */
struct spdk_blob *blob;
struct spdk_io_channel *blob_channel;
/* Destination device for copy */
struct spdk_bs_dev *ext_dev;
struct spdk_io_channel *ext_channel;
/* Current cluster for copy operation */
uint64_t cluster;
/* Buffer for blob reading */
uint8_t *read_buff;
/* Struct for external device writing */
struct spdk_bs_dev_cb_args ext_args;
};
static void
bs_shallow_copy_cleanup_finish(void *cb_arg, int bserrno)
{
struct shallow_copy_ctx *ctx = cb_arg;
struct spdk_blob *_blob = ctx->blob;
struct spdk_bs_cpl *cpl = &ctx->cpl;
if (bserrno != 0) {
if (ctx->bserrno == 0) {
SPDK_ERRLOG("Shallow copy cleanup error %d\n", bserrno);
ctx->bserrno = bserrno;
}
}
_blob->u.shallow_copy.bserrno = ctx->bserrno;
ctx->ext_dev->destroy_channel(ctx->ext_dev, ctx->ext_channel);
spdk_free(ctx->read_buff);
cpl->u.blob_basic.cb_fn(cpl->u.blob_basic.cb_arg, ctx->bserrno);
free(ctx);
}
static void
bs_shallow_copy_bdev_write_cpl(struct spdk_io_channel *channel, void *cb_arg, int bserrno)
{
struct shallow_copy_ctx *ctx = (struct shallow_copy_ctx *)cb_arg;
struct spdk_blob *_blob = ctx->blob;
if (bserrno != 0) {
SPDK_ERRLOG("Shallow copy ext dev write error %d\n", bserrno);
ctx->bserrno = bserrno;
_blob->u.shallow_copy.bserrno = bserrno;
_blob->locked_operation_in_progress = false;
spdk_blob_close(_blob, bs_shallow_copy_cleanup_finish, ctx);
return;
}
ctx->cluster++;
_blob->u.shallow_copy.copied_clusters_number++;
bs_shallow_copy_cluster_find_next(ctx, 0);
}
static void
bs_shallow_copy_blob_read_cpl(void *cb_arg, int bserrno)
{
struct shallow_copy_ctx *ctx = (struct shallow_copy_ctx *)cb_arg;
struct spdk_bs_dev *ext_dev = ctx->ext_dev;
struct spdk_blob *_blob = ctx->blob;
if (bserrno != 0) {
SPDK_ERRLOG("Shallow copy blob read error %d\n", bserrno);
ctx->bserrno = bserrno;
_blob->u.shallow_copy.bserrno = bserrno;
_blob->locked_operation_in_progress = false;
spdk_blob_close(_blob, bs_shallow_copy_cleanup_finish, ctx);
return;
}
ctx->ext_args.channel = ctx->ext_channel;
ctx->ext_args.cb_fn = bs_shallow_copy_bdev_write_cpl;
ctx->ext_args.cb_arg = ctx;
ext_dev->write(ext_dev, ctx->ext_channel, ctx->read_buff,
bs_cluster_to_lba(_blob->bs, ctx->cluster),
bs_dev_byte_to_lba(_blob->bs->dev, _blob->bs->cluster_sz),
&ctx->ext_args);
}
static void
bs_shallow_copy_cluster_find_next(void *cb_arg, int bserrno)
{
struct shallow_copy_ctx *ctx = (struct shallow_copy_ctx *)cb_arg;
struct spdk_blob *_blob = ctx->blob;
if (bserrno != 0) {
SPDK_ERRLOG("Shallow copy bdev write error %d\n", bserrno);
ctx->bserrno = bserrno;
_blob->u.shallow_copy.bserrno = bserrno;
_blob->locked_operation_in_progress = false;
spdk_blob_close(_blob, bs_shallow_copy_cleanup_finish, ctx);
return;
}
while (ctx->cluster < _blob->active.num_clusters) {
if (_blob->active.clusters[ctx->cluster] != 0) {
break;
}
ctx->cluster++;
}
if (ctx->cluster < _blob->active.num_clusters) {
blob_request_submit_op_single(ctx->blob_channel, _blob, ctx->read_buff,
bs_cluster_to_lba(_blob->bs, ctx->cluster),
bs_dev_byte_to_lba(_blob->bs->dev, _blob->bs->cluster_sz),
bs_shallow_copy_blob_read_cpl, ctx, SPDK_BLOB_READ);
} else {
_blob->locked_operation_in_progress = false;
spdk_blob_close(_blob, bs_shallow_copy_cleanup_finish, ctx);
}
}
static void
bs_shallow_copy_blob_open_cpl(void *cb_arg, struct spdk_blob *_blob, int bserrno)
{
struct shallow_copy_ctx *ctx = (struct shallow_copy_ctx *)cb_arg;
struct spdk_bs_dev *ext_dev = ctx->ext_dev;
uint32_t blob_block_size;
uint64_t blob_total_size;
uint64_t i;
if (bserrno != 0) {
SPDK_ERRLOG("Shallow copy blob open error %d\n", bserrno);
ctx->bserrno = bserrno;
bs_shallow_copy_cleanup_finish(ctx, bserrno);
return;
}
blob_block_size = _blob->bs->dev->blocklen;
blob_total_size = spdk_blob_get_num_clusters(_blob) * spdk_bs_get_cluster_size(_blob->bs);
if (blob_total_size > ext_dev->blockcnt * ext_dev->blocklen) {
SPDK_ERRLOG("external device must have at least blob size\n");
ctx->bserrno = -EINVAL;
spdk_blob_close(_blob, bs_shallow_copy_cleanup_finish, ctx);
return;
}
if (blob_block_size % ext_dev->blocklen != 0) {
SPDK_ERRLOG("external device block size is not compatible with blobstore block size\n");
ctx->bserrno = -EINVAL;
spdk_blob_close(_blob, bs_shallow_copy_cleanup_finish, ctx);
return;
}
ctx->blob = _blob;
if (_blob->locked_operation_in_progress) {
SPDK_DEBUGLOG(blob, "Cannot make a shallow copy of blob - another operation in progress\n");
ctx->bserrno = -EBUSY;
spdk_blob_close(_blob, bs_shallow_copy_cleanup_finish, ctx);
return;
}
_blob->locked_operation_in_progress = true;
_blob->u.shallow_copy.copied_clusters_number = 0;
_blob->u.shallow_copy.num_clusters_to_copy = 0;
_blob->u.shallow_copy.bserrno = 0;
for (i = 0; i < _blob->active.num_clusters; i++) {
if (_blob->active.clusters[i] != 0) {
_blob->u.shallow_copy.num_clusters_to_copy++;
}
}
ctx->cluster = 0;
bs_shallow_copy_cluster_find_next(ctx, 0);
}
void
spdk_bs_blob_shallow_copy(struct spdk_blob_store *bs, struct spdk_io_channel *channel,
spdk_blob_id blobid, struct spdk_bs_dev *ext_dev,
spdk_blob_op_complete cb_fn, void *cb_arg)
{
struct shallow_copy_ctx *ctx;
struct spdk_io_channel *ext_channel;
ctx = calloc(1, sizeof(*ctx));
if (!ctx) {
cb_fn(cb_arg, -ENOMEM);
return;
}
ctx->cpl.type = SPDK_BS_CPL_TYPE_BLOB_BASIC;
ctx->cpl.u.bs_basic.cb_fn = cb_fn;
ctx->cpl.u.bs_basic.cb_arg = cb_arg;
ctx->bserrno = 0;
ctx->blob_channel = channel;
ctx->read_buff = spdk_malloc(bs->cluster_sz, bs->dev->blocklen, NULL,
SPDK_ENV_LCORE_ID_ANY, SPDK_MALLOC_DMA);
if (!ctx->read_buff) {
free(ctx);
cb_fn(cb_arg, -ENOMEM);
return;
}
ext_channel = ext_dev->create_channel(ext_dev);
if (!ext_channel) {
spdk_free(ctx->read_buff);
free(ctx);
cb_fn(cb_arg, -ENOMEM);
return;
}
ctx->ext_dev = ext_dev;
ctx->ext_channel = ext_channel;
spdk_bs_open_blob(bs, blobid, bs_shallow_copy_blob_open_cpl, ctx);
}
/* END spdk_bs_blob_shallow_copy */
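Before any data moves, bs_shallow_copy_blob_open_cpl() above pre-counts the clusters to copy by scanning the blob's active cluster table for non-zero entries; a zero entry means the cluster is unallocated (a thin-provisioned hole) and is skipped by the copy loop. That counting step in isolation, as a hypothetical standalone helper:

```c
#include <stdint.h>

/* Count non-zero entries in a blob's cluster table; only these clusters
 * are actually written to the external device by the shallow copy. */
static uint64_t
count_allocated_clusters(const uint64_t *clusters, uint64_t num_clusters)
{
	uint64_t i, n = 0;

	for (i = 0; i < num_clusters; i++) {
		if (clusters[i] != 0) {
			n++;	/* allocated: will be copied */
		}
	}
	return n;
}
```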
/* START spdk_blob_resize */
struct spdk_bs_resize_ctx {
spdk_blob_op_complete cb_fn;

View File

@ -148,14 +148,6 @@ struct spdk_blob {
/* Number of data clusters retrieved from extent table,
* that many have to be read from extent pages. */
uint64_t remaining_clusters_in_et;
union {
struct {
uint64_t num_clusters_to_copy;
uint64_t copied_clusters_number;
int bserrno;
} shallow_copy;
} u;
};
struct spdk_blob_store {

View File

@ -22,9 +22,6 @@
spdk_blob_get_num_clusters;
spdk_blob_get_next_allocated_io_unit;
spdk_blob_get_next_unallocated_io_unit;
spdk_blob_get_shallow_copy_copied_clusters;
spdk_blob_get_shallow_copy_total_clusters;
spdk_blob_get_shallow_copy_result;
spdk_blob_opts_init;
spdk_bs_create_blob_ext;
spdk_bs_create_blob;
@ -41,7 +38,6 @@
spdk_bs_delete_blob;
spdk_bs_inflate_blob;
spdk_bs_blob_decouple_parent;
spdk_bs_blob_shallow_copy;
spdk_blob_open_opts_init;
spdk_bs_open_blob;
spdk_bs_open_blob_ext;

View File

@ -393,9 +393,6 @@ write_string_or_name(struct spdk_json_write_ctx *w, const char *val, size_t len)
{
const uint8_t *p = val;
const uint8_t *end = val + len;
bool failed = false;
int retval;
if (emit(w, "\"", 1)) { return fail(w); }
@ -418,25 +415,14 @@ write_string_or_name(struct spdk_json_write_ctx *w, const char *val, size_t len)
codepoint = utf8_decode_unsafe_4(p);
break;
default:
failed = true;
break;
}
if (failed) {
break;
return fail(w);
}
if (write_codepoint(w, codepoint)) { return fail(w); }
p += codepoint_len;
}
/* Always append "\"" at the end of the string */
retval = emit(w, "\"", 1);
if (failed) {
return fail(w);
}
return retval;
return emit(w, "\"", 1);
}
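The change above replaces a deferred `failed` flag with an immediate `return fail(w)`, so an invalid UTF-8 lead byte aborts the loop at the point of detection. The same early-return style in miniature, on a hypothetical lead-byte classifier (not the SPDK implementation):

```c
#include <stddef.h>

/* Return the UTF-8 sequence length implied by a lead byte, or 0 if the
 * byte cannot start a valid sequence: the failure case returns
 * immediately instead of setting a flag for later. */
static size_t
utf8_lead_len(unsigned char c)
{
	if (c < 0x80) {
		return 1;	/* ASCII */
	}
	if ((c & 0xE0) == 0xC0) {
		return 2;
	}
	if ((c & 0xF0) == 0xE0) {
		return 3;
	}
	if ((c & 0xF8) == 0xF0) {
		return 4;
	}
	return 0;	/* continuation or invalid lead byte */
}
```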
static int

View File

@ -16,7 +16,6 @@
#define SPDK_LVOL_BLOB_OPTS_CHANNEL_OPS 512
#define LVOL_NAME "name"
#define LVOL_CREATION_TIME "creation_time"
SPDK_LOG_REGISTER_COMPONENT(lvol)
@ -115,7 +114,6 @@ lvol_alloc(struct spdk_lvol_store *lvs, const char *name, bool thin_provision,
spdk_uuid_generate(&lvol->uuid);
spdk_uuid_fmt_lower(lvol->uuid_str, sizeof(lvol->uuid_str), &lvol->uuid);
spdk_uuid_fmt_lower(lvol->unique_id, sizeof(lvol->uuid_str), &lvol->uuid);
spdk_current_utc_time_rfc3339(lvol->creation_time, sizeof(lvol->creation_time));
TAILQ_INSERT_TAIL(&lvs->pending_lvols, lvol, link);
@ -257,11 +255,11 @@ load_next_lvol(void *cb_arg, struct spdk_blob *blob, int lvolerrno)
if (rc != 0 || value_len != SPDK_UUID_STRING_LEN || attr[SPDK_UUID_STRING_LEN - 1] != '\0' ||
spdk_uuid_parse(&lvol->uuid, attr) != 0) {
SPDK_INFOLOG(lvol, "Missing or corrupt lvol uuid\n");
spdk_uuid_set_null(&lvol->uuid);
memset(&lvol->uuid, 0, sizeof(lvol->uuid));
}
spdk_uuid_fmt_lower(lvol->uuid_str, sizeof(lvol->uuid_str), &lvol->uuid);
if (!spdk_uuid_is_null(&lvol->uuid)) {
if (!spdk_mem_all_zero(&lvol->uuid, sizeof(lvol->uuid))) {
snprintf(lvol->unique_id, sizeof(lvol->unique_id), "%s", lvol->uuid_str);
} else {
spdk_uuid_fmt_lower(lvol->unique_id, sizeof(lvol->unique_id), &lvol->lvol_store->uuid);
@ -280,11 +278,6 @@ load_next_lvol(void *cb_arg, struct spdk_blob *blob, int lvolerrno)
snprintf(lvol->name, sizeof(lvol->name), "%s", attr);
rc = spdk_blob_get_xattr_value(blob, "creation_time", (const void **)&attr, &value_len);
if (rc == 0 && value_len <= SPDK_CREATION_TIME_MAX) {
snprintf(lvol->creation_time, sizeof(lvol->creation_time), "%s", attr);
}
TAILQ_INSERT_TAIL(&lvs->lvols, lvol, link);
lvs->lvol_count++;
@ -659,7 +652,7 @@ lvs_opts_copy(const struct spdk_lvs_opts *src, struct spdk_lvs_opts *dst)
/* You should not remove this statement, but need to update the assert statement
* if you add a new field, and also add a corresponding SET_FIELD statement */
SPDK_STATIC_ASSERT(sizeof(struct spdk_lvs_opts) == 280, "Incorrect size");
SPDK_STATIC_ASSERT(sizeof(struct spdk_lvs_opts) == 88, "Incorrect size");
#undef FIELD_OK
#undef SET_FIELD
@ -1146,12 +1139,6 @@ lvol_get_xattr_value(void *xattr_ctx, const char *name,
*value_len = sizeof(lvol->uuid_str);
return;
}
if (!strcmp(LVOL_CREATION_TIME, name)) {
*value = lvol->creation_time;
*value_len = sizeof(lvol->creation_time);
return;
}
*value = NULL;
*value_len = 0;
}
@ -1197,7 +1184,7 @@ spdk_lvol_create(struct spdk_lvol_store *lvs, const char *name, uint64_t sz,
struct spdk_blob_store *bs;
struct spdk_lvol *lvol;
struct spdk_blob_opts opts;
char *xattr_names[] = {LVOL_NAME, "uuid", LVOL_CREATION_TIME};
char *xattr_names[] = {LVOL_NAME, "uuid"};
int rc;
if (lvs == NULL) {
@ -1252,7 +1239,7 @@ spdk_lvol_create_esnap_clone(const void *esnap_id, uint32_t id_len, uint64_t siz
struct spdk_lvol *lvol;
struct spdk_blob_opts opts;
uint64_t cluster_sz;
char *xattr_names[] = {LVOL_NAME, "uuid", LVOL_CREATION_TIME};
char *xattr_names[] = {LVOL_NAME, "uuid"};
int rc;
if (lvs == NULL) {
@ -1316,7 +1303,7 @@ spdk_lvol_create_snapshot(struct spdk_lvol *origlvol, const char *snapshot_name,
struct spdk_blob *origblob;
struct spdk_lvol_with_handle_req *req;
struct spdk_blob_xattr_opts snapshot_xattrs;
char *xattr_names[] = {LVOL_NAME, "uuid", LVOL_CREATION_TIME};
char *xattr_names[] = {LVOL_NAME, "uuid"};
int rc;
if (origlvol == NULL) {
@ -1377,7 +1364,7 @@ spdk_lvol_create_clone(struct spdk_lvol *origlvol, const char *clone_name,
struct spdk_lvol_store *lvs;
struct spdk_blob *origblob;
struct spdk_blob_xattr_opts clone_xattrs;
char *xattr_names[] = {LVOL_NAME, "uuid", LVOL_CREATION_TIME};
char *xattr_names[] = {LVOL_NAME, "uuid"};
int rc;
if (origlvol == NULL) {
@ -1562,51 +1549,6 @@ spdk_lvol_rename(struct spdk_lvol *lvol, const char *new_name,
spdk_blob_sync_md(blob, lvol_rename_cb, req);
}
static void
lvol_set_xattr_cb(void *cb_arg, int lvolerrno)
{
struct spdk_lvol_req *req = cb_arg;
req->cb_fn(req->cb_arg, lvolerrno);
free(req);
}
void
spdk_lvol_set_xattr(struct spdk_lvol *lvol, const char *name, const char *value,
spdk_lvol_op_complete cb_fn, void *cb_arg)
{
struct spdk_blob *blob = lvol->blob;
struct spdk_lvol_req *req;
int rc;
req = calloc(1, sizeof(*req));
if (!req) {
SPDK_ERRLOG("Cannot alloc memory for lvol request pointer\n");
cb_fn(cb_arg, -ENOMEM);
return;
}
req->cb_fn = cb_fn;
req->cb_arg = cb_arg;
rc = spdk_blob_set_xattr(blob, name, value, strlen(value) + 1);
if (rc < 0) {
free(req);
cb_fn(cb_arg, rc);
return;
}
spdk_blob_sync_md(blob, lvol_set_xattr_cb, req);
}
int
spdk_lvol_get_xattr(struct spdk_lvol *lvol, const char *name,
const void **value, size_t *value_len)
{
struct spdk_blob *blob = lvol->blob;
return spdk_blob_get_xattr_value(blob, name, value, value_len);
}
void
spdk_lvol_destroy(struct spdk_lvol *lvol, spdk_lvol_op_complete cb_fn, void *cb_arg)
{
@ -2266,78 +2208,3 @@ spdk_lvol_is_degraded(const struct spdk_lvol *lvol)
}
return spdk_blob_is_degraded(blob);
}
static void
lvol_shallow_copy_cb(void *cb_arg, int lvolerrno)
{
struct spdk_lvol_req *req = cb_arg;
spdk_bs_free_io_channel(req->channel);
if (lvolerrno < 0) {
SPDK_ERRLOG("Could not make a shallow copy of lvol\n");
}
req->cb_fn(req->cb_arg, lvolerrno);
free(req);
}
void
spdk_lvol_shallow_copy(struct spdk_lvol *lvol, struct spdk_bs_dev *ext_dev,
spdk_lvol_op_complete cb_fn, void *cb_arg)
{
struct spdk_lvol_req *req;
spdk_blob_id blob_id;
uint64_t lvol_total_size;
assert(cb_fn != NULL);
if (lvol == NULL) {
SPDK_ERRLOG("lvol does not exist\n");
cb_fn(cb_arg, -ENODEV);
return;
}
if (ext_dev == NULL) {
SPDK_ERRLOG("External device does not exist\n");
cb_fn(cb_arg, -ENODEV);
return;
}
if (!spdk_blob_is_read_only(lvol->blob)) {
SPDK_ERRLOG("lvol must be read only\n");
cb_fn(cb_arg, -EPERM);
return;
}
lvol_total_size = spdk_blob_get_num_clusters(lvol->blob) *
spdk_bs_get_cluster_size(lvol->lvol_store->blobstore);
if (lvol_total_size > ext_dev->blockcnt * ext_dev->blocklen) {
SPDK_ERRLOG("bdev must have at least lvol size\n");
cb_fn(cb_arg, -EFBIG);
return;
}
req = calloc(1, sizeof(*req));
if (!req) {
SPDK_ERRLOG("Cannot alloc memory for lvol request pointer\n");
cb_fn(cb_arg, -ENOMEM);
return;
}
req->cb_fn = cb_fn;
req->cb_arg = cb_arg;
req->channel = spdk_bs_alloc_io_channel(lvol->lvol_store->blobstore);
if (req->channel == NULL) {
SPDK_ERRLOG("Cannot alloc io channel for lvol shallow copy request\n");
free(req);
cb_fn(cb_arg, -ENOMEM);
return;
}
blob_id = spdk_blob_get_id(lvol->blob);
spdk_bs_blob_shallow_copy(lvol->lvol_store->blobstore, req->channel, blob_id, ext_dev,
lvol_shallow_copy_cb, req);
}

View File

@ -26,7 +26,6 @@
spdk_lvol_get_by_uuid;
spdk_lvol_get_by_names;
spdk_lvol_is_degraded;
spdk_lvol_shallow_copy;
# internal functions
spdk_lvol_resize;

View File

@ -544,7 +544,7 @@ nvmf_write_subsystem_config_json(struct spdk_json_write_ctx *w,
spdk_json_write_named_string_fmt(w, "eui64", "%016"PRIX64, from_be64(&ns_opts.eui64));
}
if (!spdk_uuid_is_null(&ns_opts.uuid)) {
if (!spdk_mem_all_zero(&ns_opts.uuid, sizeof(ns_opts.uuid))) {
spdk_uuid_fmt_lower(uuid_str, sizeof(uuid_str), &ns_opts.uuid);
spdk_json_write_named_string(w, "uuid", uuid_str);
}

View File

@ -259,7 +259,7 @@ dump_nvmf_subsystem(struct spdk_json_write_ctx *w, struct spdk_nvmf_subsystem *s
json_write_hex_str(w, ns_opts.eui64, sizeof(ns_opts.eui64));
}
if (!spdk_uuid_is_null(&ns_opts.uuid)) {
if (!spdk_mem_all_zero(&ns_opts.uuid, sizeof(ns_opts.uuid))) {
char uuid_str[SPDK_UUID_STRING_LEN];
spdk_uuid_fmt_lower(uuid_str, sizeof(uuid_str), &ns_opts.uuid);
@ -1221,7 +1221,7 @@ nvmf_rpc_ns_paused(struct spdk_nvmf_subsystem *subsystem,
SPDK_STATIC_ASSERT(sizeof(ns_opts.eui64) == sizeof(ctx->ns_params.eui64), "size mismatch");
memcpy(ns_opts.eui64, ctx->ns_params.eui64, sizeof(ns_opts.eui64));
if (!spdk_uuid_is_null(&ctx->ns_params.uuid)) {
if (!spdk_mem_all_zero(&ctx->ns_params.uuid, sizeof(ctx->ns_params.uuid))) {
ns_opts.uuid = ctx->ns_params.uuid;
}

View File

@ -1529,7 +1529,7 @@ spdk_nvmf_ns_opts_get_defaults(struct spdk_nvmf_ns_opts *opts, size_t opts_size)
memset(opts->eui64, 0, sizeof(opts->eui64));
}
if (FIELD_OK(uuid)) {
spdk_uuid_set_null(&opts->uuid);
memset(&opts->uuid, 0, sizeof(opts->uuid));
}
SET_FIELD(anagrpid, 0);
@ -1558,7 +1558,7 @@ nvmf_ns_opts_copy(struct spdk_nvmf_ns_opts *opts,
memcpy(opts->eui64, user_opts->eui64, sizeof(opts->eui64));
}
if (FIELD_OK(uuid)) {
spdk_uuid_copy(&opts->uuid, &user_opts->uuid);
memcpy(&opts->uuid, &user_opts->uuid, sizeof(opts->uuid));
}
SET_FIELD(anagrpid);
@ -1686,7 +1686,7 @@ spdk_nvmf_subsystem_add_ns_ext(struct spdk_nvmf_subsystem *subsystem, const char
/* Cache the zcopy capability of the bdev device */
ns->zcopy = spdk_bdev_io_type_supported(ns->bdev, SPDK_BDEV_IO_TYPE_ZCOPY);
if (spdk_uuid_is_null(&opts.uuid)) {
if (spdk_mem_all_zero(&opts.uuid, sizeof(opts.uuid))) {
opts.uuid = *spdk_bdev_get_uuid(ns->bdev);
}

View File

@ -621,7 +621,7 @@ spdk_reduce_vol_init(struct spdk_reduce_vol_params *params,
return;
}
if (spdk_uuid_is_null(&params->uuid)) {
if (spdk_mem_all_zero(&params->uuid, sizeof(params->uuid))) {
spdk_uuid_generate(&params->uuid);
}

View File

@ -11,7 +11,6 @@
#include "spdk/likely.h"
#include "spdk/util.h"
#include "spdk/base64.h"
typedef uint64_t spdk_bit_array_word;
#define SPDK_BIT_ARRAY_WORD_TZCNT(x) (__builtin_ctzll(x))
@ -492,51 +491,3 @@ spdk_bit_pool_free_all_bits(struct spdk_bit_pool *pool)
pool->lowest_free_bit = 0;
pool->free_count = spdk_bit_array_capacity(pool->array);
}
static int
bits_to_bytes(const int bits)
{
return ((bits + 7) >> 3);
}
char *
spdk_bit_array_to_base64_string(const struct spdk_bit_array *array)
{
uint32_t bit_count = spdk_bit_array_capacity(array);
size_t byte_count = bits_to_bytes(bit_count);
void *bytes;
char *encoded;
size_t total_size;
int rc;
bytes = calloc(byte_count, sizeof(char));
if (bytes == NULL) {
return NULL;
}
for (uint32_t i = 0; i < bit_count; i++) {
if (spdk_bit_array_get(array, i)) {
/*
* Set the corresponding bit in the byte buffer
*/
((uint8_t *)bytes)[i / 8] |= 1 << (i % 8);
}
}
total_size = spdk_base64_get_encoded_strlen(byte_count) + 1;
encoded = calloc(total_size, sizeof(char));
if (encoded == NULL) {
free(bytes);
return NULL;
}
rc = spdk_base64_encode(encoded, bytes, byte_count);
if (rc != 0) {
free(bytes);
free(encoded);
return NULL;
}
free(bytes);
return encoded;
}
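The removed encoder above works in two stages: round the bit count up to whole bytes, then pack bit i LSB-first into byte i / 8 before handing the buffer to the base64 encoder. The packing stage in isolation (hypothetical names, base64 step omitted):

```c
#include <stddef.h>
#include <stdint.h>

/* Round a bit count up to whole bytes, as bits_to_bytes() above does:
 * adding 7 before shifting right by 3 implements ceil(bits / 8). */
static size_t
bits_to_whole_bytes(size_t bits)
{
	return (bits + 7) >> 3;
}

/* Pack 0/1 flags LSB-first into a byte buffer, mirroring the layout
 * the removed encoder builds: bit i lands in byte i / 8 at bit
 * position i % 8. The output buffer is zeroed first. */
static void
pack_bits(const uint8_t *flags, size_t bit_count, uint8_t *bytes)
{
	size_t i;

	for (i = 0; i < bits_to_whole_bytes(bit_count); i++) {
		bytes[i] = 0;
	}
	for (i = 0; i < bit_count; i++) {
		if (flags[i]) {
			bytes[i / 8] |= (uint8_t)(1u << (i % 8));
		}
	}
}
```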

View File

@ -155,8 +155,6 @@
spdk_uuid_generate;
spdk_uuid_generate_sha1;
spdk_uuid_copy;
spdk_uuid_is_null;
spdk_uuid_set_null;
# public functions in fd_group.h
spdk_fd_group_create;

View File

@ -52,18 +52,6 @@ spdk_uuid_copy(struct spdk_uuid *dst, const struct spdk_uuid *src)
uuid_copy((void *)dst, (void *)src);
}
bool
spdk_uuid_is_null(const struct spdk_uuid *uuid)
{
return uuid_is_null((void *)uuid);
}
void
spdk_uuid_set_null(struct spdk_uuid *uuid)
{
uuid_clear((void *)uuid);
}
#else
#include <uuid.h>
@ -120,18 +108,6 @@ spdk_uuid_copy(struct spdk_uuid *dst, const struct spdk_uuid *src)
memcpy(dst, src, sizeof(*dst));
}
bool
spdk_uuid_is_null(const struct spdk_uuid *uuid)
{
return uuid_is_nil((const uuid_t *)uuid, NULL);
}
void
spdk_uuid_set_null(struct spdk_uuid *uuid)
{
uuid_create_nil((uuid_t *)uuid, NULL);
}
#endif
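With spdk_uuid_is_null() removed, the call sites in this diff fall back to spdk_mem_all_zero(), which reduces the nil-UUID test to a byte-wise zero scan. A minimal sketch of that substitute check (hypothetical name, assuming a 16-byte UUID representation):

```c
#include <stdbool.h>
#include <stddef.h>

/* Byte-wise all-zero check in the spirit of spdk_mem_all_zero(): a UUID
 * is "null" (the nil UUID) exactly when every one of its bytes is zero. */
static bool
mem_all_zero(const void *data, size_t size)
{
	const unsigned char *p = data;
	size_t i;

	for (i = 0; i < size; i++) {
		if (p[i] != 0) {
			return false;
		}
	}
	return true;
}
```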
int

View File

@ -145,7 +145,7 @@ DEPDIRS-bdev_null := $(BDEV_DEPS_THREAD)
DEPDIRS-bdev_nvme = $(BDEV_DEPS_THREAD) accel nvme trace
DEPDIRS-bdev_ocf := $(BDEV_DEPS_THREAD)
DEPDIRS-bdev_passthru := $(BDEV_DEPS_THREAD)
DEPDIRS-bdev_raid := $(BDEV_DEPS_THREAD) accel
DEPDIRS-bdev_raid := $(BDEV_DEPS_THREAD)
DEPDIRS-bdev_rbd := $(BDEV_DEPS_THREAD)
DEPDIRS-bdev_uring := $(BDEV_DEPS_THREAD)
DEPDIRS-bdev_virtio := $(BDEV_DEPS_THREAD) virtio

View File

@ -93,7 +93,7 @@ rpc_bdev_ftl_create(struct spdk_jsonrpc_request *request,
goto out;
}
if (spdk_uuid_is_null(&conf.uuid)) {
if (spdk_mem_all_zero(&conf.uuid, sizeof(conf.uuid))) {
conf.mode |= SPDK_FTL_MODE_CREATE;
}

View File

@ -11,8 +11,6 @@
#include "spdk/string.h"
#include "spdk/uuid.h"
#include "spdk/blob.h"
#include "spdk/bit_array.h"
#include "spdk/base64.h"
#include "vbdev_lvol.h"
@ -1144,7 +1142,6 @@ _create_lvol_disk(struct spdk_lvol *lvol, bool destroy)
assert((total_size % bdev->blocklen) == 0);
bdev->blockcnt = total_size / bdev->blocklen;
bdev->uuid = lvol->uuid;
bdev->creation_time = lvol->creation_time;
bdev->required_alignment = lvs_bdev->bdev->required_alignment;
bdev->split_on_optimal_io_boundary = true;
bdev->optimal_io_boundary = spdk_bs_get_cluster_size(lvol->lvol_store->blobstore) / bdev->blocklen;
@ -1360,43 +1357,6 @@ vbdev_lvol_rename(struct spdk_lvol *lvol, const char *new_lvol_name,
spdk_lvol_rename(lvol, new_lvol_name, _vbdev_lvol_rename_cb, req);
}
static void
_vbdev_lvol_set_xattr_cb(void *cb_arg, int lvolerrno)
{
struct spdk_lvol_req *req = cb_arg;
if (lvolerrno != 0) {
SPDK_ERRLOG("Setting xattr failed\n");
}
req->cb_fn(req->cb_arg, lvolerrno);
free(req);
}
void
vbdev_lvol_set_xattr(struct spdk_lvol *lvol, const char *name,
const char *value, spdk_lvol_op_complete cb_fn, void *cb_arg)
{
struct spdk_lvol_req *req;
req = calloc(1, sizeof(*req));
if (req == NULL) {
cb_fn(cb_arg, -ENOMEM);
return;
}
req->cb_fn = cb_fn;
req->cb_arg = cb_arg;
spdk_lvol_set_xattr(lvol, name, value, _vbdev_lvol_set_xattr_cb, req);
}
int
vbdev_lvol_get_xattr(struct spdk_lvol *lvol, const char *name,
const void **value, size_t *value_len)
{
return spdk_lvol_get_xattr(lvol, name, value, value_len);
}
static void
_vbdev_lvol_resize_cb(void *cb_arg, int lvolerrno)
{
@ -2060,256 +2020,4 @@ fail:
/* End external snapshot support */
static void
_vbdev_lvol_shallow_copy_base_bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
void *event_ctx)
{
}
static void
_vbdev_lvol_shallow_copy_cb(void *cb_arg, int lvolerrno)
{
struct spdk_lvol_copy_req *req = cb_arg;
struct spdk_lvol *lvol = req->lvol;
if (lvolerrno != 0) {
SPDK_ERRLOG("Could not make a shallow copy of bdev lvol %s due to error: %d.\n", lvol->name,
lvolerrno);
}
req->ext_dev->destroy(req->ext_dev);
req->cb_fn(req->cb_arg, lvolerrno);
free(req);
}
void
vbdev_lvol_shallow_copy(struct spdk_lvol *lvol, const char *bdev_name,
spdk_lvol_op_complete cb_fn, void *cb_arg)
{
struct spdk_bs_dev *ext_dev;
struct spdk_lvol_copy_req *req;
int rc;
if (lvol == NULL) {
SPDK_ERRLOG("lvol does not exist\n");
cb_fn(cb_arg, -EINVAL);
return;
}
if (bdev_name == NULL) {
SPDK_ERRLOG("bdev name not provided\n");
cb_fn(cb_arg, -ENODEV);
return;
}
assert(lvol->bdev != NULL);
req = calloc(1, sizeof(*req));
if (req == NULL) {
SPDK_ERRLOG("Cannot alloc memory for vbdev lvol copy request pointer\n");
cb_fn(cb_arg, -ENOMEM);
return;
}
rc = spdk_bdev_create_bs_dev_ext(bdev_name, _vbdev_lvol_shallow_copy_base_bdev_event_cb,
NULL, &ext_dev);
if (rc < 0) {
SPDK_ERRLOG("Cannot create external bdev blob device\n");
free(req);
cb_fn(cb_arg, rc);
return;
}
req->cb_fn = cb_fn;
req->cb_arg = cb_arg;
req->lvol = lvol;
req->ext_dev = ext_dev;
spdk_lvol_shallow_copy(lvol, ext_dev, _vbdev_lvol_shallow_copy_cb, req);
}
static void seek_hole_done_cb(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg);
static void
get_fragmap_done(struct spdk_fragmap_req *req, int error_code, const char *error_msg)
{
req->cb_fn(req->cb_arg, &req->fragmap, error_code);
spdk_bit_array_free(&req->fragmap.map);
spdk_put_io_channel(req->bdev_io_channel);
spdk_bdev_close(req->bdev_desc);
free(req);
}
static void
seek_data_done_cb(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
{
struct spdk_fragmap_req *req = cb_arg;
uint64_t next_data_offset_blocks;
int rc;
next_data_offset_blocks = spdk_bdev_io_get_seek_offset(bdev_io);
spdk_bdev_free_io(bdev_io);
req->current_offset = next_data_offset_blocks * req->fragmap.block_size;
if (next_data_offset_blocks == UINT64_MAX || req->current_offset >= req->offset + req->size) {
get_fragmap_done(req, 0, NULL);
return;
}
rc = spdk_bdev_seek_hole(req->bdev_desc, req->bdev_io_channel,
spdk_divide_round_up(req->current_offset, req->fragmap.block_size),
seek_hole_done_cb, req);
if (rc != 0) {
get_fragmap_done(req, rc, "failed to seek hole");
}
}
static void
seek_hole_done_cb(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
{
struct spdk_fragmap_req *req = cb_arg;
uint64_t next_offset;
uint64_t start_cluster;
uint64_t num_clusters;
int rc;
next_offset = spdk_bdev_io_get_seek_offset(bdev_io) * req->fragmap.block_size;
spdk_bdev_free_io(bdev_io);
next_offset = spdk_min(next_offset, req->offset + req->size);
start_cluster = spdk_divide_round_up(req->current_offset - req->offset, req->fragmap.cluster_size);
num_clusters = spdk_divide_round_up(next_offset - req->current_offset, req->fragmap.cluster_size);
for (uint64_t i = 0; i < num_clusters; i++) {
spdk_bit_array_set(req->fragmap.map, start_cluster + i);
}
req->fragmap.num_allocated_clusters += num_clusters;
req->current_offset = next_offset;
if (req->current_offset == req->offset + req->size) {
get_fragmap_done(req, 0, NULL);
return;
}
rc = spdk_bdev_seek_data(req->bdev_desc, req->bdev_io_channel,
spdk_divide_round_up(req->current_offset, req->fragmap.block_size),
seek_data_done_cb, req);
if (rc != 0) {
get_fragmap_done(req, rc, "failed to seek data");
}
}
static void
dummy_bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
{
}
void
vbdev_lvol_get_fragmap(struct spdk_lvol *lvol, uint64_t offset, uint64_t size,
spdk_lvol_op_with_fragmap_handle_complete cb_fn, void *cb_arg)
{
struct spdk_bdev_desc *desc;
struct spdk_io_channel *channel;
struct spdk_bit_array *fragmap;
struct spdk_fragmap_req *req;
uint64_t cluster_size, num_clusters, block_size, num_blocks, lvol_size, segment_size;
int rc;
/*
* Create a bitmap recording the allocated clusters
*/
cluster_size = spdk_bs_get_cluster_size(lvol->lvol_store->blobstore);
block_size = spdk_bdev_get_block_size(lvol->bdev);
num_blocks = spdk_bdev_get_num_blocks(lvol->bdev);
lvol_size = num_blocks * block_size;
if (offset + size > lvol_size) {
SPDK_ERRLOG("offset %" PRIu64 " and size %" PRIu64 " exceed lvol size %" PRIu64 "\n",
offset, size, lvol_size);
cb_fn(cb_arg, NULL, -EINVAL);
return;
}
segment_size = size;
if (size == 0) {
segment_size = lvol_size;
}
if (!spdk_is_divisible_by(offset, cluster_size) ||
!spdk_is_divisible_by(segment_size, cluster_size)) {
SPDK_ERRLOG("offset %" PRIu64 " and size %" PRIu64 " must be a multiple of cluster size %" PRIu64 "\n",
offset, segment_size, cluster_size);
cb_fn(cb_arg, NULL, -EINVAL);
return;
}
num_clusters = spdk_divide_round_up(segment_size, cluster_size);
fragmap = spdk_bit_array_create(num_clusters);
if (fragmap == NULL) {
SPDK_ERRLOG("failed to allocate fragmap with num_clusters %" PRIu64 "\n", num_clusters);
cb_fn(cb_arg, NULL, -ENOMEM);
return;
}
/*
* Construct a fragmap of the lvol
*/
rc = spdk_bdev_open_ext(lvol->bdev->name, false,
dummy_bdev_event_cb, NULL, &desc);
if (rc != 0) {
spdk_bit_array_free(&fragmap);
SPDK_ERRLOG("could not open bdev %s\n", lvol->bdev->name);
cb_fn(cb_arg, NULL, rc);
return;
}
channel = spdk_bdev_get_io_channel(desc);
if (channel == NULL) {
spdk_bit_array_free(&fragmap);
spdk_bdev_close(desc);
SPDK_ERRLOG("could not allocate I/O channel.\n");
cb_fn(cb_arg, NULL, -ENOMEM);
return;
}
req = calloc(1, sizeof(struct spdk_fragmap_req));
if (req == NULL) {
SPDK_ERRLOG("could not allocate fragmap_io\n");
spdk_put_io_channel(channel);
spdk_bdev_close(desc);
spdk_bit_array_free(&fragmap);
cb_fn(cb_arg, NULL, -ENOMEM);
return;
}
req->bdev = lvol->bdev;
req->bdev_desc = desc;
req->bdev_io_channel = channel;
req->offset = offset;
req->size = segment_size;
req->current_offset = offset;
req->cb_fn = cb_fn;
req->cb_arg = cb_arg;
req->fragmap.map = fragmap;
req->fragmap.num_clusters = num_clusters;
req->fragmap.block_size = block_size;
req->fragmap.cluster_size = cluster_size;
req->fragmap.num_allocated_clusters = 0;
rc = spdk_bdev_seek_data(desc, channel,
spdk_divide_round_up(offset, block_size),
seek_data_done_cb, req);
if (rc != 0) {
SPDK_ERRLOG("failed to seek data\n");
spdk_put_io_channel(channel);
spdk_bdev_close(desc);
spdk_bit_array_free(&fragmap);
free(req);
cb_fn(cb_arg, NULL, rc);
}
}
SPDK_LOG_REGISTER_COMPONENT(vbdev_lvol)


@@ -69,27 +69,6 @@ void vbdev_lvol_set_read_only(struct spdk_lvol *lvol, spdk_lvol_op_complete cb_f
void vbdev_lvol_rename(struct spdk_lvol *lvol, const char *new_lvol_name,
spdk_lvol_op_complete cb_fn, void *cb_arg);
/**
* \brief Set lvol's xattr
* \param lvol Handle to lvol
* \param name xattr name
* \param value xattr value
* \param cb_fn Completion callback
* \param cb_arg Completion callback custom arguments
*/
void vbdev_lvol_set_xattr(struct spdk_lvol *lvol, const char *name,
const char *value, spdk_lvol_op_complete cb_fn, void *cb_arg);
/**
* \brief Get lvol's xattr
* \param lvol Handle to lvol
* \param name Xattr name
* \param value Xattr value
* \param value_len Xattr value length
*/
int vbdev_lvol_get_xattr(struct spdk_lvol *lvol, const char *name,
const void **value, size_t *value_len);
/**
* Destroy a logical volume
* \param lvol Handle to lvol
@@ -146,27 +125,4 @@ int vbdev_lvol_esnap_dev_create(void *bs_ctx, void *blob_ctx, struct spdk_blob *
const void *esnap_id, uint32_t id_len,
struct spdk_bs_dev **_bs_dev);
/**
* \brief Make a shallow copy of lvol over a bdev
*
* \param lvol Handle to lvol
* \param bdev_name Name of the bdev to copy on
* \param cb_fn Completion callback
* \param cb_arg Completion callback custom arguments
*/
void vbdev_lvol_shallow_copy(struct spdk_lvol *lvol, const char *bdev_name,
spdk_lvol_op_complete cb_fn, void *cb_arg);
/**
* @brief Get a fragmap for a specific segment of a logical volume using the provided offset and size
*
* @param lvol Handle to lvol
* @param offset Offset in bytes of the specific segment of the logical volume
* @param size Size in bytes of the specific segment of the logical volume
* @param cb_fn Completion callback
* @param cb_arg Completion callback custom arguments
*/
void vbdev_lvol_get_fragmap(struct spdk_lvol *lvol, uint64_t offset, uint64_t size,
spdk_lvol_op_with_fragmap_handle_complete cb_fn, void *cb_arg);
#endif /* SPDK_VBDEV_LVOL_H */


@@ -10,8 +10,6 @@
#include "vbdev_lvol.h"
#include "spdk/string.h"
#include "spdk/log.h"
#include "spdk/bdev_module.h"
#include "spdk/bit_array.h"
SPDK_LOG_REGISTER_COMPONENT(lvol_rpc)
@@ -701,154 +699,6 @@ cleanup:
SPDK_RPC_REGISTER("bdev_lvol_rename", rpc_bdev_lvol_rename, SPDK_RPC_RUNTIME)
struct rpc_bdev_lvol_set_xattr {
char *name;
char *xattr_name;
char *xattr_value;
};
static void
free_rpc_bdev_lvol_set_xattr(struct rpc_bdev_lvol_set_xattr *req)
{
free(req->name);
free(req->xattr_name);
free(req->xattr_value);
}
static const struct spdk_json_object_decoder rpc_bdev_lvol_set_xattr_decoders[] = {
{"name", offsetof(struct rpc_bdev_lvol_set_xattr, name), spdk_json_decode_string},
{"xattr_name", offsetof(struct rpc_bdev_lvol_set_xattr, xattr_name), spdk_json_decode_string},
{"xattr_value", offsetof(struct rpc_bdev_lvol_set_xattr, xattr_value), spdk_json_decode_string},
};
static void
rpc_bdev_lvol_set_xattr_cb(void *cb_arg, int lvolerrno)
{
struct spdk_jsonrpc_request *request = cb_arg;
if (lvolerrno != 0) {
goto invalid;
}
spdk_jsonrpc_send_bool_response(request, true);
return;
invalid:
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
spdk_strerror(-lvolerrno));
}
static void
rpc_bdev_lvol_set_xattr(struct spdk_jsonrpc_request *request,
const struct spdk_json_val *params)
{
struct rpc_bdev_lvol_set_xattr req = {};
struct spdk_bdev *bdev;
struct spdk_lvol *lvol;
SPDK_INFOLOG(lvol_rpc, "Setting lvol xattr\n");
if (spdk_json_decode_object(params, rpc_bdev_lvol_set_xattr_decoders,
SPDK_COUNTOF(rpc_bdev_lvol_set_xattr_decoders),
&req)) {
SPDK_INFOLOG(lvol_rpc, "spdk_json_decode_object failed\n");
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INTERNAL_ERROR,
"spdk_json_decode_object failed");
goto cleanup;
}
bdev = spdk_bdev_get_by_name(req.name);
if (bdev == NULL) {
SPDK_ERRLOG("bdev '%s' does not exist\n", req.name);
spdk_jsonrpc_send_error_response(request, -ENODEV, spdk_strerror(ENODEV));
goto cleanup;
}
lvol = vbdev_lvol_get_from_bdev(bdev);
if (lvol == NULL) {
SPDK_ERRLOG("lvol does not exist\n");
spdk_jsonrpc_send_error_response(request, -ENODEV, spdk_strerror(ENODEV));
goto cleanup;
}
vbdev_lvol_set_xattr(lvol, req.xattr_name, req.xattr_value, rpc_bdev_lvol_set_xattr_cb, request);
cleanup:
free_rpc_bdev_lvol_set_xattr(&req);
}
SPDK_RPC_REGISTER("bdev_lvol_set_xattr", rpc_bdev_lvol_set_xattr, SPDK_RPC_RUNTIME)
struct rpc_bdev_lvol_get_xattr {
char *name;
char *xattr_name;
};
static void
free_rpc_bdev_lvol_get_xattr(struct rpc_bdev_lvol_get_xattr *req)
{
free(req->name);
free(req->xattr_name);
}
static const struct spdk_json_object_decoder rpc_bdev_lvol_get_xattr_decoders[] = {
{"name", offsetof(struct rpc_bdev_lvol_get_xattr, name), spdk_json_decode_string},
{"xattr_name", offsetof(struct rpc_bdev_lvol_get_xattr, xattr_name), spdk_json_decode_string},
};
static void
rpc_bdev_lvol_get_xattr(struct spdk_jsonrpc_request *request,
const struct spdk_json_val *params)
{
struct rpc_bdev_lvol_get_xattr req = {};
struct spdk_json_write_ctx *w;
struct spdk_bdev *bdev;
struct spdk_lvol *lvol;
const void *xattr_value;
size_t xattr_value_len;
int rc;
SPDK_INFOLOG(lvol_rpc, "Getting lvol xattr\n");
if (spdk_json_decode_object(params, rpc_bdev_lvol_get_xattr_decoders,
SPDK_COUNTOF(rpc_bdev_lvol_get_xattr_decoders),
&req)) {
SPDK_INFOLOG(lvol_rpc, "spdk_json_decode_object failed\n");
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INTERNAL_ERROR,
"spdk_json_decode_object failed");
goto cleanup;
}
bdev = spdk_bdev_get_by_name(req.name);
if (bdev == NULL) {
SPDK_ERRLOG("bdev '%s' does not exist\n", req.name);
spdk_jsonrpc_send_error_response(request, -ENODEV, spdk_strerror(ENODEV));
goto cleanup;
}
lvol = vbdev_lvol_get_from_bdev(bdev);
if (lvol == NULL) {
SPDK_ERRLOG("lvol does not exist\n");
spdk_jsonrpc_send_error_response(request, -ENODEV, spdk_strerror(ENODEV));
goto cleanup;
}
rc = vbdev_lvol_get_xattr(lvol, req.xattr_name, &xattr_value, &xattr_value_len);
if (rc != 0) {
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INTERNAL_ERROR,
spdk_strerror(-rc));
goto cleanup;
}
w = spdk_jsonrpc_begin_result(request);
spdk_json_write_string(w, (const char *)xattr_value);
spdk_jsonrpc_end_result(request, w);
cleanup:
free_rpc_bdev_lvol_get_xattr(&req);
}
SPDK_RPC_REGISTER("bdev_lvol_get_xattr", rpc_bdev_lvol_get_xattr, SPDK_RPC_RUNTIME)
struct rpc_bdev_lvol_inflate {
char *name;
};
@@ -1350,7 +1200,6 @@ rpc_dump_lvol(struct spdk_json_write_ctx *w, struct spdk_lvol *lvol)
spdk_json_write_named_string_fmt(w, "alias", "%s/%s", lvs->name, lvol->name);
spdk_json_write_named_string(w, "uuid", lvol->uuid_str);
spdk_json_write_named_string(w, "name", lvol->name);
spdk_json_write_named_string(w, "creation_time", lvol->creation_time);
spdk_json_write_named_bool(w, "is_thin_provisioned", spdk_blob_is_thin_provisioned(lvol->blob));
spdk_json_write_named_bool(w, "is_snapshot", spdk_blob_is_snapshot(lvol->blob));
spdk_json_write_named_bool(w, "is_clone", spdk_blob_is_clone(lvol->blob));
@@ -1492,260 +1341,3 @@ cleanup:
free_rpc_bdev_lvol_grow_lvstore(&req);
}
SPDK_RPC_REGISTER("bdev_lvol_grow_lvstore", rpc_bdev_lvol_grow_lvstore, SPDK_RPC_RUNTIME)
struct rpc_bdev_lvol_shallow_copy {
char *src_lvol_name;
char *dst_bdev_name;
};
static void
free_rpc_bdev_lvol_shallow_copy(struct rpc_bdev_lvol_shallow_copy *req)
{
free(req->src_lvol_name);
free(req->dst_bdev_name);
}
static const struct spdk_json_object_decoder rpc_bdev_lvol_shallow_copy_decoders[] = {
{"src_lvol_name", offsetof(struct rpc_bdev_lvol_shallow_copy, src_lvol_name), spdk_json_decode_string},
{"dst_bdev_name", offsetof(struct rpc_bdev_lvol_shallow_copy, dst_bdev_name), spdk_json_decode_string},
};
static void
rpc_bdev_lvol_shallow_copy_cb(void *cb_arg, int lvolerrno)
{
struct spdk_jsonrpc_request *request = cb_arg;
if (lvolerrno != 0) {
goto invalid;
}
spdk_jsonrpc_send_bool_response(request, true);
return;
invalid:
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
spdk_strerror(-lvolerrno));
}
static void
rpc_bdev_lvol_shallow_copy(struct spdk_jsonrpc_request *request,
const struct spdk_json_val *params)
{
struct rpc_bdev_lvol_shallow_copy req = {};
struct spdk_lvol *src_lvol;
struct spdk_bdev *src_lvol_bdev;
struct spdk_bdev *dst_bdev;
SPDK_INFOLOG(lvol_rpc, "Shallow copying lvol\n");
if (spdk_json_decode_object(params, rpc_bdev_lvol_shallow_copy_decoders,
SPDK_COUNTOF(rpc_bdev_lvol_shallow_copy_decoders),
&req)) {
SPDK_INFOLOG(lvol_rpc, "spdk_json_decode_object failed\n");
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INTERNAL_ERROR,
"spdk_json_decode_object failed");
goto cleanup;
}
src_lvol_bdev = spdk_bdev_get_by_name(req.src_lvol_name);
if (src_lvol_bdev == NULL) {
SPDK_ERRLOG("lvol bdev '%s' does not exist\n", req.src_lvol_name);
spdk_jsonrpc_send_error_response(request, -ENODEV, spdk_strerror(ENODEV));
goto cleanup;
}
src_lvol = vbdev_lvol_get_from_bdev(src_lvol_bdev);
if (src_lvol == NULL) {
SPDK_ERRLOG("lvol does not exist\n");
spdk_jsonrpc_send_error_response(request, -ENODEV, spdk_strerror(ENODEV));
goto cleanup;
}
dst_bdev = spdk_bdev_get_by_name(req.dst_bdev_name);
if (dst_bdev == NULL) {
SPDK_ERRLOG("bdev '%s' does not exist\n", req.dst_bdev_name);
spdk_jsonrpc_send_error_response(request, -ENODEV, spdk_strerror(ENODEV));
goto cleanup;
}
vbdev_lvol_shallow_copy(src_lvol, req.dst_bdev_name, rpc_bdev_lvol_shallow_copy_cb, request);
cleanup:
free_rpc_bdev_lvol_shallow_copy(&req);
}
SPDK_RPC_REGISTER("bdev_lvol_shallow_copy", rpc_bdev_lvol_shallow_copy, SPDK_RPC_RUNTIME)
struct rpc_bdev_lvol_shallow_copy_status {
char *src_lvol_name;
};
static void
free_rpc_bdev_lvol_shallow_copy_status(struct rpc_bdev_lvol_shallow_copy_status *req)
{
free(req->src_lvol_name);
}
static const struct spdk_json_object_decoder rpc_bdev_lvol_shallow_copy_status_decoders[] = {
{"src_lvol_name", offsetof(struct rpc_bdev_lvol_shallow_copy_status, src_lvol_name), spdk_json_decode_string},
};
static void
rpc_bdev_lvol_shallow_copy_status(struct spdk_jsonrpc_request *request,
const struct spdk_json_val *params)
{
struct rpc_bdev_lvol_shallow_copy_status req = {};
struct spdk_bdev *src_lvol_bdev;
struct spdk_lvol *src_lvol;
struct spdk_json_write_ctx *w;
uint64_t copied_clusters, total_clusters;
int result;
SPDK_INFOLOG(lvol_rpc, "Shallow copy status\n");
if (spdk_json_decode_object(params, rpc_bdev_lvol_shallow_copy_status_decoders,
SPDK_COUNTOF(rpc_bdev_lvol_shallow_copy_status_decoders),
&req)) {
SPDK_INFOLOG(lvol_rpc, "spdk_json_decode_object failed\n");
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INTERNAL_ERROR,
"spdk_json_decode_object failed");
goto cleanup;
}
src_lvol_bdev = spdk_bdev_get_by_name(req.src_lvol_name);
if (src_lvol_bdev == NULL) {
SPDK_ERRLOG("lvol bdev '%s' does not exist\n", req.src_lvol_name);
spdk_jsonrpc_send_error_response(request, -ENODEV, spdk_strerror(ENODEV));
goto cleanup;
}
src_lvol = vbdev_lvol_get_from_bdev(src_lvol_bdev);
if (src_lvol == NULL) {
SPDK_ERRLOG("lvol does not exist\n");
spdk_jsonrpc_send_error_response(request, -ENODEV, spdk_strerror(ENODEV));
goto cleanup;
}
copied_clusters = spdk_blob_get_shallow_copy_copied_clusters(src_lvol->blob);
total_clusters = spdk_blob_get_shallow_copy_total_clusters(src_lvol->blob);
result = spdk_blob_get_shallow_copy_result(src_lvol->blob);
w = spdk_jsonrpc_begin_result(request);
spdk_json_write_object_begin(w);
spdk_json_write_named_string_fmt(w, "progress", "%" PRIu64 "/%" PRIu64, copied_clusters, total_clusters);
if (result > 0) {
spdk_json_write_named_string(w, "state", "none");
} else if (copied_clusters < total_clusters && result == 0) {
spdk_json_write_named_string(w, "state", "in progress");
} else if (copied_clusters == total_clusters && result == 0) {
spdk_json_write_named_string(w, "state", "complete");
} else {
spdk_json_write_named_string(w, "state", "error");
spdk_json_write_named_string(w, "error", spdk_strerror(-result));
}
spdk_json_write_object_end(w);
spdk_jsonrpc_end_result(request, w);
cleanup:
free_rpc_bdev_lvol_shallow_copy_status(&req);
}
SPDK_RPC_REGISTER("bdev_lvol_shallow_copy_status", rpc_bdev_lvol_shallow_copy_status,
SPDK_RPC_RUNTIME)
struct rpc_bdev_lvol_get_fragmap {
char *name;
uint64_t offset;
uint64_t size;
};
static void
free_rpc_bdev_lvol_get_fragmap(struct rpc_bdev_lvol_get_fragmap *r)
{
free(r->name);
}
static const struct spdk_json_object_decoder rpc_bdev_lvol_get_fragmap_decoders[] = {
{"name", offsetof(struct rpc_bdev_lvol_get_fragmap, name), spdk_json_decode_string, true},
{"offset", offsetof(struct rpc_bdev_lvol_get_fragmap, offset), spdk_json_decode_uint64, true},
{"size", offsetof(struct rpc_bdev_lvol_get_fragmap, size), spdk_json_decode_uint64, true},
};
static void
rpc_bdev_lvol_get_fragmap_cb(void *cb_arg, struct spdk_fragmap *fragmap, int lvolerrno)
{
struct spdk_json_write_ctx *w;
struct spdk_jsonrpc_request *request = cb_arg;
char *encoded;
if (lvolerrno != 0) {
goto invalid;
}
encoded = spdk_bit_array_to_base64_string(fragmap->map);
if (encoded == NULL) {
SPDK_ERRLOG("Failed to encode fragmap to base64 string\n");
lvolerrno = -EINVAL;
goto invalid;
}
w = spdk_jsonrpc_begin_result(request);
spdk_json_write_object_begin(w);
spdk_json_write_named_uint64(w, "cluster_size", fragmap->cluster_size);
spdk_json_write_named_uint64(w, "num_clusters", fragmap->num_clusters);
spdk_json_write_named_uint64(w, "num_allocated_clusters", fragmap->num_allocated_clusters);
spdk_json_write_named_string(w, "fragmap", encoded);
spdk_json_write_object_end(w);
spdk_jsonrpc_end_result(request, w);
free(encoded);
return;
invalid:
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
spdk_strerror(-lvolerrno));
}
static void
rpc_bdev_lvol_get_fragmap(struct spdk_jsonrpc_request *request, const struct spdk_json_val *params)
{
struct rpc_bdev_lvol_get_fragmap req = {};
struct spdk_bdev *bdev;
struct spdk_lvol *lvol;
if (spdk_json_decode_object(params, rpc_bdev_lvol_get_fragmap_decoders,
SPDK_COUNTOF(rpc_bdev_lvol_get_fragmap_decoders),
&req)) {
SPDK_ERRLOG("spdk_json_decode_object failed\n");
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INTERNAL_ERROR,
"spdk_json_decode_object failed");
goto cleanup;
}
bdev = spdk_bdev_get_by_name(req.name);
if (bdev == NULL) {
SPDK_ERRLOG("bdev '%s' does not exist\n", req.name);
spdk_jsonrpc_send_error_response(request, -ENODEV, spdk_strerror(ENODEV));
goto cleanup;
}
lvol = vbdev_lvol_get_from_bdev(bdev);
if (lvol == NULL) {
SPDK_ERRLOG("lvol does not exist\n");
spdk_jsonrpc_send_error_response(request, -ENODEV, spdk_strerror(ENODEV));
goto cleanup;
}
vbdev_lvol_get_fragmap(lvol, req.offset, req.size, rpc_bdev_lvol_get_fragmap_cb, request);
cleanup:
free_rpc_bdev_lvol_get_fragmap(&req);
}
SPDK_RPC_REGISTER("bdev_lvol_get_fragmap", rpc_bdev_lvol_get_fragmap, SPDK_RPC_RUNTIME)


@@ -737,7 +737,7 @@ create_malloc_disk(struct spdk_bdev **bdev, const struct malloc_bdev_opts *opts)
mdisk->disk.optimal_io_boundary = opts->optimal_io_boundary;
mdisk->disk.split_on_optimal_io_boundary = true;
}
if (!spdk_uuid_is_null(&opts->uuid)) {
if (!spdk_mem_all_zero(&opts->uuid, sizeof(opts->uuid))) {
spdk_uuid_copy(&mdisk->disk.uuid, &opts->uuid);
}


@@ -46,7 +46,6 @@ SPDK_BDEV_MODULE_REGISTER(passthru, &passthru_if)
struct bdev_names {
char *vbdev_name;
char *bdev_name;
struct spdk_uuid uuid;
TAILQ_ENTRY(bdev_names) link;
};
static TAILQ_HEAD(, bdev_names) g_bdev_names = TAILQ_HEAD_INITIALIZER(g_bdev_names);
@@ -405,19 +404,11 @@ vbdev_passthru_config_json(struct spdk_json_write_ctx *w)
struct vbdev_passthru *pt_node;
TAILQ_FOREACH(pt_node, &g_pt_nodes, link) {
const struct spdk_uuid *uuid = spdk_bdev_get_uuid(&pt_node->pt_bdev);
spdk_json_write_object_begin(w);
spdk_json_write_named_string(w, "method", "bdev_passthru_create");
spdk_json_write_named_object_begin(w, "params");
spdk_json_write_named_string(w, "base_bdev_name", spdk_bdev_get_name(pt_node->base_bdev));
spdk_json_write_named_string(w, "name", spdk_bdev_get_name(&pt_node->pt_bdev));
if (!spdk_uuid_is_null(uuid)) {
char uuid_str[SPDK_UUID_STRING_LEN];
spdk_uuid_fmt_lower(uuid_str, sizeof(uuid_str), uuid);
spdk_json_write_named_string(w, "uuid", uuid_str);
}
spdk_json_write_object_end(w);
spdk_json_write_object_end(w);
}
@@ -456,8 +447,7 @@ pt_bdev_ch_destroy_cb(void *io_device, void *ctx_buf)
/* Create the passthru association from the bdev and vbdev name and insert
* on the global list. */
static int
vbdev_passthru_insert_name(const char *bdev_name, const char *vbdev_name,
const struct spdk_uuid *uuid)
vbdev_passthru_insert_name(const char *bdev_name, const char *vbdev_name)
{
struct bdev_names *name;
@@ -489,10 +479,6 @@ vbdev_passthru_insert_name(const char *bdev_name, const char *vbdev_name,
return -ENOMEM;
}
if (uuid) {
spdk_uuid_copy(&name->uuid, uuid);
}
TAILQ_INSERT_TAIL(&g_bdev_names, name, link);
return 0;
@@ -621,7 +607,6 @@ vbdev_passthru_register(const char *bdev_name)
break;
}
pt_node->pt_bdev.product_name = "passthru";
spdk_uuid_copy(&pt_node->pt_bdev.uuid, &name->uuid);
/* The base bdev that we're attaching to. */
rc = spdk_bdev_open_ext(bdev_name, true, vbdev_passthru_base_bdev_event_cb,
@@ -700,15 +685,14 @@ vbdev_passthru_register(const char *bdev_name)
/* Create the passthru disk from the given bdev and vbdev name. */
int
bdev_passthru_create_disk(const char *bdev_name, const char *vbdev_name,
const struct spdk_uuid *uuid)
bdev_passthru_create_disk(const char *bdev_name, const char *vbdev_name)
{
int rc;
/* Insert the bdev name into our global name list even if it doesn't exist yet,
* it may show up soon...
*/
rc = vbdev_passthru_insert_name(bdev_name, vbdev_name, uuid);
rc = vbdev_passthru_insert_name(bdev_name, vbdev_name);
if (rc) {
return rc;
}


@@ -16,11 +16,9 @@
*
* \param bdev_name Bdev on which pass through vbdev will be created.
* \param vbdev_name Name of the pass through bdev.
* \param uuid Optional UUID to assign to the pass through bdev.
* \return 0 on success, other on failure.
*/
int bdev_passthru_create_disk(const char *bdev_name, const char *vbdev_name,
const struct spdk_uuid *uuid);
int bdev_passthru_create_disk(const char *bdev_name, const char *vbdev_name);
/**
* Delete passthru bdev.


@@ -14,7 +14,6 @@
struct rpc_bdev_passthru_create {
char *base_bdev_name;
char *name;
char *uuid;
};
/* Free the allocated memory resource after the RPC handling. */
@@ -23,14 +22,12 @@ free_rpc_bdev_passthru_create(struct rpc_bdev_passthru_create *r)
{
free(r->base_bdev_name);
free(r->name);
free(r->uuid);
}
/* Structure to decode the input parameters for this RPC method. */
static const struct spdk_json_object_decoder rpc_bdev_passthru_create_decoders[] = {
{"base_bdev_name", offsetof(struct rpc_bdev_passthru_create, base_bdev_name), spdk_json_decode_string},
{"name", offsetof(struct rpc_bdev_passthru_create, name), spdk_json_decode_string},
{"uuid", offsetof(struct rpc_bdev_passthru_create, uuid), spdk_json_decode_string, true},
};
/* Decode the parameters for this RPC method and properly construct the passthru
@@ -42,8 +39,6 @@ rpc_bdev_passthru_create(struct spdk_jsonrpc_request *request,
{
struct rpc_bdev_passthru_create req = {NULL};
struct spdk_json_write_ctx *w;
struct spdk_uuid *uuid = NULL;
struct spdk_uuid decoded_uuid;
int rc;
if (spdk_json_decode_object(params, rpc_bdev_passthru_create_decoders,
@@ -55,16 +50,7 @@ rpc_bdev_passthru_create(struct spdk_jsonrpc_request *request,
goto cleanup;
}
if (req.uuid) {
if (spdk_uuid_parse(&decoded_uuid, req.uuid)) {
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INVALID_PARAMS,
"Failed to parse bdev UUID");
goto cleanup;
}
uuid = &decoded_uuid;
}
rc = bdev_passthru_create_disk(req.base_bdev_name, req.name, uuid);
rc = bdev_passthru_create_disk(req.base_bdev_name, req.name);
if (rc != 0) {
spdk_jsonrpc_send_error_response(request, rc, spdk_strerror(-rc));
goto cleanup;


@@ -10,7 +10,7 @@ SO_VER := 5
SO_MINOR := 0
CFLAGS += -I$(SPDK_ROOT_DIR)/lib/bdev/
C_SRCS = bdev_raid.c bdev_raid_rpc.c bdev_raid_sb.c raid0.c raid1.c concat.c
C_SRCS = bdev_raid.c bdev_raid_rpc.c raid0.c raid1.c concat.c
ifeq ($(CONFIG_RAID5F),y)
C_SRCS += raid5f.c

File diff suppressed because it is too large


@@ -9,13 +9,6 @@
#include "spdk/bdev_module.h"
#include "spdk/uuid.h"
#include "bdev_raid_sb.h"
#define RAID_BDEV_MIN_DATA_OFFSET_SIZE (1024*1024) /* 1 MiB */
SPDK_STATIC_ASSERT(RAID_BDEV_SB_MAX_LENGTH < RAID_BDEV_MIN_DATA_OFFSET_SIZE,
"Incorrect min data offset");
enum raid_level {
INVALID_RAID_LEVEL = -1,
RAID0 = 0,
@@ -54,27 +47,15 @@ enum raid_bdev_state {
* required per base device for raid bdev will be kept here
*/
struct raid_base_bdev_info {
/* The raid bdev that this base bdev belongs to */
struct raid_bdev *raid_bdev;
/* name of the bdev */
char *name;
/* uuid of the bdev */
struct spdk_uuid uuid;
/* pointer to base spdk bdev */
struct spdk_bdev *bdev;
/* pointer to base bdev descriptor opened by raid bdev */
struct spdk_bdev_desc *desc;
/* data offset for raid bdev [blocks] */
uint64_t data_offset;
/* data size of for raid bdev [blocks] */
uint64_t data_size;
/*
* When underlying base device calls the hot plug function on drive removal,
* this flag will be set and later after doing some processing, base device
@@ -84,12 +65,6 @@ struct raid_base_bdev_info {
/* Hold the number of blocks to know how large the base bdev is resized. */
uint64_t blockcnt;
/* io channel for the app thread */
struct spdk_io_channel *app_thread_ch;
/* Set to true when base bdev has completed the configuration process */
bool is_configured;
};
/*
@@ -113,8 +88,6 @@ struct raid_bdev_io {
/* Private data for the raid module */
void *module_private;
TAILQ_ENTRY(raid_bdev_io) link;
};
/*
@@ -153,9 +126,6 @@ struct raid_bdev {
/* number of base bdevs discovered */
uint8_t num_base_bdevs_discovered;
/* number of operational base bdevs */
uint8_t num_base_bdevs_operational;
/* minimum number of viable base bdevs that are required by array to operate */
uint8_t min_base_bdevs_operational;
@@ -170,27 +140,6 @@ struct raid_bdev {
/* Private data for the raid module */
void *module_private;
/* Counter of callers of raid_bdev_suspend() */
uint32_t suspend_cnt;
/* Number of channels remaining to suspend */
uint32_t suspend_num_channels;
/* List of suspend contexts */
TAILQ_HEAD(, raid_bdev_suspend_ctx) suspend_ctx;
/* Device mutex */
pthread_mutex_t mutex;
/* Superblock */
struct raid_bdev_superblock *sb;
/* Superblock write context */
void *sb_write_ctx;
/* A flag to indicate that an operation to add a base bdev is in progress */
bool base_bdev_updating;
};
#define RAID_FOR_EACH_BASE_BDEV(r, i) \
@@ -209,15 +158,6 @@ struct raid_bdev_io_channel {
/* Private raid module IO channel */
struct spdk_io_channel *module_channel;
/* Number of raid IOs on this channel */
uint32_t num_ios;
/* Is the channel currently suspended */
bool is_suspended;
/* List of suspended IOs */
TAILQ_HEAD(, raid_bdev_io) suspended_ios;
};
/* TAIL head for raid bdev list */
@@ -228,8 +168,7 @@ extern struct raid_all_tailq g_raid_bdev_list;
typedef void (*raid_bdev_destruct_cb)(void *cb_ctx, int rc);
int raid_bdev_create(const char *name, uint32_t strip_size, uint8_t num_base_bdevs,
enum raid_level level, struct raid_bdev **raid_bdev_out,
const struct spdk_uuid *uuid, bool superblock);
enum raid_level level, struct raid_bdev **raid_bdev_out, const struct spdk_uuid *uuid);
void raid_bdev_delete(struct raid_bdev *raid_bdev, raid_bdev_destruct_cb cb_fn, void *cb_ctx);
int raid_bdev_add_base_device(struct raid_bdev *raid_bdev, const char *name, uint8_t slot);
struct raid_bdev *raid_bdev_find_by_name(const char *name);
@@ -238,9 +177,6 @@ const char *raid_bdev_level_to_str(enum raid_level level);
enum raid_bdev_state raid_bdev_str_to_state(const char *str);
const char *raid_bdev_state_to_str(enum raid_bdev_state state);
void raid_bdev_write_info_json(struct raid_bdev *raid_bdev, struct spdk_json_write_ctx *w);
int raid_bdev_remove_base_bdev(struct spdk_bdev *base_bdev);
int raid_bdev_grow_base_bdev(struct raid_bdev *raid_bdev, char *base_bdev_name,
raid_bdev_destruct_cb cb_fn, void *cb_arg);
/*
* RAID module descriptor
@@ -306,9 +242,6 @@ struct raid_bdev_module {
void (*resize)(struct raid_bdev *raid_bdev);
TAILQ_ENTRY(raid_bdev_module) link;
bool (*channel_grow_base_bdev)(struct raid_bdev *raid_bdev,
struct raid_bdev_io_channel *raid_ch);
};
void raid_bdev_module_list_add(struct raid_bdev_module *raid_module);
@@ -330,62 +263,4 @@ void raid_bdev_queue_io_wait(struct raid_bdev_io *raid_io, struct spdk_bdev *bde
void raid_bdev_io_complete(struct raid_bdev_io *raid_io, enum spdk_bdev_io_status status);
void raid_bdev_module_stop_done(struct raid_bdev *raid_bdev);
/**
* Raid bdev I/O read/write wrapper for spdk_bdev_readv_blocks_ext function.
*/
static inline int
raid_bdev_readv_blocks_ext(struct raid_base_bdev_info *base_info, struct spdk_io_channel *ch,
struct iovec *iov, int iovcnt, uint64_t offset_blocks,
uint64_t num_blocks, spdk_bdev_io_completion_cb cb, void *cb_arg,
struct spdk_bdev_ext_io_opts *opts)
{
struct spdk_bdev_desc *desc = base_info->desc;
uint64_t offset = base_info->data_offset + offset_blocks;
return spdk_bdev_readv_blocks_ext(desc, ch, iov, iovcnt, offset, num_blocks, cb, cb_arg, opts);
}
/**
* Raid bdev I/O read/write wrapper for spdk_bdev_writev_blocks_ext function.
*/
static inline int
raid_bdev_writev_blocks_ext(struct raid_base_bdev_info *base_info, struct spdk_io_channel *ch,
struct iovec *iov, int iovcnt, uint64_t offset_blocks,
uint64_t num_blocks, spdk_bdev_io_completion_cb cb, void *cb_arg,
struct spdk_bdev_ext_io_opts *opts)
{
struct spdk_bdev_desc *desc = base_info->desc;
uint64_t offset = base_info->data_offset + offset_blocks;
return spdk_bdev_writev_blocks_ext(desc, ch, iov, iovcnt, offset, num_blocks, cb, cb_arg, opts);
}
/**
* Raid bdev I/O read/write wrapper for spdk_bdev_unmap_blocks function.
*/
static inline int
raid_bdev_unmap_blocks(struct raid_base_bdev_info *base_info, struct spdk_io_channel *ch,
uint64_t offset_blocks, uint64_t num_blocks,
spdk_bdev_io_completion_cb cb, void *cb_arg)
{
struct spdk_bdev_desc *desc = base_info->desc;
uint64_t offset = base_info->data_offset + offset_blocks;
return spdk_bdev_unmap_blocks(desc, ch, offset, num_blocks, cb, cb_arg);
}
/**
* Raid bdev I/O read/write wrapper for spdk_bdev_flush_blocks function.
*/
static inline int
raid_bdev_flush_blocks(struct raid_base_bdev_info *base_info, struct spdk_io_channel *ch,
uint64_t offset_blocks, uint64_t num_blocks,
spdk_bdev_io_completion_cb cb, void *cb_arg)
{
struct spdk_bdev_desc *desc = base_info->desc;
uint64_t offset = base_info->data_offset + offset_blocks;
return spdk_bdev_flush_blocks(desc, ch, offset, num_blocks, cb, cb_arg);
}
#endif /* SPDK_BDEV_RAID_INTERNAL_H */


@@ -87,12 +87,8 @@ rpc_bdev_raid_get_bdevs(struct spdk_jsonrpc_request *request,
/* Get raid bdev list based on the category requested */
TAILQ_FOREACH(raid_bdev, &g_raid_bdev_list, global_link) {
if (raid_bdev->state == state || state == RAID_BDEV_STATE_MAX) {
char uuid_str[SPDK_UUID_STRING_LEN];
spdk_json_write_object_begin(w);
spdk_json_write_named_string(w, "name", raid_bdev->bdev.name);
spdk_uuid_fmt_lower(uuid_str, sizeof(uuid_str), &raid_bdev->bdev.uuid);
spdk_json_write_named_string(w, "uuid", uuid_str);
raid_bdev_write_info_json(raid_bdev, w);
spdk_json_write_object_end(w);
}
@@ -134,9 +130,6 @@ struct rpc_bdev_raid_create {
/* UUID for this raid bdev */
char *uuid;
/* superblock support */
bool superblock;
};
/*
@@ -203,7 +196,6 @@ static const struct spdk_json_object_decoder rpc_bdev_raid_create_decoders[] = {
{"raid_level", offsetof(struct rpc_bdev_raid_create, level), decode_raid_level},
{"base_bdevs", offsetof(struct rpc_bdev_raid_create, base_bdevs), decode_base_bdevs},
{"uuid", offsetof(struct rpc_bdev_raid_create, uuid), spdk_json_decode_string, true},
{"superblock", offsetof(struct rpc_bdev_raid_create, superblock), spdk_json_decode_bool, true},
};
/*
@@ -245,7 +237,7 @@ rpc_bdev_raid_create(struct spdk_jsonrpc_request *request,
}
rc = raid_bdev_create(req.name, req.strip_size_kb, req.base_bdevs.num_base_bdevs,
req.level, &raid_bdev, uuid, req.superblock);
req.level, &raid_bdev, uuid);
if (rc != 0) {
spdk_jsonrpc_send_error_response_fmt(request, rc,
"Failed to create RAID bdev %s: %s",
@@ -389,192 +381,3 @@ cleanup:
free(ctx);
}
SPDK_RPC_REGISTER("bdev_raid_delete", rpc_bdev_raid_delete, SPDK_RPC_RUNTIME)
/*
* Decoder object for RPC bdev_raid_remove_base_bdev
*/
static const struct spdk_json_object_decoder rpc_bdev_raid_remove_base_bdev_decoders[] = {
{"name", 0, spdk_json_decode_string},
};
/*
* brief:
* bdev_raid_remove_base_bdev function is the RPC for removing base bdev from a raid bdev.
* It takes base bdev name as input.
* params:
* request - pointer to json rpc request
* params - pointer to request parameters
* returns:
* none
*/
static void
rpc_bdev_raid_remove_base_bdev(struct spdk_jsonrpc_request *request,
const struct spdk_json_val *params)
{
struct spdk_bdev *bdev;
char *name = NULL;
int rc;
if (spdk_json_decode_object(params, rpc_bdev_raid_remove_base_bdev_decoders,
SPDK_COUNTOF(rpc_bdev_raid_remove_base_bdev_decoders),
&name)) {
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_PARSE_ERROR,
"spdk_json_decode_object failed");
return;
}
bdev = spdk_bdev_get_by_name(name);
if (bdev == NULL) {
spdk_jsonrpc_send_error_response_fmt(request, -ENODEV, "base bdev %s is not found in config", name);
goto cleanup;
}
rc = raid_bdev_remove_base_bdev(bdev);
if (rc != 0) {
spdk_jsonrpc_send_error_response_fmt(request, rc, "Failed to remove base bdev %s from raid bdev",
name);
goto cleanup;
}
spdk_jsonrpc_send_bool_response(request, true);
cleanup:
free(name);
}
SPDK_RPC_REGISTER("bdev_raid_remove_base_bdev", rpc_bdev_raid_remove_base_bdev, SPDK_RPC_RUNTIME)
/*
* Input structure for RPC rpc_bdev_raid_grow_base_bdev
*/
struct rpc_bdev_raid_grow_base_bdev {
/* Raid bdev name */
char *raid_bdev_name;
/* Base bdev name */
char *base_bdev_name;
};
/*
* brief:
* free_rpc_bdev_raid_grow_base_bdev frees RPC bdev_raid_grow_base_bdev related parameters
* params:
* req - pointer to RPC request
* returns:
* none
*/
static void
free_rpc_bdev_raid_grow_base_bdev(struct rpc_bdev_raid_grow_base_bdev *req)
{
free(req->raid_bdev_name);
free(req->base_bdev_name);
}
/*
* Decoder object for RPC bdev_raid_grow_base_bdev
*/
static const struct spdk_json_object_decoder rpc_bdev_raid_grow_base_bdev_decoders[] = {
{"raid_name", offsetof(struct rpc_bdev_raid_grow_base_bdev, raid_bdev_name), spdk_json_decode_string},
{"base_name", offsetof(struct rpc_bdev_raid_grow_base_bdev, base_bdev_name), spdk_json_decode_string},
};
struct rpc_bdev_raid_grow_base_bdev_ctx {
struct rpc_bdev_raid_grow_base_bdev req;
struct spdk_jsonrpc_request *request;
};
/*
* brief:
* params:
* cb_arg - pointer to the callback context.
* rc - return code of the adding a base bdev.
* returns:
* none
*/
static void
bdev_raid_grow_base_bdev_done(void *cb_arg, int rc)
{
struct rpc_bdev_raid_grow_base_bdev_ctx *ctx = cb_arg;
struct spdk_jsonrpc_request *request = ctx->request;
if (rc != 0) {
SPDK_ERRLOG("Failed to grow raid %s adding base bdev %s (%d): %s\n",
ctx->req.raid_bdev_name, ctx->req.base_bdev_name, rc, spdk_strerror(-rc));
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_INTERNAL_ERROR,
spdk_strerror(-rc));
goto exit;
}
spdk_jsonrpc_send_bool_response(request, true);
exit:
free_rpc_bdev_raid_grow_base_bdev(&ctx->req);
free(ctx);
}
/*
* brief:
 * bdev_raid_grow_base_bdev is the RPC to add a base bdev to a raid bdev, growing the raid's size if needed.
* It takes raid bdev name and base bdev name as input.
* params:
* request - pointer to json rpc request
* params - pointer to request parameters
* returns:
* none
*/
static void
rpc_bdev_raid_grow_base_bdev(struct spdk_jsonrpc_request *request,
const struct spdk_json_val *params)
{
struct rpc_bdev_raid_grow_base_bdev_ctx *ctx;
struct raid_bdev *raid_bdev;
struct spdk_bdev *base_bdev;
int rc;
ctx = calloc(1, sizeof(*ctx));
if (!ctx) {
spdk_jsonrpc_send_error_response(request, -ENOMEM, spdk_strerror(ENOMEM));
return;
}
if (spdk_json_decode_object(params, rpc_bdev_raid_grow_base_bdev_decoders,
SPDK_COUNTOF(rpc_bdev_raid_grow_base_bdev_decoders),
&ctx->req)) {
spdk_jsonrpc_send_error_response(request, SPDK_JSONRPC_ERROR_PARSE_ERROR,
"spdk_json_decode_object failed");
goto cleanup;
}
raid_bdev = raid_bdev_find_by_name(ctx->req.raid_bdev_name);
if (raid_bdev == NULL) {
spdk_jsonrpc_send_error_response_fmt(request, -ENODEV,
"raid bdev %s not found",
ctx->req.raid_bdev_name);
goto cleanup;
}
base_bdev = spdk_bdev_get_by_name(ctx->req.base_bdev_name);
if (base_bdev == NULL) {
spdk_jsonrpc_send_error_response_fmt(request, -ENODEV,
"base bdev %s not found",
ctx->req.base_bdev_name);
goto cleanup;
}
ctx->request = request;
rc = raid_bdev_grow_base_bdev(raid_bdev, ctx->req.base_bdev_name, bdev_raid_grow_base_bdev_done,
ctx);
if (rc != 0) {
spdk_jsonrpc_send_error_response_fmt(request, rc,
"Failed to grow raid %s adding base bdev %s: %s",
ctx->req.raid_bdev_name, ctx->req.base_bdev_name,
spdk_strerror(-rc));
goto cleanup;
}
return;
cleanup:
free_rpc_bdev_raid_grow_base_bdev(&ctx->req);
free(ctx);
}
SPDK_RPC_REGISTER("bdev_raid_grow_base_bdev", rpc_bdev_raid_grow_base_bdev, SPDK_RPC_RUNTIME)


@@ -1,229 +0,0 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2022 Intel Corporation.
* All rights reserved.
*/
#include "spdk/bdev_module.h"
#include "spdk/crc32.h"
#include "spdk/env.h"
#include "spdk/log.h"
#include "spdk/string.h"
#include "spdk/util.h"
#include "bdev_raid_sb.h"
struct raid_bdev_read_sb_ctx {
struct spdk_bdev_desc *desc;
struct spdk_io_channel *ch;
raid_bdev_load_sb_cb cb;
void *cb_ctx;
void *buf;
uint32_t buf_size;
};
struct raid_bdev_save_sb_ctx {
raid_bdev_save_sb_cb cb;
void *cb_ctx;
};
static void raid_bdev_read_sb_cb(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg);
static int
raid_bdev_parse_superblock(struct raid_bdev_read_sb_ctx *ctx)
{
struct raid_bdev_superblock *sb = ctx->buf;
struct spdk_bdev *bdev = spdk_bdev_desc_get_bdev(ctx->desc);
uint32_t crc;
if (memcmp(sb->signature, RAID_BDEV_SB_SIG, sizeof(sb->signature))) {
SPDK_DEBUGLOG(bdev_raid_sb, "invalid signature\n");
return -EINVAL;
}
if (sb->length > ctx->buf_size) {
if (sb->length > RAID_BDEV_SB_MAX_LENGTH) {
SPDK_DEBUGLOG(bdev_raid_sb, "invalid length\n");
return -EINVAL;
}
return -EAGAIN;
}
crc = sb->crc;
raid_bdev_sb_update_crc(sb);
if (sb->crc != crc) {
SPDK_WARNLOG("Incorrect superblock crc on bdev %s\n", spdk_bdev_get_name(bdev));
sb->crc = crc;
return -EINVAL;
}
if (sb->version.major > RAID_BDEV_SB_VERSION_MAJOR) {
SPDK_ERRLOG("Not supported superblock major version %d on bdev %s\n",
sb->version.major, spdk_bdev_get_name(bdev));
return -EINVAL;
}
if (sb->version.major == RAID_BDEV_SB_VERSION_MAJOR &&
sb->version.minor > RAID_BDEV_SB_VERSION_MINOR) {
SPDK_WARNLOG("Superblock minor version %d on bdev %s is higher than the currently supported: %d\n",
sb->version.minor, spdk_bdev_get_name(bdev), RAID_BDEV_SB_VERSION_MINOR);
}
return 0;
}
static void
raid_bdev_read_sb_ctx_free(struct raid_bdev_read_sb_ctx *ctx)
{
spdk_dma_free(ctx->buf);
free(ctx);
}
static int
raid_bdev_read_sb_remainder(struct raid_bdev_read_sb_ctx *ctx)
{
struct raid_bdev_superblock *sb = ctx->buf;
struct spdk_bdev *bdev = spdk_bdev_desc_get_bdev(ctx->desc);
uint32_t buf_size_prev;
void *buf;
int rc;
buf_size_prev = ctx->buf_size;
ctx->buf_size = SPDK_ALIGN_CEIL(sb->length, spdk_bdev_get_block_size(bdev));
buf = spdk_dma_realloc(ctx->buf, ctx->buf_size, spdk_bdev_get_buf_align(bdev), NULL);
if (buf == NULL) {
SPDK_ERRLOG("Failed to reallocate buffer\n");
return -ENOMEM;
}
ctx->buf = buf;
rc = spdk_bdev_read(ctx->desc, ctx->ch, ctx->buf + buf_size_prev, buf_size_prev,
ctx->buf_size - buf_size_prev, raid_bdev_read_sb_cb, ctx);
if (rc != 0) {
SPDK_ERRLOG("Failed to read bdev %s superblock remainder: %s\n",
spdk_bdev_get_name(bdev), spdk_strerror(-rc));
return rc;
}
return 0;
}
static void
raid_bdev_read_sb_cb(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
{
struct raid_bdev_read_sb_ctx *ctx = cb_arg;
struct raid_bdev_superblock *sb = NULL;
int status;
spdk_bdev_free_io(bdev_io);
if (success) {
status = raid_bdev_parse_superblock(ctx);
if (status == -EAGAIN) {
status = raid_bdev_read_sb_remainder(ctx);
if (status == 0) {
return;
}
} else if (status != 0) {
SPDK_DEBUGLOG(bdev_raid_sb, "failed to parse bdev %s superblock\n",
spdk_bdev_get_name(spdk_bdev_desc_get_bdev(ctx->desc)));
} else {
sb = ctx->buf;
}
} else {
status = -EIO;
}
if (ctx->cb) {
ctx->cb(sb, status, ctx->cb_ctx);
}
raid_bdev_read_sb_ctx_free(ctx);
}
int
raid_bdev_load_base_bdev_superblock(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
raid_bdev_load_sb_cb cb, void *cb_ctx)
{
struct spdk_bdev *bdev = spdk_bdev_desc_get_bdev(desc);
struct raid_bdev_read_sb_ctx *ctx;
int rc;
ctx = calloc(1, sizeof(*ctx));
if (!ctx) {
return -ENOMEM;
}
ctx->desc = desc;
ctx->ch = ch;
ctx->cb = cb;
ctx->cb_ctx = cb_ctx;
ctx->buf_size = SPDK_ALIGN_CEIL(sizeof(struct raid_bdev_superblock),
spdk_bdev_get_block_size(bdev));
ctx->buf = spdk_dma_malloc(ctx->buf_size, spdk_bdev_get_buf_align(bdev), NULL);
if (!ctx->buf) {
rc = -ENOMEM;
goto err;
}
rc = spdk_bdev_read(desc, ch, ctx->buf, 0, ctx->buf_size, raid_bdev_read_sb_cb, ctx);
if (rc) {
goto err;
}
return 0;
err:
raid_bdev_read_sb_ctx_free(ctx);
return rc;
}
static void
raid_bdev_write_sb_cb(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
{
struct raid_bdev_save_sb_ctx *ctx = cb_arg;
spdk_bdev_free_io(bdev_io);
if (ctx->cb) {
ctx->cb(success ? 0 : -EIO, ctx->cb_ctx);
}
free(ctx);
}
int
raid_bdev_save_base_bdev_superblock(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
const struct raid_bdev_superblock *sb,
raid_bdev_save_sb_cb cb, void *cb_ctx)
{
struct spdk_bdev *bdev = spdk_bdev_desc_get_bdev(desc);
uint64_t nbytes = SPDK_ALIGN_CEIL(sb->length, spdk_bdev_get_block_size(bdev));
struct raid_bdev_save_sb_ctx *ctx;
int rc;
ctx = calloc(1, sizeof(*ctx));
if (!ctx) {
return -ENOMEM;
}
ctx->cb = cb;
ctx->cb_ctx = cb_ctx;
rc = spdk_bdev_write(desc, ch, (void *)sb, 0, nbytes, raid_bdev_write_sb_cb, ctx);
if (rc) {
free(ctx);
}
return rc;
}
void
raid_bdev_sb_update_crc(struct raid_bdev_superblock *sb)
{
sb->crc = 0;
sb->crc = spdk_crc32c_update(sb, sb->length, 0);
}
SPDK_LOG_REGISTER_COMPONENT(bdev_raid_sb)


@@ -1,98 +0,0 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2022 Intel Corporation.
* All rights reserved.
*/
#ifndef SPDK_BDEV_RAID_SB_H_
#define SPDK_BDEV_RAID_SB_H_
#include "spdk/stdinc.h"
#include "spdk/util.h"
#include "spdk/uuid.h"
#define RAID_BDEV_SB_VERSION_MAJOR 1
#define RAID_BDEV_SB_VERSION_MINOR 0
#define RAID_BDEV_SB_NAME_SIZE 64
#define RAID_BDEV_SB_MAX_LENGTH \
SPDK_ALIGN_CEIL((sizeof(struct raid_bdev_superblock) + UINT8_MAX * sizeof(struct raid_bdev_sb_base_bdev)), 0x1000)
enum raid_bdev_sb_base_bdev_state {
RAID_SB_BASE_BDEV_MISSING = 0,
RAID_SB_BASE_BDEV_CONFIGURED = 1,
RAID_SB_BASE_BDEV_FAILED = 2,
RAID_SB_BASE_BDEV_REMOVED = 3,
};
struct raid_bdev_sb_base_bdev {
/* uuid of the base bdev */
struct spdk_uuid uuid;
/* offset in blocks from base device start to the start of raid data area */
uint64_t data_offset;
/* size in blocks of the base device raid data area */
uint64_t data_size;
/* state of the base bdev */
uint32_t state;
/* feature/status flags */
uint32_t flags;
/* slot number of this base bdev in the raid */
uint8_t slot;
uint8_t reserved[23];
};
SPDK_STATIC_ASSERT(sizeof(struct raid_bdev_sb_base_bdev) == 64, "incorrect size");
struct raid_bdev_superblock {
#define RAID_BDEV_SB_SIG "SPDKRAID"
uint8_t signature[8];
struct {
/* incremented when a breaking change in the superblock structure is made */
uint16_t major;
/* incremented for changes in the superblock that are backward compatible */
uint16_t minor;
} version;
/* length in bytes of the entire superblock */
uint32_t length;
/* crc32c checksum of the entire superblock */
uint32_t crc;
/* feature/status flags */
uint32_t flags;
/* unique id of the raid bdev */
struct spdk_uuid uuid;
/* name of the raid bdev */
uint8_t name[RAID_BDEV_SB_NAME_SIZE];
/* size of the raid bdev in blocks */
uint64_t raid_size;
/* the raid bdev block size - must be the same for all base bdevs */
uint32_t block_size;
/* the raid level */
uint32_t level;
/* strip (chunk) size in blocks */
uint32_t strip_size;
/* state of the raid */
uint32_t state;
/* sequence number, incremented on every superblock update */
uint64_t seq_number;
/* number of raid base devices */
uint8_t num_base_bdevs;
uint8_t reserved[86];
/* size of the base bdevs array */
uint8_t base_bdevs_size;
/* array of base bdev descriptors */
struct raid_bdev_sb_base_bdev base_bdevs[];
};
SPDK_STATIC_ASSERT(sizeof(struct raid_bdev_superblock) == 224, "incorrect size");
typedef void (*raid_bdev_load_sb_cb)(const struct raid_bdev_superblock *sb, int status, void *ctx);
typedef void (*raid_bdev_save_sb_cb)(int status, void *ctx);
int raid_bdev_load_base_bdev_superblock(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
raid_bdev_load_sb_cb cb, void *cb_ctx);
int raid_bdev_save_base_bdev_superblock(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
const struct raid_bdev_superblock *sb, raid_bdev_save_sb_cb cb, void *cb_ctx);
void raid_bdev_sb_update_crc(struct raid_bdev_superblock *sb);
#endif /* SPDK_BDEV_RAID_SB_H_ */


@@ -110,12 +110,12 @@ concat_submit_rw_request(struct raid_bdev_io *raid_io)
io_opts.metadata = bdev_io->u.bdev.md_buf;
if (bdev_io->type == SPDK_BDEV_IO_TYPE_READ) {
ret = raid_bdev_readv_blocks_ext(base_info, base_ch,
ret = spdk_bdev_readv_blocks_ext(base_info->desc, base_ch,
bdev_io->u.bdev.iovs, bdev_io->u.bdev.iovcnt,
pd_lba, pd_blocks, concat_bdev_io_completion,
raid_io, &io_opts);
} else if (bdev_io->type == SPDK_BDEV_IO_TYPE_WRITE) {
ret = raid_bdev_writev_blocks_ext(base_info, base_ch,
ret = spdk_bdev_writev_blocks_ext(base_info->desc, base_ch,
bdev_io->u.bdev.iovs, bdev_io->u.bdev.iovcnt,
pd_lba, pd_blocks, concat_bdev_io_completion,
raid_io, &io_opts);
@@ -242,12 +242,12 @@ concat_submit_null_payload_request(struct raid_bdev_io *raid_io)
base_ch = raid_io->raid_ch->base_channel[i];
switch (bdev_io->type) {
case SPDK_BDEV_IO_TYPE_UNMAP:
ret = raid_bdev_unmap_blocks(base_info, base_ch,
ret = spdk_bdev_unmap_blocks(base_info->desc, base_ch,
pd_lba, pd_blocks,
concat_base_io_complete, raid_io);
break;
case SPDK_BDEV_IO_TYPE_FLUSH:
ret = raid_bdev_flush_blocks(base_info, base_ch,
ret = spdk_bdev_flush_blocks(base_info->desc, base_ch,
pd_lba, pd_blocks,
concat_base_io_complete, raid_io);
break;
@@ -287,11 +287,9 @@ concat_start(struct raid_bdev *raid_bdev)
int idx = 0;
RAID_FOR_EACH_BASE_BDEV(raid_bdev, base_info) {
uint64_t strip_cnt = base_info->data_size >> raid_bdev->strip_size_shift;
uint64_t strip_cnt = base_info->bdev->blockcnt >> raid_bdev->strip_size_shift;
uint64_t pd_block_cnt = strip_cnt << raid_bdev->strip_size_shift;
base_info->data_size = pd_block_cnt;
block_range[idx].start = total_blockcnt;
block_range[idx].length = pd_block_cnt;
total_blockcnt += pd_block_cnt;


@@ -111,12 +111,12 @@ raid0_submit_rw_request(struct raid_bdev_io *raid_io)
io_opts.metadata = bdev_io->u.bdev.md_buf;
if (bdev_io->type == SPDK_BDEV_IO_TYPE_READ) {
ret = raid_bdev_readv_blocks_ext(base_info, base_ch,
ret = spdk_bdev_readv_blocks_ext(base_info->desc, base_ch,
bdev_io->u.bdev.iovs, bdev_io->u.bdev.iovcnt,
pd_lba, pd_blocks, raid0_bdev_io_completion,
raid_io, &io_opts);
} else if (bdev_io->type == SPDK_BDEV_IO_TYPE_WRITE) {
ret = raid_bdev_writev_blocks_ext(base_info, base_ch,
ret = spdk_bdev_writev_blocks_ext(base_info->desc, base_ch,
bdev_io->u.bdev.iovs, bdev_io->u.bdev.iovcnt,
pd_lba, pd_blocks, raid0_bdev_io_completion,
raid_io, &io_opts);
@@ -303,13 +303,13 @@ raid0_submit_null_payload_request(struct raid_bdev_io *raid_io)
switch (bdev_io->type) {
case SPDK_BDEV_IO_TYPE_UNMAP:
ret = raid_bdev_unmap_blocks(base_info, base_ch,
ret = spdk_bdev_unmap_blocks(base_info->desc, base_ch,
offset_in_disk, nblocks_in_disk,
raid0_base_io_complete, raid_io);
break;
case SPDK_BDEV_IO_TYPE_FLUSH:
ret = raid_bdev_flush_blocks(base_info, base_ch,
ret = spdk_bdev_flush_blocks(base_info->desc, base_ch,
offset_in_disk, nblocks_in_disk,
raid0_base_io_complete, raid_io);
break;
@@ -335,22 +335,15 @@ raid0_submit_null_payload_request(struct raid_bdev_io *raid_io)
}
}
static int
raid0_start(struct raid_bdev *raid_bdev)
static uint64_t
raid0_calculate_blockcnt(struct raid_bdev *raid_bdev)
{
uint64_t min_blockcnt = UINT64_MAX;
uint64_t base_bdev_data_size;
struct raid_base_bdev_info *base_info;
RAID_FOR_EACH_BASE_BDEV(raid_bdev, base_info) {
/* Calculate minimum block count from all base bdevs */
min_blockcnt = spdk_min(min_blockcnt, base_info->data_size);
}
base_bdev_data_size = (min_blockcnt >> raid_bdev->strip_size_shift) << raid_bdev->strip_size_shift;
RAID_FOR_EACH_BASE_BDEV(raid_bdev, base_info) {
base_info->data_size = base_bdev_data_size;
min_blockcnt = spdk_min(min_blockcnt, base_info->bdev->blockcnt);
}
/*
@@ -361,7 +354,14 @@ raid0_start(struct raid_bdev *raid_bdev)
SPDK_DEBUGLOG(bdev_raid0, "min blockcount %" PRIu64 ", numbasedev %u, strip size shift %u\n",
min_blockcnt, raid_bdev->num_base_bdevs, raid_bdev->strip_size_shift);
raid_bdev->bdev.blockcnt = base_bdev_data_size * raid_bdev->num_base_bdevs;
return ((min_blockcnt >> raid_bdev->strip_size_shift) <<
raid_bdev->strip_size_shift) * raid_bdev->num_base_bdevs;
}
static int
raid0_start(struct raid_bdev *raid_bdev)
{
raid_bdev->bdev.blockcnt = raid0_calculate_blockcnt(raid_bdev);
if (raid_bdev->num_base_bdevs > 1) {
raid_bdev->bdev.optimal_io_boundary = raid_bdev->strip_size;
@@ -380,16 +380,8 @@ raid0_resize(struct raid_bdev *raid_bdev)
{
uint64_t blockcnt;
int rc;
uint64_t min_blockcnt = UINT64_MAX;
struct raid_base_bdev_info *base_info;
uint64_t base_bdev_data_size;
RAID_FOR_EACH_BASE_BDEV(raid_bdev, base_info) {
min_blockcnt = spdk_min(min_blockcnt, base_info->bdev->blockcnt - base_info->data_offset);
}
base_bdev_data_size = (min_blockcnt >> raid_bdev->strip_size_shift) << raid_bdev->strip_size_shift;
blockcnt = base_bdev_data_size * raid_bdev->num_base_bdevs;
blockcnt = raid0_calculate_blockcnt(raid_bdev);
if (blockcnt == raid_bdev->bdev.blockcnt) {
return;
@@ -403,11 +395,6 @@ raid0_resize(struct raid_bdev *raid_bdev)
rc = spdk_bdev_notify_blockcnt_change(&raid_bdev->bdev, blockcnt);
if (rc != 0) {
SPDK_ERRLOG("Failed to notify blockcount change\n");
return;
}
RAID_FOR_EACH_BASE_BDEV(raid_bdev, base_info) {
base_info->data_size = base_bdev_data_size;
}
}


@@ -13,17 +13,6 @@ struct raid1_info {
struct raid_bdev *raid_bdev;
};
struct raid1_io_channel {
/* Index of last base bdev used for reads */
uint8_t base_bdev_read_idx;
/* Read bandwidths generated for base_bdevs */
uint64_t *base_bdev_read_bw;
/* Maximum read bandwidth from all base_bdevs */
uint64_t base_bdev_max_read_bw;
};
static void
raid1_bdev_io_completion(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
{
@@ -56,81 +45,25 @@ raid1_init_ext_io_opts(struct spdk_bdev_io *bdev_io, struct spdk_bdev_ext_io_opt
opts->metadata = bdev_io->u.bdev.md_buf;
}
static uint8_t
raid1_channel_next_read_base_bdev(struct raid_bdev_io_channel *raid_ch)
{
struct raid1_io_channel *raid1_ch = spdk_io_channel_get_ctx(raid_ch->module_channel);
uint8_t idx = raid1_ch->base_bdev_read_idx;
uint8_t i;
for (i = 0; i < raid_ch->num_channels; i++) {
if (++idx == raid_ch->num_channels) {
idx = 0;
}
if (raid_ch->base_channel[idx]) {
raid1_ch->base_bdev_read_idx = idx;
if (raid1_ch->base_bdev_read_bw[idx] < raid1_ch->base_bdev_max_read_bw) {
break;
}
}
}
return raid1_ch->base_bdev_read_idx;
}
static void
raid1_channel_update_read_bw_counters(struct raid_bdev_io_channel *raid_ch, uint64_t pd_blocks)
{
struct raid1_io_channel *raid1_ch = spdk_io_channel_get_ctx(raid_ch->module_channel);
uint8_t idx = raid1_ch->base_bdev_read_idx;
uint8_t i;
if (spdk_unlikely(raid1_ch->base_bdev_max_read_bw > UINT64_MAX - pd_blocks)) {
for (i = 0; i < raid_ch->num_channels; i++) {
raid1_ch->base_bdev_read_bw[i] = 0;
}
raid1_ch->base_bdev_max_read_bw = 0;
}
raid1_ch->base_bdev_read_bw[idx] += pd_blocks;
raid1_ch->base_bdev_max_read_bw = spdk_max(raid1_ch->base_bdev_max_read_bw,
raid1_ch->base_bdev_read_bw[idx]);
}
static int
raid1_submit_read_request(struct raid_bdev_io *raid_io)
{
struct raid_bdev *raid_bdev = raid_io->raid_bdev;
struct spdk_bdev_io *bdev_io = spdk_bdev_io_from_ctx(raid_io);
struct spdk_bdev_ext_io_opts io_opts;
struct raid_bdev_io_channel *raid_ch = raid_io->raid_ch;
struct raid_base_bdev_info *base_info;
struct spdk_io_channel *base_ch = NULL;
uint8_t ch_idx = 0;
struct raid_base_bdev_info *base_info = &raid_bdev->base_bdev_info[ch_idx];
struct spdk_io_channel *base_ch = raid_io->raid_ch->base_channel[ch_idx];
uint64_t pd_lba, pd_blocks;
uint8_t idx;
int ret;
pd_lba = bdev_io->u.bdev.offset_blocks;
pd_blocks = bdev_io->u.bdev.num_blocks;
idx = raid1_channel_next_read_base_bdev(raid_ch);
if (spdk_unlikely(raid_ch->base_channel[idx] == NULL)) {
raid_bdev_io_complete(raid_io, SPDK_BDEV_IO_STATUS_FAILED);
return 0;
}
raid1_channel_update_read_bw_counters(raid_ch, pd_blocks);
base_info = &raid_bdev->base_bdev_info[idx];
base_ch = raid_io->raid_ch->base_channel[idx];
raid_io->base_bdev_io_remaining = 1;
raid1_init_ext_io_opts(bdev_io, &io_opts);
ret = raid_bdev_readv_blocks_ext(base_info, base_ch,
ret = spdk_bdev_readv_blocks_ext(base_info->desc, base_ch,
bdev_io->u.bdev.iovs, bdev_io->u.bdev.iovcnt,
pd_lba, pd_blocks, raid1_bdev_io_completion,
raid_io, &io_opts);
@@ -155,7 +88,7 @@ raid1_submit_write_request(struct raid_bdev_io *raid_io)
struct raid_base_bdev_info *base_info;
struct spdk_io_channel *base_ch;
uint64_t pd_lba, pd_blocks;
uint8_t idx;
uint16_t idx = raid_io->base_bdev_io_submitted;
uint64_t base_bdev_io_not_submitted;
int ret = 0;
@@ -167,17 +100,11 @@ raid1_submit_write_request(struct raid_bdev_io *raid_io)
}
raid1_init_ext_io_opts(bdev_io, &io_opts);
for (idx = raid_io->base_bdev_io_submitted; idx < raid_bdev->num_base_bdevs; idx++) {
for (; idx < raid_bdev->num_base_bdevs; idx++) {
base_info = &raid_bdev->base_bdev_info[idx];
base_ch = raid_io->raid_ch->base_channel[idx];
if (base_ch == NULL) {
raid_io->base_bdev_io_submitted++;
raid_bdev_io_complete_part(raid_io, 1, SPDK_BDEV_IO_STATUS_SUCCESS);
continue;
}
ret = raid_bdev_writev_blocks_ext(base_info, base_ch,
ret = spdk_bdev_writev_blocks_ext(base_info->desc, base_ch,
bdev_io->u.bdev.iovs, bdev_io->u.bdev.iovcnt,
pd_lba, pd_blocks, raid1_bdev_io_completion,
raid_io, &io_opts);
@@ -198,10 +125,6 @@ raid1_submit_write_request(struct raid_bdev_io *raid_io)
raid_io->base_bdev_io_submitted++;
}
if (raid_io->base_bdev_io_submitted == 0) {
ret = -ENODEV;
}
return ret;
}
@@ -228,44 +151,6 @@ raid1_submit_rw_request(struct raid_bdev_io *raid_io)
}
}
static void
raid1_ioch_destroy(void *io_device, void *ctx_buf)
{
struct raid1_io_channel *r1ch = ctx_buf;
free(r1ch->base_bdev_read_bw);
}
static int
raid1_ioch_create(void *io_device, void *ctx_buf)
{
struct raid1_io_channel *r1ch = ctx_buf;
struct raid1_info *r1info = io_device;
struct raid_bdev *raid_bdev = r1info->raid_bdev;
int status = 0;
r1ch->base_bdev_read_idx = 0;
r1ch->base_bdev_max_read_bw = 0;
r1ch->base_bdev_read_bw = calloc(raid_bdev->num_base_bdevs,
sizeof(*r1ch->base_bdev_read_bw));
if (!r1ch->base_bdev_read_bw) {
SPDK_ERRLOG("Failed to initialize io channel\n");
status = -ENOMEM;
}
return status;
}
static void
raid1_io_device_unregister_done(void *io_device)
{
struct raid1_info *r1info = io_device;
raid_bdev_module_stop_done(r1info->raid_bdev);
free(r1info);
}
static int
raid1_start(struct raid_bdev *raid_bdev)
{
@@ -281,19 +166,12 @@ raid1_start(struct raid_bdev *raid_bdev)
r1info->raid_bdev = raid_bdev;
RAID_FOR_EACH_BASE_BDEV(raid_bdev, base_info) {
min_blockcnt = spdk_min(min_blockcnt, base_info->data_size);
}
RAID_FOR_EACH_BASE_BDEV(raid_bdev, base_info) {
base_info->data_size = min_blockcnt;
min_blockcnt = spdk_min(min_blockcnt, base_info->bdev->blockcnt);
}
raid_bdev->bdev.blockcnt = min_blockcnt;
raid_bdev->module_private = r1info;
spdk_io_device_register(r1info, raid1_ioch_create, raid1_ioch_destroy,
sizeof(struct raid1_io_channel), NULL);
return 0;
}
@@ -302,49 +180,19 @@ raid1_stop(struct raid_bdev *raid_bdev)
{
struct raid1_info *r1info = raid_bdev->module_private;
spdk_io_device_unregister(r1info, raid1_io_device_unregister_done);
return false;
}
static struct spdk_io_channel *
raid1_get_io_channel(struct raid_bdev *raid_bdev)
{
struct raid1_info *r1info = raid_bdev->module_private;
return spdk_get_io_channel(r1info);
}
static bool
channel_grow_base_bdev(struct raid_bdev *raid_bdev, struct raid_bdev_io_channel *raid_ch)
{
struct raid1_io_channel *raid1_ch = spdk_io_channel_get_ctx(raid_ch->module_channel);
void *tmp;
tmp = realloc(raid1_ch->base_bdev_read_bw,
raid_bdev->num_base_bdevs * sizeof(*raid1_ch->base_bdev_read_bw));
if (!tmp) {
SPDK_ERRLOG("Unable to reallocate raid1 channel base_bdev_modes_read_bw\n");
return false;
}
memset(tmp + raid_ch->num_channels * sizeof(*raid1_ch->base_bdev_read_bw), 0,
sizeof(*raid1_ch->base_bdev_read_bw));
raid1_ch->base_bdev_read_bw = tmp;
free(r1info);
return true;
}
static struct raid_bdev_module g_raid1_module = {
.level = RAID1,
.base_bdevs_min = 1,
.base_bdevs_min = 2,
.base_bdevs_constraint = {CONSTRAINT_MIN_BASE_BDEVS_OPERATIONAL, 1},
.memory_domains_supported = true,
.start = raid1_start,
.stop = raid1_stop,
.submit_rw_request = raid1_submit_rw_request,
.get_io_channel = raid1_get_io_channel,
.channel_grow_base_bdev = channel_grow_base_bdev,
};
RAID_MODULE_REGISTER(&g_raid1_module)

File diff suppressed because it is too large


@@ -401,7 +401,7 @@ def bdev_raid_get_bdevs(client, category):
return client.call('bdev_raid_get_bdevs', params)
def bdev_raid_create(client, name, raid_level, base_bdevs, strip_size=None, strip_size_kb=None, uuid=None, superblock=False):
def bdev_raid_create(client, name, raid_level, base_bdevs, strip_size=None, strip_size_kb=None, uuid=None):
"""Create raid bdev. Either strip size arg will work but one is required.
Args:
@@ -411,13 +411,11 @@ def bdev_raid_create(client, name, raid_level, base_bdevs, strip_size=None, stri
raid_level: raid level of raid bdev, supported values 0
base_bdevs: Space separated names of Nvme bdevs in double quotes, like "Nvme0n1 Nvme1n1 Nvme2n1"
uuid: UUID for this raid bdev (optional)
superblock: information about raid bdev will be stored in superblock on each base bdev,
disabled by default due to backward compatibility
Returns:
None
"""
params = {'name': name, 'raid_level': raid_level, 'base_bdevs': base_bdevs, 'superblock': superblock}
params = {'name': name, 'raid_level': raid_level, 'base_bdevs': base_bdevs}
if strip_size:
params['strip_size'] = strip_size
@@ -444,34 +442,6 @@ def bdev_raid_delete(client, name):
return client.call('bdev_raid_delete', params)
def bdev_raid_remove_base_bdev(client, name):
"""Remove base bdev from existing raid bdev
Args:
name: base bdev name
Returns:
None
"""
params = {'name': name}
return client.call('bdev_raid_remove_base_bdev', params)
def bdev_raid_grow_base_bdev(client, raid_name, base_name):
"""Add a base bdev to a raid bdev, growing the raid's size if needed
Args:
raid_name: raid bdev name
base_name: base bdev name
Returns:
None
"""
params = {'raid_name': raid_name, 'base_name': base_name}
return client.call('bdev_raid_grow_base_bdev', params)
def bdev_aio_create(client, filename, name, block_size=None, readonly=False):
"""Construct a Linux AIO block device.
@@ -1327,13 +1297,12 @@ def bdev_iscsi_delete(client, name):
return client.call('bdev_iscsi_delete', params)
def bdev_passthru_create(client, base_bdev_name, name, uuid=None):
def bdev_passthru_create(client, base_bdev_name, name):
"""Construct a pass-through block device.
Args:
base_bdev_name: name of the existing bdev
name: name of block device
uuid: UUID of block device (optional)
Returns:
Name of created block device.
@@ -1342,8 +1311,6 @@ def bdev_passthru_create(client, base_bdev_name, name, uuid=None):
'base_bdev_name': base_bdev_name,
'name': name,
}
if uuid:
params['uuid'] = uuid
return client.call('bdev_passthru_create', params)


@@ -161,36 +161,6 @@ def bdev_lvol_rename(client, old_name, new_name):
return client.call('bdev_lvol_rename', params)
def bdev_lvol_set_xattr(client, name, xattr_name, xattr_value):
"""Set extended attribute on a logical volume.
Args:
name: name of logical volume
xattr_name: name of extended attribute
xattr_value: value of extended attribute
"""
params = {
'name': name,
'xattr_name': xattr_name,
'xattr_value': xattr_value,
}
return client.call('bdev_lvol_set_xattr', params)
def bdev_lvol_get_xattr(client, name, xattr_name):
"""Get extended attribute on a logical volume.
Args:
name: name of logical volume
xattr_name: name of extended attribute
"""
params = {
'name': name,
'xattr_name': xattr_name,
}
return client.call('bdev_lvol_get_xattr', params)
def bdev_lvol_resize(client, name, size_in_mib):
"""Resize a logical volume.
@@ -253,51 +223,6 @@ def bdev_lvol_decouple_parent(client, name):
return client.call('bdev_lvol_decouple_parent', params)
def bdev_lvol_shallow_copy(client, src_lvol_name, dst_bdev_name):
"""Make a shallow copy of lvol over a given bdev
Args:
src_lvol_name: name of lvol to create a copy from
bdev_name: name of the bdev that acts as destination for the copy
"""
params = {
'src_lvol_name': src_lvol_name,
'dst_bdev_name': dst_bdev_name
}
return client.call('bdev_lvol_shallow_copy', params)
def bdev_lvol_shallow_copy_status(client, src_lvol_name):
"""Get shallow copy status
Args:
src_lvol_name: name of source lvol
"""
params = {
'src_lvol_name': src_lvol_name
}
return client.call('bdev_lvol_shallow_copy_status', params)
def bdev_lvol_get_fragmap(client, name, offset=0, size=0):
"""Get a fragmap for a specific segment of a logical volume using the provided offset and size
Args:
name: lvol bdev name
offset: offset in bytes of the specific segment of the logical volume
size: size in bytes of the specific segment of the logical volume
"""
params = {
'name': name,
}
if offset:
params['offset'] = offset
if size:
params['size'] = size
return client.call('bdev_lvol_get_fragmap', params)
def bdev_lvol_delete_lvstore(client, uuid=None, lvs_name=None):
"""Destroy a logical volume store.

View File

@@ -1129,13 +1129,11 @@ if __name__ == "__main__":
def bdev_passthru_create(args):
print_json(rpc.bdev.bdev_passthru_create(args.client,
base_bdev_name=args.base_bdev_name,
name=args.name,
uuid=args.uuid))
name=args.name))
p = subparsers.add_parser('bdev_passthru_create', help='Add a pass through bdev on existing bdev')
p.add_argument('-b', '--base-bdev-name', help="Name of the existing bdev", required=True)
p.add_argument('-p', '--name', help="Name of the pass through bdev", required=True)
p.add_argument('-u', '--uuid', help="UUID of the bdev")
p.set_defaults(func=bdev_passthru_create)
def bdev_passthru_delete(args):
@@ -1999,28 +1997,6 @@ Format: 'user:u1 secret:s1 muser:mu1 msecret:ms1,user:u2 secret:s2 muser:mu2 mse
p.add_argument('new_name', help='new lvol name')
p.set_defaults(func=bdev_lvol_rename)
def bdev_lvol_set_xattr(args):
rpc.lvol.bdev_lvol_set_xattr(args.client,
name=args.name,
xattr_name=args.xattr_name,
xattr_value=args.xattr_value)
p = subparsers.add_parser('bdev_lvol_set_xattr', help='Set xattr for lvol bdev')
p.add_argument('name', help='lvol bdev name')
p.add_argument('xattr_name', help='xattr name')
p.add_argument('xattr_value', help='xattr value')
p.set_defaults(func=bdev_lvol_set_xattr)
def bdev_lvol_get_xattr(args):
print_dict(rpc.lvol.bdev_lvol_get_xattr(args.client,
name=args.name,
xattr_name=args.xattr_name))
p = subparsers.add_parser('bdev_lvol_get_xattr', help='Get xattr for lvol bdev')
p.add_argument('name', help='lvol bdev name')
p.add_argument('xattr_name', help='xattr name')
p.set_defaults(func=bdev_lvol_get_xattr)
def bdev_lvol_inflate(args):
rpc.lvol.bdev_lvol_inflate(args.client,
name=args.name)
@@ -2063,36 +2039,6 @@ Format: 'user:u1 secret:s1 muser:mu1 msecret:ms1,user:u2 secret:s2 muser:mu2 mse
p.add_argument('name', help='lvol bdev name')
p.set_defaults(func=bdev_lvol_delete)
def bdev_lvol_shallow_copy(args):
rpc.lvol.bdev_lvol_shallow_copy(args.client,
src_lvol_name=args.src_lvol_name,
dst_bdev_name=args.dst_bdev_name)
p = subparsers.add_parser('bdev_lvol_shallow_copy', help="""Make a shallow copy of lvol over a given bdev.
lvol must be read-only""")
p.add_argument('src_lvol_name', help='source lvol name')
p.add_argument('dst_bdev_name', help='destination bdev name')
p.set_defaults(func=bdev_lvol_shallow_copy)
def bdev_lvol_shallow_copy_status(args):
print_json(rpc.lvol.bdev_lvol_shallow_copy_status(args.client,
src_lvol_name=args.src_lvol_name))
p = subparsers.add_parser('bdev_lvol_shallow_copy_status', help='Get shallow copy status')
p.add_argument('src_lvol_name', help='source lvol name')
p.set_defaults(func=bdev_lvol_shallow_copy_status)
def bdev_lvol_get_fragmap(args):
print_json(rpc.lvol.bdev_lvol_get_fragmap(args.client,
name=args.name,
offset=args.offset,
size=args.size))
p = subparsers.add_parser('bdev_lvol_get_fragmap',
help='Get a fragmap for a specific segment of a logical volume using the provided offset and size.')
p.add_argument('name', help='lvol bdev name')
p.add_argument('--offset', help='offset in bytes of the specific segment of the logical volume', type=int, required=False)
p.add_argument('--size', help='size in bytes of the specific segment of the logical volume', type=int, required=False)
p.set_defaults(func=bdev_lvol_get_fragmap)
def bdev_lvol_delete_lvstore(args):
rpc.lvol.bdev_lvol_delete_lvstore(args.client,
uuid=args.uuid,
@@ -2146,16 +2092,13 @@ Format: 'user:u1 secret:s1 muser:mu1 msecret:ms1,user:u2 secret:s2 muser:mu2 mse
strip_size_kb=args.strip_size_kb,
raid_level=args.raid_level,
base_bdevs=base_bdevs,
uuid=args.uuid,
superblock=args.superblock)
uuid=args.uuid)
p = subparsers.add_parser('bdev_raid_create', help='Create new raid bdev')
p.add_argument('-n', '--name', help='raid bdev name', required=True)
p.add_argument('-z', '--strip-size-kb', help='strip size in KB', type=int)
p.add_argument('-r', '--raid-level', help='raid level, raid0, raid1 and a special level concat are supported', required=True)
p.add_argument('-b', '--base-bdevs', help='base bdev names, whitespace-separated list in quotes', required=True)
p.add_argument('--uuid', help='UUID for this raid bdev', required=False)
p.add_argument('-s', '--superblock', help='information about raid bdev will be stored in superblock on each base bdev, '
'disabled by default due to backward compatibility', action='store_true')
p.set_defaults(func=bdev_raid_create)
def bdev_raid_delete(args):
@@ -2165,23 +2108,6 @@ Format: 'user:u1 secret:s1 muser:mu1 msecret:ms1,user:u2 secret:s2 muser:mu2 mse
p.add_argument('name', help='raid bdev name')
p.set_defaults(func=bdev_raid_delete)
def bdev_raid_remove_base_bdev(args):
rpc.bdev.bdev_raid_remove_base_bdev(args.client,
name=args.name)
p = subparsers.add_parser('bdev_raid_remove_base_bdev', help='Remove base bdev from existing raid bdev')
p.add_argument('name', help='base bdev name')
p.set_defaults(func=bdev_raid_remove_base_bdev)
def bdev_raid_grow_base_bdev(args):
rpc.bdev.bdev_raid_grow_base_bdev(args.client,
raid_name=args.raid_name,
base_name=args.base_name)
p = subparsers.add_parser('bdev_raid_grow_base_bdev', help="""Add a base bdev to a raid bdev,
growing the raid's size if needed""")
p.add_argument('raid_name', help='raid bdev name')
p.add_argument('base_name', help='base bdev name')
p.set_defaults(func=bdev_raid_grow_base_bdev)
# split
def bdev_split_create(args):
print_array(rpc.bdev.bdev_split_create(args.client,

View File

@@ -118,13 +118,11 @@ function raid_function_test() {
return 0
}
function verify_raid_bdev_state() (
set +x
function verify_raid_bdev_state() {
local raid_bdev_name=$1
local expected_state=$2
local raid_level=$3
local strip_size=$4
local num_base_bdevs_operational=$5
local raid_bdev
local raid_bdev_info
local num_base_bdevs
@@ -161,49 +159,28 @@ function verify_raid_bdev_state() (
return 1
fi
num_base_bdevs=$(echo $raid_bdev_info | jq -r '[.base_bdevs_list[]] | length')
num_base_bdevs=$(echo $raid_bdev_info | jq -r '.base_bdevs_list | length')
tmp=$(echo $raid_bdev_info | jq -r '.num_base_bdevs')
if [ "$num_base_bdevs" != "$tmp" ]; then
echo "incorrect num_base_bdevs: $tmp, expected: $num_base_bdevs"
return 1
fi
num_base_bdevs_discovered=$(echo $raid_bdev_info | jq -r '[.base_bdevs_list[] | select(.is_configured)] | length')
num_base_bdevs_discovered=$(echo $raid_bdev_info | jq -r '[.base_bdevs_list[] | strings] | length')
tmp=$(echo $raid_bdev_info | jq -r '.num_base_bdevs_discovered')
if [ "$num_base_bdevs_discovered" != "$tmp" ]; then
echo "incorrect num_base_bdevs_discovered: $tmp, expected: $num_base_bdevs_discovered"
return 1
fi
tmp=$(echo $raid_bdev_info | jq -r '.num_base_bdevs_operational')
if [ "$num_base_bdevs_operational" != "$tmp" ]; then
echo "incorrect num_base_bdevs_operational $tmp, expected: $num_base_bdevs_operational"
return 1
fi
)
function has_redundancy() {
case $1 in
"raid1" | "raid5f") return 0 ;;
*) return 1 ;;
esac
}
function raid_state_function_test() {
local raid_level=$1
local num_base_bdevs=$2
local raid_bdev
local base_bdevs=($(for ((i = 1; i <= num_base_bdevs; i++)); do echo BaseBdev$i; done))
local base_bdev1="Non_Existed_Base_1"
local base_bdev2="Non_Existed_Base_2"
local raid_bdev_name="Existed_Raid"
local strip_size
local strip_size_create_arg
if [ $raid_level != "raid1" ]; then
strip_size=64
strip_size_create_arg="-z $strip_size"
else
strip_size=0
fi
local strip_size=64
$rootdir/test/app/bdev_svc/bdev_svc -r $rpc_server -i 0 -L bdev_raid &
raid_pid=$!
@@ -212,61 +189,56 @@ function raid_state_function_test() {
# Step1: create a RAID bdev with no base bdevs
# Expect state: CONFIGURING
$rpc_py bdev_raid_create $strip_size_create_arg -r $raid_level -b "${base_bdevs[*]}" -n $raid_bdev_name
if ! verify_raid_bdev_state $raid_bdev_name "configuring" $raid_level $strip_size $num_base_bdevs; then
$rpc_py bdev_raid_create -z $strip_size -r $raid_level -b "$base_bdev1 $base_bdev2" -n $raid_bdev_name
if ! verify_raid_bdev_state $raid_bdev_name "configuring" $raid_level $strip_size; then
return 1
else
# Test: Delete the RAID bdev successfully
$rpc_py bdev_raid_delete $raid_bdev_name
fi
$rpc_py bdev_raid_delete $raid_bdev_name
# Step2: create one base bdev and add to the RAID bdev
# Expect state: CONFIGURING
$rpc_py bdev_raid_create $strip_size_create_arg -r $raid_level -b "${base_bdevs[*]}" -n $raid_bdev_name
$rpc_py bdev_malloc_create 32 512 -b ${base_bdevs[0]}
waitforbdev ${base_bdevs[0]}
if ! verify_raid_bdev_state $raid_bdev_name "configuring" $raid_level $strip_size $num_base_bdevs; then
$rpc_py bdev_raid_create -z $strip_size -r $raid_level -b "$base_bdev1 $base_bdev2" -n $raid_bdev_name
$rpc_py bdev_malloc_create 32 512 -b $base_bdev1
waitforbdev $base_bdev1
if ! verify_raid_bdev_state $raid_bdev_name "configuring" $raid_level $strip_size; then
$rpc_py bdev_malloc_delete $base_bdev1
$rpc_py bdev_raid_delete $raid_bdev_name
return 1
else
# Test: Delete the RAID bdev successfully
$rpc_py bdev_raid_delete $raid_bdev_name
fi
$rpc_py bdev_raid_delete $raid_bdev_name
# Step3: create remaining base bdevs and add to the RAID bdev
# Step3: create another base bdev and add to the RAID bdev
# Expect state: ONLINE
$rpc_py bdev_raid_create $strip_size_create_arg -r $raid_level -b "${base_bdevs[*]}" -n $raid_bdev_name
for ((i = 1; i < num_base_bdevs; i++)); do
if ! verify_raid_bdev_state $raid_bdev_name "configuring" $raid_level $strip_size $num_base_bdevs; then
return 1
fi
$rpc_py bdev_malloc_create 32 512 -b ${base_bdevs[$i]}
waitforbdev ${base_bdevs[$i]}
done
if ! verify_raid_bdev_state $raid_bdev_name "online" $raid_level $strip_size $num_base_bdevs; then
$rpc_py bdev_raid_create -z $strip_size -r $raid_level -b "$base_bdev1 $base_bdev2" -n $raid_bdev_name
$rpc_py bdev_malloc_create 32 512 -b $base_bdev2
waitforbdev $base_bdev2
if ! verify_raid_bdev_state $raid_bdev_name "online" $raid_level $strip_size; then
$rpc_py bdev_malloc_delete $base_bdev1
$rpc_py bdev_malloc_delete $base_bdev2
$rpc_py bdev_raid_delete $raid_bdev_name
return 1
fi
# Step4: delete one base bdev from the RAID bdev
$rpc_py bdev_malloc_delete ${base_bdevs[0]}
local expected_state
if ! has_redundancy $raid_level; then
expected_state="offline"
else
expected_state="online"
fi
if ! verify_raid_bdev_state $raid_bdev_name $expected_state $raid_level $strip_size $((num_base_bdevs - 1)); then
# Expect state: OFFLINE
$rpc_py bdev_malloc_delete $base_bdev2
if ! verify_raid_bdev_state $raid_bdev_name "offline" $raid_level $strip_size; then
$rpc_py bdev_malloc_delete $base_bdev1
$rpc_py bdev_raid_delete $raid_bdev_name
return 1
fi
# Step5: delete remaining base bdevs from the RAID bdev
# Step5: delete last base bdev from the RAID bdev
# Expect state: removed from system
for ((i = 1; i < num_base_bdevs; i++)); do
raid_bdev=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[0]["name"]')
if [ "$raid_bdev" != $raid_bdev_name ]; then
echo "$raid_bdev_name removed before all base bdevs were deleted"
return 1
fi
$rpc_py bdev_malloc_delete ${base_bdevs[$i]}
done
$rpc_py bdev_malloc_delete $base_bdev1
raid_bdev=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)')
if [ -n "$raid_bdev" ]; then
echo "$raid_bdev_name is not removed"
$rpc_py bdev_raid_delete $raid_bdev_name
return 1
fi
@@ -320,220 +292,12 @@ function raid0_resize_test() {
return 0
}
function raid_superblock_test() {
local raid_level=$1
local num_base_bdevs=$2
local base_bdevs_malloc=()
local base_bdevs_pt=()
local base_bdevs_pt_uuid=()
local raid_bdev_name="raid_bdev1"
local raid_bdev_uuid
local raid_bdev
local strip_size
local strip_size_create_arg
if [ $raid_level != "raid1" ]; then
strip_size=64
strip_size_create_arg="-z $strip_size"
else
strip_size=0
fi
$rootdir/test/app/bdev_svc/bdev_svc -r $rpc_server -i 0 -L bdev_raid &
raid_pid=$!
echo "Process raid pid: $raid_pid"
waitforlisten $raid_pid $rpc_server
# Create base bdevs
for ((i = 1; i <= num_base_bdevs; i++)); do
local bdev_malloc="malloc$i"
local bdev_pt="pt$i"
local bdev_pt_uuid="00000000-0000-0000-0000-00000000000$i"
base_bdevs_malloc+=($bdev_malloc)
base_bdevs_pt+=($bdev_pt)
base_bdevs_pt_uuid+=($bdev_pt_uuid)
$rpc_py bdev_malloc_create 32 512 -b $bdev_malloc
$rpc_py bdev_passthru_create -b $bdev_malloc -p $bdev_pt -u $bdev_pt_uuid
done
# Create RAID bdev with superblock
$rpc_py bdev_raid_create $strip_size_create_arg -r $raid_level -b "${base_bdevs_pt[*]}" -n $raid_bdev_name -s
if ! verify_raid_bdev_state $raid_bdev_name "online" $raid_level $strip_size $num_base_bdevs; then
return 1
fi
# Get RAID bdev's UUID
raid_bdev_uuid=$($rpc_py bdev_get_bdevs -b $raid_bdev_name | jq -r '.[] | .uuid | select(.)')
if [ -z "$raid_bdev_uuid" ]; then
return 1
fi
# Stop the RAID bdev
$rpc_py bdev_raid_delete $raid_bdev_name
raid_bdev=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[]')
if [ -n "$raid_bdev" ]; then
return 1
fi
# Delete the passthru bdevs
for i in "${base_bdevs_pt[@]}"; do
$rpc_py bdev_passthru_delete $i
done
if [ "$($rpc_py bdev_get_bdevs | jq -r '[.[] | select(.product_name == "passthru")] | any')" == "true" ]; then
return 1
fi
# Try to create new RAID bdev from malloc bdevs
# Should not reach online state due to superblock still present on base bdevs
$rpc_py bdev_raid_create $strip_size_create_arg -r $raid_level -b "${base_bdevs_malloc[*]}" -n $raid_bdev_name
if ! verify_raid_bdev_state $raid_bdev_name "configuring" $raid_level $strip_size $num_base_bdevs; then
return 1
fi
# Stop the RAID bdev
$rpc_py bdev_raid_delete $raid_bdev_name
raid_bdev=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[]')
if [ -n "$raid_bdev" ]; then
return 1
fi
# Re-add first base bdev
$rpc_py bdev_passthru_create -b ${base_bdevs_malloc[0]} -p ${base_bdevs_pt[0]} -u ${base_bdevs_pt_uuid[0]}
# Check if the RAID bdev was assembled from superblock
if ! verify_raid_bdev_state $raid_bdev_name "configuring" $raid_level $strip_size $num_base_bdevs; then
return 1
fi
# Re-add remaining base bdevs
for ((i = 1; i < num_base_bdevs; i++)); do
$rpc_py bdev_passthru_create -b ${base_bdevs_malloc[$i]} -p ${base_bdevs_pt[$i]} -u ${base_bdevs_pt_uuid[$i]}
done
# Check if the RAID bdev is in online state
if ! verify_raid_bdev_state $raid_bdev_name "online" $raid_level $strip_size $num_base_bdevs; then
return 1
fi
# Check if the RAID bdev has the same UUID as when first created
if [ "$($rpc_py bdev_get_bdevs -b $raid_bdev_name | jq -r '.[] | .uuid')" != "$raid_bdev_uuid" ]; then
return 1
fi
if has_redundancy $raid_level; then
# Delete one base bdev
$rpc_py bdev_passthru_delete ${base_bdevs_pt[0]}
# Check if the RAID bdev is in online state (degraded)
if ! verify_raid_bdev_state $raid_bdev_name "online" $raid_level $strip_size $((num_base_bdevs - 1)); then
return 1
fi
# Stop the RAID bdev
$rpc_py bdev_raid_delete $raid_bdev_name
raid_bdev=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[]')
if [ -n "$raid_bdev" ]; then
return 1
fi
# Delete remaining base bdevs
for ((i = 1; i < num_base_bdevs; i++)); do
$rpc_py bdev_passthru_delete ${base_bdevs_pt[$i]}
done
# Re-add base bdevs from the second up to (not including) the last one
for ((i = 1; i < num_base_bdevs - 1; i++)); do
$rpc_py bdev_passthru_create -b ${base_bdevs_malloc[$i]} -p ${base_bdevs_pt[$i]} -u ${base_bdevs_pt_uuid[$i]}
# Check if the RAID bdev is in configuring state
if ! verify_raid_bdev_state $raid_bdev_name "configuring" $raid_level $strip_size $((num_base_bdevs - 1)); then
return 1
fi
done
# Re-add the last base bdev
i=$((num_base_bdevs - 1))
$rpc_py bdev_passthru_create -b ${base_bdevs_malloc[$i]} -p ${base_bdevs_pt[$i]} -u ${base_bdevs_pt_uuid[$i]}
# Check if the RAID bdev is in online state (degraded)
if ! verify_raid_bdev_state $raid_bdev_name "online" $raid_level $strip_size $((num_base_bdevs - 1)); then
return 1
fi
if [ $num_base_bdevs -gt 2 ]; then
# Stop the RAID bdev
$rpc_py bdev_raid_delete $raid_bdev_name
raid_bdev=$($rpc_py bdev_raid_get_bdevs all | jq -r '.[]')
if [ -n "$raid_bdev" ]; then
return 1
fi
# Delete remaining base bdevs
for ((i = 1; i < num_base_bdevs; i++)); do
$rpc_py bdev_passthru_delete ${base_bdevs_pt[$i]}
done
# Re-add first base bdev
# This is the "failed" device and contains the "old" version of the superblock
$rpc_py bdev_passthru_create -b ${base_bdevs_malloc[0]} -p ${base_bdevs_pt[0]} -u ${base_bdevs_pt_uuid[0]}
# Check if the RAID bdev is in configuring state
if ! verify_raid_bdev_state $raid_bdev_name "configuring" $raid_level $strip_size $num_base_bdevs; then
return 1
fi
# Re-add the last base bdev
i=$((num_base_bdevs - 1))
$rpc_py bdev_passthru_create -b ${base_bdevs_malloc[$i]} -p ${base_bdevs_pt[$i]} -u ${base_bdevs_pt_uuid[$i]}
# Check if the RAID bdev is in configuring state
# This should use the newer superblock version and have n-1 online base bdevs
if ! verify_raid_bdev_state $raid_bdev_name "configuring" $raid_level $strip_size $((num_base_bdevs - 1)); then
return 1
fi
# Re-add remaining base bdevs
for ((i = 1; i < num_base_bdevs - 1; i++)); do
$rpc_py bdev_passthru_create -b ${base_bdevs_malloc[$i]} -p ${base_bdevs_pt[$i]} -u ${base_bdevs_pt_uuid[$i]}
done
# Check if the RAID bdev is in online state (degraded)
if ! verify_raid_bdev_state $raid_bdev_name "online" $raid_level $strip_size $((num_base_bdevs - 1)); then
return 1
fi
fi
# Check if the RAID bdev has the same UUID as when first created
if [ "$($rpc_py bdev_get_bdevs -b $raid_bdev_name | jq -r '.[] | .uuid')" != "$raid_bdev_uuid" ]; then
return 1
fi
fi
killprocess $raid_pid
return 0
}
trap 'on_error_exit;' ERR
raid_function_test raid0
raid_function_test concat
raid_state_function_test raid0
raid_state_function_test concat
raid0_resize_test
for n in {2..4}; do
for level in raid0 concat raid1; do
raid_state_function_test $level $n
raid_superblock_test $level $n
done
done
if [ "$CONFIG_RAID5F" == y ]; then
for n in {3..4}; do
raid_state_function_test raid5f $n
raid_superblock_test raid5f $n
done
fi
rm -f $tmp_file

View File

@@ -55,8 +55,6 @@ function start_spdk_tgt() {
function setup_bdev_conf() {
"$rpc_py" <<- RPC
iobuf_set_options --small-pool-count 10000 --large-pool-count 1100
framework_start_init
bdev_split_create Malloc1 2
bdev_split_create -s 4 Malloc2 8
bdev_malloc_create -b Malloc0 32 512
@@ -67,12 +65,9 @@ function setup_bdev_conf() {
bdev_malloc_create -b Malloc5 32 512
bdev_malloc_create -b Malloc6 32 512
bdev_malloc_create -b Malloc7 32 512
bdev_malloc_create -b Malloc8 32 512
bdev_malloc_create -b Malloc9 32 512
bdev_passthru_create -p TestPT -b Malloc3
bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc4 Malloc5"
bdev_raid_create -n concat0 -z 64 -r concat -b "Malloc6 Malloc7"
bdev_raid_create -n raid1 -r 1 -b "Malloc8 Malloc9"
bdev_set_qos_limit --rw_mbytes_per_sec 100 Malloc3
bdev_set_qos_limit --rw_ios_per_sec 20000 Malloc0
RPC
@@ -673,7 +668,7 @@ if [ -n "$crypto_device" ] && [ -n "$wcs_file" ]; then
exit 1
fi
fi
if [[ $test_type == bdev || $test_type == crypto_* ]]; then
if [[ $test_type == crypto_* ]]; then
wait_for_rpc="--wait-for-rpc"
fi
start_spdk_tgt
@@ -726,8 +721,7 @@ esac
cat <<- CONF > "$conf_file"
{"subsystems":[
$("$rpc_py" save_subsystem_config -n accel),
$("$rpc_py" save_subsystem_config -n bdev),
$("$rpc_py" save_subsystem_config -n iobuf)
$("$rpc_py" save_subsystem_config -n bdev)
]}
CONF

View File

@@ -1,71 +0,0 @@
#!/usr/bin/env bash
# SPDX-License-Identifier: BSD-3-Clause
# Copyright (C) 2023 SUSE LLC.
# All rights reserved.
#
testdir=$(readlink -f $(dirname $0))
rootdir=$(readlink -f $testdir/../..)
source $rootdir/test/common/autotest_common.sh
source $rootdir/test/lvol/common.sh
source $rootdir/test/bdev/nbd_common.sh
function test_shallow_copy_compare() {
# Create lvs
bs_malloc_name=$(rpc_cmd bdev_malloc_create 20 $MALLOC_BS)
lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$bs_malloc_name" lvs_test)
# Create lvol with 4 clusters
lvol_size=$((LVS_DEFAULT_CLUSTER_SIZE_MB * 4))
lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" lvol_test "$lvol_size" -t)
# Fill the second and fourth clusters of the lvol
nbd_start_disks "$DEFAULT_RPC_ADDR" "$lvol_uuid" /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs="$LVS_DEFAULT_CLUSTER_SIZE" count=1 seek=1
dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs="$LVS_DEFAULT_CLUSTER_SIZE" count=1 seek=3
nbd_stop_disks "$DEFAULT_RPC_ADDR" /dev/nbd0
# Create a snapshot of the lvol bdev
snapshot_uuid=$(rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot)
# Fill the first and third clusters of the lvol
nbd_start_disks "$DEFAULT_RPC_ADDR" "$lvol_uuid" /dev/nbd0
dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs="$LVS_DEFAULT_CLUSTER_SIZE" count=1
dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs="$LVS_DEFAULT_CLUSTER_SIZE" count=1 seek=2
nbd_stop_disks "$DEFAULT_RPC_ADDR" /dev/nbd0
# Set lvol as read only to perform the copy
rpc_cmd bdev_lvol_set_read_only "$lvol_uuid"
# Create external bdev to make a shallow copy of lvol on
ext_malloc_name=$(rpc_cmd bdev_malloc_create "$lvol_size" $MALLOC_BS)
# Make a shallow copy of lvol over external bdev
rpc_cmd bdev_lvol_shallow_copy "$lvol_uuid" "$ext_malloc_name"
# Create nbd devices of lvol and external bdev for comparison
nbd_start_disks "$DEFAULT_RPC_ADDR" "$lvol_uuid" /dev/nbd0
nbd_start_disks "$DEFAULT_RPC_ADDR" "$ext_malloc_name" /dev/nbd1
# Compare lvol and external bdev in the first and third clusters
cmp -n "$LVS_DEFAULT_CLUSTER_SIZE" /dev/nbd0 /dev/nbd1
cmp -n "$LVS_DEFAULT_CLUSTER_SIZE" /dev/nbd0 /dev/nbd1 "$((LVS_DEFAULT_CLUSTER_SIZE * 2))" "$((LVS_DEFAULT_CLUSTER_SIZE * 2))"
# Check that the second and fourth clusters of the external bdev are zero-filled
cmp -n "$LVS_DEFAULT_CLUSTER_SIZE" /dev/nbd1 /dev/zero "$LVS_DEFAULT_CLUSTER_SIZE"
cmp -n "$LVS_DEFAULT_CLUSTER_SIZE" /dev/nbd1 /dev/zero "$((LVS_DEFAULT_CLUSTER_SIZE * 3))"
# Stop nbd devices
nbd_stop_disks "$DEFAULT_RPC_ADDR" /dev/nbd1
nbd_stop_disks "$DEFAULT_RPC_ADDR" /dev/nbd0
}
$SPDK_BIN_DIR/spdk_tgt &
spdk_pid=$!
trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT
waitforlisten $spdk_pid
modprobe nbd
run_test "test_shallow_copy_compare" test_shallow_copy_compare
trap - SIGINT SIGTERM EXIT
killprocess $spdk_pid

View File

@@ -1,214 +0,0 @@
#!/usr/bin/env bash
# SPDX-License-Identifier: BSD-3-Clause
# Copyright (C) 2023 SUSE LLC.
# All rights reserved.
#
testdir=$(readlink -f $(dirname $0))
rootdir=$(readlink -f $testdir/../..)
source $rootdir/test/common/autotest_common.sh
source $rootdir/test/lvol/common.sh
source $rootdir/test/bdev/nbd_common.sh
NUM_CLUSTERS=10
LVS_DEFAULT_CLUSTER_SIZE_BTYE=$((LVS_DEFAULT_CLUSTER_SIZE_MB * 1024 * 1024))
function verify() {
local fragmap="$1"
local expected_cluster_size="$2"
local expected_num_clusters="$3"
local expected_num_allocated_clusters="$4"
local expected_fragmap="$5"
[ "$(jq '.cluster_size' <<< "$fragmap")" == "$expected_cluster_size" ]
[ "$(jq '.num_clusters' <<< "$fragmap")" == "$expected_num_clusters" ]
[ "$(jq '.num_allocated_clusters' <<< "$fragmap")" == "$expected_num_allocated_clusters" ]
[ "$(jq -r '.fragmap' <<< "$fragmap")" == "$expected_fragmap" ]
}
function test_fragmap_empty_lvol() {
# Create lvs
malloc_name=$(rpc_cmd bdev_malloc_create 80 $MALLOC_BS)
lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)
# Create lvol with 10 clusters
lvol_size=$((LVS_DEFAULT_CLUSTER_SIZE_MB * "$NUM_CLUSTERS"))
lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" lvol_test "$lvol_size" -t)
nbd_start_disks "$DEFAULT_RPC_ADDR" "$lvol_uuid" /dev/nbd0
# Expected map: 00000000 00000000
fragmap=$(rpc_cmd bdev_lvol_get_fragmap $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "$NUM_CLUSTERS" 0 "AAA="
# Stop nbd device
nbd_stop_disks "$DEFAULT_RPC_ADDR" /dev/nbd0
rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
rpc_cmd bdev_malloc_delete "$malloc_name"
check_leftover_devices
}
function test_fragmap_data_hole() {
# Create lvs
malloc_name=$(rpc_cmd bdev_malloc_create 80 $MALLOC_BS)
lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)
# Create lvol with 10 clusters
lvol_size=$((LVS_DEFAULT_CLUSTER_SIZE_MB * "$NUM_CLUSTERS"))
lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" lvol_test "$lvol_size" -t)
nbd_start_disks "$DEFAULT_RPC_ADDR" "$lvol_uuid" /dev/nbd0
# Expected map: 00000001 00000000 (1st cluster is written)
# Read entire fragmap
dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4096 count=1
fragmap=$(rpc_cmd bdev_lvol_get_fragmap $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "$NUM_CLUSTERS" "1" "AQA="
# Read fragmap [0, 5) clusters
size=$((LVS_DEFAULT_CLUSTER_SIZE_BTYE * 5))
fragmap=$(rpc_cmd bdev_lvol_get_fragmap --offset 0 --size $size $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "5" "1" "AQ=="
# Read fragmap [5, 10) clusters
offset=$((LVS_DEFAULT_CLUSTER_SIZE_BTYE * 5))
fragmap=$(rpc_cmd bdev_lvol_get_fragmap --offset $offset --size $size $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "5" "0" "AA=="
# Stop nbd device
nbd_stop_disks "$DEFAULT_RPC_ADDR" /dev/nbd0
rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
rpc_cmd bdev_malloc_delete "$malloc_name"
check_leftover_devices
}
function test_fragmap_hole_data() {
# Create lvs
malloc_name=$(rpc_cmd bdev_malloc_create 80 $MALLOC_BS)
lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)
# Create lvol with 10 clusters
lvol_size=$((LVS_DEFAULT_CLUSTER_SIZE_MB * "$NUM_CLUSTERS"))
lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" lvol_test "$lvol_size" -t)
nbd_start_disks "$DEFAULT_RPC_ADDR" "$lvol_uuid" /dev/nbd0
# Expected map: 00000000 00000010 (10th cluster is written)
# Read entire fragmap
dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4096 count=1 seek=9216
fragmap=$(rpc_cmd bdev_lvol_get_fragmap $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "$NUM_CLUSTERS" "1" "AAI="
# Read fragmap [0, 5) clusters
size=$((LVS_DEFAULT_CLUSTER_SIZE_BTYE * 5))
fragmap=$(rpc_cmd bdev_lvol_get_fragmap --offset 0 --size $size $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "5" "0" "AA=="
# Read fragmap [5, 10) clusters
offset=$((LVS_DEFAULT_CLUSTER_SIZE_BTYE * 5))
fragmap=$(rpc_cmd bdev_lvol_get_fragmap --offset $offset --size $size $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "5" "1" "EA=="
# Stop nbd device
nbd_stop_disks "$DEFAULT_RPC_ADDR" /dev/nbd0
rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
rpc_cmd bdev_malloc_delete "$malloc_name"
check_leftover_devices
}
function test_fragmap_hole_data_hole() {
# Create lvs
malloc_name=$(rpc_cmd bdev_malloc_create 80 $MALLOC_BS)
lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)
# Create lvol with 10 clusters
lvol_size=$((LVS_DEFAULT_CLUSTER_SIZE_MB * "$NUM_CLUSTERS"))
lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" lvol_test "$lvol_size" -t)
nbd_start_disks "$DEFAULT_RPC_ADDR" "$lvol_uuid" /dev/nbd0
# Expected map: 01100000 00000000
# Read entire fragmap
dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4096 count=2048 seek=5120
fragmap=$(rpc_cmd bdev_lvol_get_fragmap $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "$NUM_CLUSTERS" "2" "YAA="
# Read fragmap [0, 5) clusters
size=$((LVS_DEFAULT_CLUSTER_SIZE_BTYE * 5))
fragmap=$(rpc_cmd bdev_lvol_get_fragmap --offset 0 --size $size $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "5" "0" "AA=="
# Read fragmap [5, 10) clusters
offset=$((LVS_DEFAULT_CLUSTER_SIZE_BTYE * 5))
fragmap=$(rpc_cmd bdev_lvol_get_fragmap --offset $offset --size $size $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "5" "2" "Aw=="
# Stop nbd device
nbd_stop_disks "$DEFAULT_RPC_ADDR" /dev/nbd0
rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
rpc_cmd bdev_malloc_delete "$malloc_name"
check_leftover_devices
}
function test_fragmap_data_hole_data() {
# Create lvs
malloc_name=$(rpc_cmd bdev_malloc_create 80 $MALLOC_BS)
lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)
# Create lvol with 10 clusters
lvol_size=$((LVS_DEFAULT_CLUSTER_SIZE_MB * "$NUM_CLUSTERS"))
lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" lvol_test "$lvol_size" -t)
nbd_start_disks "$DEFAULT_RPC_ADDR" "$lvol_uuid" /dev/nbd0
# Expected map: 10000111 00000011
# Read entire fragmap
dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4096 count=3072 seek=0
dd if=/dev/urandom of=/dev/nbd0 oflag=direct bs=4096 count=3072 seek=7168
fragmap=$(rpc_cmd bdev_lvol_get_fragmap $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "$NUM_CLUSTERS" "6" "hwM="
# Read fragmap [0, 5) clusters
size=$((LVS_DEFAULT_CLUSTER_SIZE_BTYE * 5))
fragmap=$(rpc_cmd bdev_lvol_get_fragmap --offset 0 --size $size $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "5" "3" "Bw=="
# Read fragmap [5, 10) clusters
offset=$((LVS_DEFAULT_CLUSTER_SIZE_BTYE * 5))
fragmap=$(rpc_cmd bdev_lvol_get_fragmap --offset $offset --size $size $lvol_uuid)
verify "$fragmap" "$LVS_DEFAULT_CLUSTER_SIZE_BTYE" "5" "3" "HA=="
# Stop nbd device
nbd_stop_disks "$DEFAULT_RPC_ADDR" /dev/nbd0
rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
rpc_cmd bdev_lvol_get_lvstores -u "$lvs_uuid" && false
rpc_cmd bdev_malloc_delete "$malloc_name"
check_leftover_devices
}
$SPDK_BIN_DIR/spdk_tgt &
spdk_pid=$!
trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT
waitforlisten $spdk_pid
modprobe nbd
run_test "test_fragmap_empty_lvol" test_fragmap_empty_lvol
run_test "test_fragmap_data_hole" test_fragmap_data_hole
run_test "test_fragmap_hole_data" test_fragmap_hole_data
run_test "test_fragmap_hole_data_hole" test_fragmap_hole_data_hole
run_test "test_fragmap_data_hole_data" test_fragmap_data_hole_data
trap - SIGINT SIGTERM EXIT
killprocess $spdk_pid

View File

@@ -20,9 +20,6 @@ run_test "lvol_rename" $rootdir/test/lvol/rename.sh
run_test "lvol_provisioning" $rootdir/test/lvol/thin_provisioning.sh
run_test "lvol_esnap" $rootdir/test/lvol/esnap/esnap
run_test "lvol_external_snapshot" $rootdir/test/lvol/external_snapshot.sh
run_test "lvol_external_copy" $rootdir/test/lvol/external_copy.sh
run_test "lvol_fragmap" $rootdir/test/lvol/fragmap.sh
run_test "lvol_xattr" $rootdir/test/lvol/xattr.sh
timing_exit basic
timing_exit lvol

View File

@@ -1,86 +0,0 @@
#!/usr/bin/env bash
# SPDX-License-Identifier: BSD-3-Clause
# Copyright (C) 2023 SUSE LLC.
# All rights reserved.
#
testdir=$(readlink -f $(dirname $0))
rootdir=$(readlink -f $testdir/../..)
source $rootdir/test/common/autotest_common.sh
source $rootdir/test/lvol/common.sh
function is_rfc3339_formatted() {
# Define the RFC3339 regex pattern
rfc3339_pattern="^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z$"
# Check if the input string matches the pattern
if [[ $1 =~ $rfc3339_pattern ]]; then
echo "The time string '$1' is in RFC3339 format."
return 0 # Success
else
echo "The time string '$1' is not in RFC3339 format."
return 1 # Failure
fi
}
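The same restricted pattern can be sketched in Python. Note that, like the shell version, it accepts only the 'Z'-suffixed form; fractional seconds and numeric UTC offsets, which RFC3339 also allows, are rejected:

```python
import re

# Same restricted RFC3339 shape as the shell function: date, 'T', time,
# and a literal 'Z' suffix (no fractional seconds, no numeric offsets).
RFC3339_Z = re.compile(r"^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}Z$")


def is_rfc3339_formatted(timestring):
    return bool(RFC3339_Z.match(timestring))


is_rfc3339_formatted("2023-05-09T05:35:39Z")       # matches
is_rfc3339_formatted("2023-05-09 05:35:39")        # no 'T'/'Z', rejected
is_rfc3339_formatted("2023-05-09T05:35:39+00:00")  # numeric offset, rejected
```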
function test_set_xattr() {
# Create lvs
malloc_name=$(rpc_cmd bdev_malloc_create 20 $MALLOC_BS)
lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)
# Create lvol with 4 clusters
lvol_size=$((LVS_DEFAULT_CLUSTER_SIZE_MB * 4))
lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" lvol_test "$lvol_size" -t)
rpc_cmd bdev_lvol_set_xattr "$lvol_uuid" "foo" "bar"
value=$(rpc_cmd bdev_lvol_get_xattr "$lvol_uuid" "foo")
[ "\"bar\"" = "$value" ]
# Snapshot is read-only, so setting xattr should fail
snapshot_uuid=$(rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot)
NOT rpc_cmd bdev_lvol_set_xattr "$snapshot_uuid" "foo" "bar"
rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
rpc_cmd bdev_malloc_delete "$malloc_name"
check_leftover_devices
}
function test_creation_time_xattr() {
# Create lvs
malloc_name=$(rpc_cmd bdev_malloc_create 20 $MALLOC_BS)
lvs_uuid=$(rpc_cmd bdev_lvol_create_lvstore "$malloc_name" lvs_test)
# Create lvol with 4 clusters
lvol_size=$((LVS_DEFAULT_CLUSTER_SIZE_MB * 4))
lvol_uuid=$(rpc_cmd bdev_lvol_create -u "$lvs_uuid" lvol_test "$lvol_size" -t)
value=$(rpc_cmd bdev_lvol_get_xattr "$lvol_uuid" "creation_time")
value="${value//\"/}"
is_rfc3339_formatted ${value}
# Create a snapshot of the lvol bdev
snapshot_uuid=$(rpc_cmd bdev_lvol_snapshot lvs_test/lvol_test lvol_snapshot)
value=$(rpc_cmd bdev_lvol_get_xattr "$snapshot_uuid" "creation_time")
value="${value//\"/}"
is_rfc3339_formatted ${value}
clone_uuid=$(rpc_cmd bdev_lvol_clone "$snapshot_uuid" lvol_clone)
value=$(rpc_cmd bdev_lvol_get_xattr "$snapshot_uuid" "creation_time")
value="${value//\"/}"
is_rfc3339_formatted ${value}
rpc_cmd bdev_lvol_delete_lvstore -u "$lvs_uuid"
rpc_cmd bdev_malloc_delete "$malloc_name"
check_leftover_devices
}
$SPDK_BIN_DIR/spdk_tgt &
spdk_pid=$!
trap 'killprocess "$spdk_pid"; exit 1' SIGINT SIGTERM EXIT
waitforlisten $spdk_pid
modprobe nbd
run_test "test_set_xattr" test_set_xattr
run_test "test_creation_time_xattr" test_creation_time_xattr
trap - SIGINT SIGTERM EXIT
killprocess $spdk_pid
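As a quick reference, the RFC3339 check from the deleted script can be exercised on its own (a minimal sketch; the sample timestamps below are illustrative, not taken from a real run):

```shell
#!/usr/bin/env bash
# Standalone version of the is_rfc3339_formatted() helper from the deleted
# xattr.sh. Note the pattern only accepts the UTC "Z" suffix form; numeric
# offsets (e.g. "+02:00") and fractional seconds are rejected.
is_rfc3339_formatted() {
	local rfc3339_pattern="^[0-9]{4}-[0-9]{2}-[0-9]{2}T[0-9]{2}:[0-9]{2}:[0-9]{2}Z$"
	[[ $1 =~ $rfc3339_pattern ]]
}

is_rfc3339_formatted "2023-05-09T05:35:39Z"   # exit status 0 (matches)
! is_rfc3339_formatted "2023-05-09 05:35:39"  # space instead of 'T': rejected
```

This is why the test strips the surrounding quotes from the RPC output (`value="${value//\"/}"`) before calling the helper: the anchored pattern would otherwise fail on the leading `"`.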

View File

@@ -5943,7 +5943,7 @@ bdev_register_uuid_alias(void)
bdev = allocate_bdev("bdev0");
/* Make sure a UUID was generated */
CU_ASSERT_FALSE(spdk_uuid_is_null(&bdev->uuid));
CU_ASSERT_FALSE(spdk_mem_all_zero(&bdev->uuid, sizeof(bdev->uuid)));
/* Check that a UUID alias was registered */
spdk_uuid_fmt_lower(uuid, sizeof(uuid), &bdev->uuid);

View File

@@ -6,7 +6,7 @@
SPDK_ROOT_DIR := $(abspath $(CURDIR)/../../../../..)
include $(SPDK_ROOT_DIR)/mk/spdk.common.mk
DIRS-y = bdev_raid.c bdev_raid_sb.c concat.c raid1.c
DIRS-y = bdev_raid.c concat.c raid1.c
DIRS-$(CONFIG_RAID5F) += raid5f.c

View File

@@ -12,7 +12,6 @@
#include "bdev/raid/bdev_raid.c"
#include "bdev/raid/bdev_raid_rpc.c"
#include "bdev/raid/raid0.c"
#include "bdev/raid/raid1.c"
#include "common/lib/ut_multithread.c"
#define MAX_BASE_DRIVES 32
@@ -74,9 +73,6 @@ struct raid_io_ranges g_io_ranges[MAX_TEST_IO_RANGE];
uint32_t g_io_range_idx;
uint64_t g_lba_offset;
struct spdk_io_channel g_io_channel;
bool g_bdev_io_defer_completion;
TAILQ_HEAD(, spdk_bdev_io) g_deferred_ios = TAILQ_HEAD_INITIALIZER(g_deferred_ios);
struct spdk_io_channel *g_per_thread_base_bdev_channels;
DEFINE_STUB_V(spdk_bdev_module_examine_done, (struct spdk_bdev_module *module));
DEFINE_STUB_V(spdk_bdev_module_list_add, (struct spdk_bdev_module *bdev_module));
@@ -101,7 +97,6 @@ DEFINE_STUB(spdk_json_decode_uint32, int, (const struct spdk_json_val *val, void
DEFINE_STUB(spdk_json_decode_array, int, (const struct spdk_json_val *values,
spdk_json_decode_fn decode_func,
void *out, size_t max_size, size_t *out_size, size_t stride), 0);
DEFINE_STUB(spdk_json_decode_bool, int, (const struct spdk_json_val *val, void *out), 0);
DEFINE_STUB(spdk_json_write_name, int, (struct spdk_json_write_ctx *w, const char *name), 0);
DEFINE_STUB(spdk_json_write_object_begin, int, (struct spdk_json_write_ctx *w), 0);
DEFINE_STUB(spdk_json_write_named_object_begin, int, (struct spdk_json_write_ctx *w,
@@ -114,8 +109,6 @@ DEFINE_STUB(spdk_json_write_named_array_begin, int, (struct spdk_json_write_ctx
const char *name), 0);
DEFINE_STUB(spdk_json_write_bool, int, (struct spdk_json_write_ctx *w, bool val), 0);
DEFINE_STUB(spdk_json_write_null, int, (struct spdk_json_write_ctx *w), 0);
DEFINE_STUB(spdk_json_write_named_uint64, int, (struct spdk_json_write_ctx *w, const char *name,
uint64_t val), 0);
DEFINE_STUB(spdk_strerror, const char *, (int errnum), NULL);
DEFINE_STUB(spdk_bdev_queue_io_wait, int, (struct spdk_bdev *bdev, struct spdk_io_channel *ch,
struct spdk_bdev_io_wait_entry *entry), 0);
@@ -128,52 +121,13 @@ DEFINE_STUB(spdk_bdev_get_dif_type, enum spdk_dif_type, (const struct spdk_bdev
SPDK_DIF_DISABLE);
DEFINE_STUB(spdk_bdev_is_dif_head_of_md, bool, (const struct spdk_bdev *bdev), false);
DEFINE_STUB(spdk_bdev_notify_blockcnt_change, int, (struct spdk_bdev *bdev, uint64_t size), 0);
DEFINE_STUB(spdk_bdev_first, struct spdk_bdev *, (void), NULL);
DEFINE_STUB(spdk_bdev_next, struct spdk_bdev *, (struct spdk_bdev *prev), NULL);
DEFINE_STUB_V(raid_bdev_sb_update_crc, (struct raid_bdev_superblock *sb));
int
raid_bdev_load_base_bdev_superblock(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
raid_bdev_load_sb_cb cb, void *cb_ctx)
{
if (cb) {
cb(NULL, -EINVAL, cb_ctx);
}
return 0;
}
int
raid_bdev_save_base_bdev_superblock(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
const struct raid_bdev_superblock *sb, raid_bdev_save_sb_cb cb, void *cb_ctx)
{
if (cb) {
cb(0, cb_ctx);
}
return 0;
}
const struct spdk_uuid *
spdk_bdev_get_uuid(const struct spdk_bdev *bdev)
{
return &bdev->uuid;
}
struct spdk_io_channel *
spdk_bdev_get_io_channel(struct spdk_bdev_desc *desc)
{
struct spdk_io_channel *ch;
g_io_channel.thread = spdk_get_thread();
if (g_per_thread_base_bdev_channels) {
ch = &g_per_thread_base_bdev_channels[g_ut_thread_id];
} else {
ch = &g_io_channel;
}
ch->thread = spdk_get_thread();
return ch;
return &g_io_channel;
}
static void
@@ -226,8 +180,6 @@ set_globals(void)
g_json_decode_obj_err = 0;
g_json_decode_obj_create = 0;
g_lba_offset = 0;
g_bdev_io_defer_completion = false;
g_per_thread_base_bdev_channels = NULL;
}
static void
@@ -243,8 +195,6 @@ base_bdevs_cleanup(void)
free(bdev);
}
}
free(g_per_thread_base_bdev_channels);
}
static void
@@ -257,7 +207,7 @@ check_and_remove_raid_bdev(struct raid_bdev *raid_bdev)
RAID_FOR_EACH_BASE_BDEV(raid_bdev, base_info) {
if (base_info->bdev) {
raid_bdev_free_base_bdev_resource(base_info);
raid_bdev_free_base_bdev_resource(raid_bdev, base_info);
}
}
assert(raid_bdev->num_base_bdevs_discovered == 0);
@@ -274,7 +224,6 @@ reset_globals(void)
}
g_rpc_req = NULL;
g_rpc_req_size = 0;
g_per_thread_base_bdev_channels = NULL;
}
void
@@ -307,29 +256,6 @@ set_io_output(struct io_output *output,
output->iotype = iotype;
}
static void
child_io_complete(struct spdk_bdev_io *child_io, spdk_bdev_io_completion_cb cb, void *cb_arg)
{
if (g_bdev_io_defer_completion) {
child_io->internal.cb = cb;
child_io->internal.caller_ctx = cb_arg;
TAILQ_INSERT_TAIL(&g_deferred_ios, child_io, internal.link);
} else {
cb(child_io, g_child_io_status_flag, cb_arg);
}
}
static void
complete_deferred_ios(void)
{
struct spdk_bdev_io *child_io;
while ((child_io = TAILQ_FIRST(&g_deferred_ios))) {
TAILQ_REMOVE(&g_deferred_ios, child_io, internal.link);
child_io->internal.cb(child_io, g_child_io_status_flag, child_io->internal.caller_ctx);
}
}
/* It will cache the split IOs for verification */
int
spdk_bdev_writev_blocks(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
@@ -356,7 +282,7 @@ spdk_bdev_writev_blocks(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
child_io = calloc(1, sizeof(struct spdk_bdev_io));
SPDK_CU_ASSERT_FATAL(child_io != NULL);
child_io_complete(child_io, cb, cb_arg);
cb(child_io, g_child_io_status_flag, cb_arg);
}
return g_bdev_io_submit_status;
@@ -398,7 +324,7 @@ spdk_bdev_reset(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
child_io = calloc(1, sizeof(struct spdk_bdev_io));
SPDK_CU_ASSERT_FATAL(child_io != NULL);
child_io_complete(child_io, cb, cb_arg);
cb(child_io, g_child_io_status_flag, cb_arg);
}
return g_bdev_io_submit_status;
@@ -423,7 +349,7 @@ spdk_bdev_unmap_blocks(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
child_io = calloc(1, sizeof(struct spdk_bdev_io));
SPDK_CU_ASSERT_FATAL(child_io != NULL);
child_io_complete(child_io, cb, cb_arg);
cb(child_io, g_child_io_status_flag, cb_arg);
}
return g_bdev_io_submit_status;
@@ -472,6 +398,12 @@ spdk_bdev_desc_get_bdev(struct spdk_bdev_desc *desc)
return (void *)desc;
}
char *
spdk_sprintf_alloc(const char *format, ...)
{
return strdup(format);
}
int
spdk_json_write_named_uint32(struct spdk_json_write_ctx *w, const char *name, uint32_t val)
{
@@ -512,18 +444,6 @@ spdk_json_write_named_string(struct spdk_json_write_ctx *w, const char *name, co
return 0;
}
int
spdk_json_write_named_bool(struct spdk_json_write_ctx *w, const char *name, bool val)
{
if (!g_test_multi_raids) {
struct rpc_bdev_raid_create *req = g_rpc_req;
if (strcmp(name, "superblock") == 0) {
CU_ASSERT(val == req->superblock);
}
}
return 0;
}
void
spdk_bdev_free_io(struct spdk_bdev_io *bdev_io)
{
@@ -554,7 +474,7 @@ spdk_bdev_readv_blocks(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
child_io = calloc(1, sizeof(struct spdk_bdev_io));
SPDK_CU_ASSERT_FATAL(child_io != NULL);
child_io_complete(child_io, cb, cb_arg);
cb(child_io, g_child_io_status_flag, cb_arg);
}
return g_bdev_io_submit_status;
@@ -621,7 +541,6 @@ spdk_json_decode_object(const struct spdk_json_val *values,
SPDK_CU_ASSERT_FATAL(_out->name != NULL);
_out->strip_size_kb = req->strip_size_kb;
_out->level = req->level;
_out->superblock = req->superblock;
_out->base_bdevs.num_base_bdevs = req->base_bdevs.num_base_bdevs;
for (i = 0; i < req->base_bdevs.num_base_bdevs; i++) {
_out->base_bdevs.base_bdevs[i] = strdup(req->base_bdevs.base_bdevs[i]);
@@ -947,23 +866,18 @@ verify_raid_bdev(struct rpc_bdev_raid_create *r, bool presence, uint32_t raid_st
bdev = spdk_bdev_get_by_name(base_info->bdev->name);
CU_ASSERT(bdev != NULL);
CU_ASSERT(base_info->remove_scheduled == false);
CU_ASSERT((pbdev->sb != NULL && base_info->data_offset != 0) ||
(pbdev->sb == NULL && base_info->data_offset == 0));
CU_ASSERT(base_info->data_offset + base_info->data_size == bdev->blockcnt);
if (bdev && base_info->data_size < min_blockcnt) {
min_blockcnt = base_info->data_size;
if (bdev && bdev->blockcnt < min_blockcnt) {
min_blockcnt = bdev->blockcnt;
}
}
if (r->strip_size_kb > 0) {
CU_ASSERT((((min_blockcnt / (r->strip_size_kb * 1024 / g_block_len)) *
(r->strip_size_kb * 1024 / g_block_len)) *
r->base_bdevs.num_base_bdevs) == pbdev->bdev.blockcnt);
}
CU_ASSERT((((min_blockcnt / (r->strip_size_kb * 1024 / g_block_len)) *
(r->strip_size_kb * 1024 / g_block_len)) *
r->base_bdevs.num_base_bdevs) == pbdev->bdev.blockcnt);
CU_ASSERT(strcmp(pbdev->bdev.product_name, "Raid Volume") == 0);
CU_ASSERT(pbdev->bdev.write_cache == 0);
CU_ASSERT(pbdev->bdev.blocklen == g_block_len);
if (pbdev->num_base_bdevs > 1 && pbdev->level != RAID1) {
if (pbdev->num_base_bdevs > 1) {
CU_ASSERT(pbdev->bdev.optimal_io_boundary == pbdev->strip_size);
CU_ASSERT(pbdev->bdev.split_on_optimal_io_boundary == true);
} else {
@@ -1019,7 +933,6 @@ create_base_bdevs(uint32_t bbdev_start_idx)
base_bdev = calloc(1, sizeof(struct spdk_bdev));
SPDK_CU_ASSERT_FATAL(base_bdev != NULL);
base_bdev->name = strdup(name);
spdk_uuid_generate(&base_bdev->uuid);
SPDK_CU_ASSERT_FATAL(base_bdev->name != NULL);
base_bdev->blocklen = g_block_len;
base_bdev->blockcnt = BLOCK_CNT;
@@ -1029,8 +942,7 @@ create_base_bdevs(uint32_t bbdev_start_idx)
static void
create_test_req(struct rpc_bdev_raid_create *r, const char *raid_name,
uint8_t bbdev_start_idx, bool create_base_bdev, bool superblock,
uint8_t num_base_bdev_to_use)
uint8_t bbdev_start_idx, bool create_base_bdev)
{
uint8_t i;
char name[16];
@@ -1040,9 +952,8 @@ create_test_req(struct rpc_bdev_raid_create *r, const char *raid_name,
SPDK_CU_ASSERT_FATAL(r->name != NULL);
r->strip_size_kb = (g_strip_size * g_block_len) / 1024;
r->level = RAID0;
r->superblock = superblock;
r->base_bdevs.num_base_bdevs = num_base_bdev_to_use;
for (i = 0; i < num_base_bdev_to_use; i++, bbdev_idx++) {
r->base_bdevs.num_base_bdevs = g_max_base_drives;
for (i = 0; i < g_max_base_drives; i++, bbdev_idx++) {
snprintf(name, 16, "%s%u%s", "Nvme", bbdev_idx, "n1");
r->base_bdevs.base_bdevs[i] = strdup(name);
SPDK_CU_ASSERT_FATAL(r->base_bdevs.base_bdevs[i] != NULL);
@@ -1055,13 +966,11 @@ create_test_req(struct rpc_bdev_raid_create *r, const char *raid_name,
}
static void
_create_raid_bdev_create_req(struct rpc_bdev_raid_create *r, const char *raid_name,
uint8_t bbdev_start_idx, bool create_base_bdev,
uint8_t json_decode_obj_err, bool superblock,
uint8_t num_base_bdev_to_use)
create_raid_bdev_create_req(struct rpc_bdev_raid_create *r, const char *raid_name,
uint8_t bbdev_start_idx, bool create_base_bdev,
uint8_t json_decode_obj_err)
{
create_test_req(r, raid_name, bbdev_start_idx, create_base_bdev, superblock,
num_base_bdev_to_use);
create_test_req(r, raid_name, bbdev_start_idx, create_base_bdev);
g_rpc_err = 0;
g_json_decode_obj_create = 1;
@@ -1070,15 +979,6 @@ _create_raid_bdev_create_req(struct rpc_bdev_raid_create *r, const char *raid_na
g_test_multi_raids = 0;
}
static void
create_raid_bdev_create_req(struct rpc_bdev_raid_create *r, const char *raid_name,
uint8_t bbdev_start_idx, bool create_base_bdev,
uint8_t json_decode_obj_err, bool superblock)
{
_create_raid_bdev_create_req(r, raid_name, bbdev_start_idx, create_base_bdev,
json_decode_obj_err, superblock, g_max_base_drives);
}
static void
free_test_req(struct rpc_bdev_raid_create *r)
{
@@ -1133,7 +1033,7 @@ test_create_raid(void)
CU_ASSERT(raid_bdev_init() == 0);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&req, true, RAID_BDEV_STATE_ONLINE);
@@ -1157,7 +1057,7 @@ test_delete_raid(void)
CU_ASSERT(raid_bdev_init() == 0);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&construct_req, "raid1", 0, true, 0, false);
create_raid_bdev_create_req(&construct_req, "raid1", 0, true, 0);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&construct_req, true, RAID_BDEV_STATE_ONLINE);
@@ -1184,44 +1084,44 @@ test_create_raid_invalid_args(void)
CU_ASSERT(raid_bdev_init() == 0);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0);
req.level = INVALID_RAID_LEVEL;
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 1);
free_test_req(&req);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, false, 1, false);
create_raid_bdev_create_req(&req, "raid1", 0, false, 1);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 1);
free_test_req(&req);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, false, 0, false);
create_raid_bdev_create_req(&req, "raid1", 0, false, 0);
req.strip_size_kb = 1231;
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 1);
free_test_req(&req);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, false, 0, false);
create_raid_bdev_create_req(&req, "raid1", 0, false, 0);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&req, true, RAID_BDEV_STATE_ONLINE);
free_test_req(&req);
create_raid_bdev_create_req(&req, "raid1", 0, false, 0, false);
create_raid_bdev_create_req(&req, "raid1", 0, false, 0);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 1);
free_test_req(&req);
create_raid_bdev_create_req(&req, "raid2", 0, false, 0, false);
create_raid_bdev_create_req(&req, "raid2", 0, false, 0);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 1);
free_test_req(&req);
verify_raid_bdev_present("raid2", false);
create_raid_bdev_create_req(&req, "raid2", g_max_base_drives, true, 0, false);
create_raid_bdev_create_req(&req, "raid2", g_max_base_drives, true, 0);
free(req.base_bdevs.base_bdevs[g_max_base_drives - 1]);
req.base_bdevs.base_bdevs[g_max_base_drives - 1] = strdup("Nvme0n1");
SPDK_CU_ASSERT_FATAL(req.base_bdevs.base_bdevs[g_max_base_drives - 1] != NULL);
@@ -1230,7 +1130,7 @@ test_create_raid_invalid_args(void)
free_test_req(&req);
verify_raid_bdev_present("raid2", false);
create_raid_bdev_create_req(&req, "raid2", g_max_base_drives, true, 0, false);
create_raid_bdev_create_req(&req, "raid2", g_max_base_drives, true, 0);
free(req.base_bdevs.base_bdevs[g_max_base_drives - 1]);
req.base_bdevs.base_bdevs[g_max_base_drives - 1] = strdup("Nvme100000n1");
SPDK_CU_ASSERT_FATAL(req.base_bdevs.base_bdevs[g_max_base_drives - 1] != NULL);
@@ -1242,7 +1142,7 @@ test_create_raid_invalid_args(void)
SPDK_CU_ASSERT_FATAL(raid_bdev != NULL);
check_and_remove_raid_bdev(raid_bdev);
create_raid_bdev_create_req(&req, "raid2", g_max_base_drives, false, 0, false);
create_raid_bdev_create_req(&req, "raid2", g_max_base_drives, false, 0);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
free_test_req(&req);
@@ -1268,7 +1168,7 @@ test_delete_raid_invalid_args(void)
CU_ASSERT(raid_bdev_init() == 0);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&construct_req, "raid1", 0, true, 0, false);
create_raid_bdev_create_req(&construct_req, "raid1", 0, true, 0);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&construct_req, true, RAID_BDEV_STATE_ONLINE);
@@ -1306,7 +1206,7 @@ test_io_channel(void)
set_globals();
CU_ASSERT(raid_bdev_init() == 0);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0);
verify_raid_bdev_present("raid1", false);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
@@ -1358,7 +1258,7 @@ test_write_io(void)
set_globals();
CU_ASSERT(raid_bdev_init() == 0);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0);
verify_raid_bdev_present("raid1", false);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
@@ -1434,7 +1334,7 @@ test_read_io(void)
CU_ASSERT(raid_bdev_init() == 0);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&req, true, RAID_BDEV_STATE_ONLINE);
@@ -1584,7 +1484,7 @@ test_unmap_io(void)
CU_ASSERT(raid_bdev_init() == 0);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&req, true, RAID_BDEV_STATE_ONLINE);
@@ -1655,7 +1555,7 @@ test_io_failure(void)
CU_ASSERT(raid_bdev_init() == 0);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&req, true, RAID_BDEV_STATE_ONLINE);
@@ -1737,7 +1637,7 @@ test_reset_io(void)
CU_ASSERT(raid_bdev_init() == 0);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&req, true, RAID_BDEV_STATE_ONLINE);
@@ -1804,7 +1704,7 @@ test_multi_raid_no_io(void)
for (i = 0; i < g_max_raids; i++) {
snprintf(name, 16, "%s%u", "raid", i);
verify_raid_bdev_present(name, false);
create_raid_bdev_create_req(&construct_req[i], name, bbdev_idx, true, 0, false);
create_raid_bdev_create_req(&construct_req[i], name, bbdev_idx, true, 0);
bbdev_idx += g_max_base_drives;
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
@@ -1907,7 +1807,7 @@ test_multi_raid_with_io(void)
for (i = 0; i < g_max_raids; i++) {
snprintf(name, 16, "%s%u", "raid", i);
verify_raid_bdev_present(name, false);
create_raid_bdev_create_req(&construct_req[i], name, bbdev_idx, true, 0, false);
create_raid_bdev_create_req(&construct_req[i], name, bbdev_idx, true, 0);
bbdev_idx += g_max_base_drives;
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
@@ -1998,7 +1898,7 @@ test_raid_json_dump_info(void)
CU_ASSERT(raid_bdev_init() == 0);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&req, true, RAID_BDEV_STATE_ONLINE);
@@ -2048,377 +1948,6 @@ test_raid_level_conversions(void)
CU_ASSERT(raid_str != NULL && strcmp(raid_str, "raid0") == 0);
}
static void
test_create_raid_superblock(void)
{
struct rpc_bdev_raid_create req;
struct rpc_bdev_raid_delete delete_req;
set_globals();
CU_ASSERT(raid_bdev_init() == 0);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, true);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&req, true, RAID_BDEV_STATE_ONLINE);
free_test_req(&req);
create_raid_bdev_delete_req(&delete_req, "raid1", 0);
rpc_bdev_raid_delete(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
raid_bdev_exit();
base_bdevs_cleanup();
reset_globals();
}
static void
suspend_cb(struct raid_bdev *raid_bdev, void *ctx)
{
*(bool *)ctx = true;
}
static void
test_raid_suspend_resume(void)
{
struct rpc_bdev_raid_create req;
struct rpc_bdev_raid_delete destroy_req;
struct raid_bdev *pbdev;
struct spdk_io_channel *ch;
struct raid_bdev_io_channel *raid_ch;
struct spdk_bdev_io *bdev_io;
bool suspend_cb_called, suspend_cb_called2;
int rc;
set_globals();
CU_ASSERT(raid_bdev_init() == 0);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&req, true, RAID_BDEV_STATE_ONLINE);
TAILQ_FOREACH(pbdev, &g_raid_bdev_list, global_link) {
if (strcmp(pbdev->bdev.name, "raid1") == 0) {
break;
}
}
CU_ASSERT(pbdev != NULL);
/* suspend/resume with no io channels */
suspend_cb_called = false;
rc = raid_bdev_suspend(pbdev, suspend_cb, &suspend_cb_called);
SPDK_CU_ASSERT_FATAL(rc == 0);
poll_threads();
CU_ASSERT(suspend_cb_called == true);
raid_bdev_resume(pbdev, NULL, NULL);
poll_threads();
/* suspend/resume with one idle io channel */
ch = spdk_get_io_channel(pbdev);
SPDK_CU_ASSERT_FATAL(ch != NULL);
raid_ch = spdk_io_channel_get_ctx(ch);
suspend_cb_called = false;
rc = raid_bdev_suspend(pbdev, suspend_cb, &suspend_cb_called);
SPDK_CU_ASSERT_FATAL(rc == 0);
poll_threads();
CU_ASSERT(suspend_cb_called == true);
CU_ASSERT(raid_ch->is_suspended == true);
raid_bdev_resume(pbdev, NULL, NULL);
poll_threads();
CU_ASSERT(raid_ch->is_suspended == false);
/* suspend/resume multiple */
suspend_cb_called = false;
suspend_cb_called2 = false;
rc = raid_bdev_suspend(pbdev, suspend_cb, &suspend_cb_called);
SPDK_CU_ASSERT_FATAL(rc == 0);
rc = raid_bdev_suspend(pbdev, suspend_cb, &suspend_cb_called2);
SPDK_CU_ASSERT_FATAL(rc == 0);
poll_threads();
CU_ASSERT(suspend_cb_called == true);
CU_ASSERT(suspend_cb_called2 == true);
CU_ASSERT(raid_ch->is_suspended == true);
suspend_cb_called = false;
rc = raid_bdev_suspend(pbdev, suspend_cb, &suspend_cb_called);
SPDK_CU_ASSERT_FATAL(rc == 0);
CU_ASSERT(suspend_cb_called == true);
raid_bdev_resume(pbdev, NULL, NULL);
poll_threads();
CU_ASSERT(raid_ch->is_suspended == true);
raid_bdev_resume(pbdev, NULL, NULL);
poll_threads();
CU_ASSERT(raid_ch->is_suspended == true);
raid_bdev_resume(pbdev, NULL, NULL);
poll_threads();
CU_ASSERT(raid_ch->is_suspended == false);
/* suspend/resume with io before and after suspend */
bdev_io = calloc(1, sizeof(struct spdk_bdev_io) + sizeof(struct raid_bdev_io));
SPDK_CU_ASSERT_FATAL(bdev_io != NULL);
bdev_io_initialize(bdev_io, ch, &pbdev->bdev, 0, 1, SPDK_BDEV_IO_TYPE_READ);
memset(g_io_output, 0, ((g_max_io_size / g_strip_size) + 1) * sizeof(struct io_output));
g_io_output_index = 0;
g_bdev_io_defer_completion = true;
raid_bdev_submit_request(ch, bdev_io);
CU_ASSERT(raid_ch->num_ios == 1);
suspend_cb_called = false;
rc = raid_bdev_suspend(pbdev, suspend_cb, &suspend_cb_called);
SPDK_CU_ASSERT_FATAL(rc == 0);
poll_threads();
CU_ASSERT(raid_ch->is_suspended == true);
CU_ASSERT(suspend_cb_called == false);
complete_deferred_ios();
verify_io(bdev_io, req.base_bdevs.num_base_bdevs, raid_ch, pbdev, g_child_io_status_flag);
poll_threads();
CU_ASSERT(suspend_cb_called == true);
g_io_output_index = 0;
raid_bdev_submit_request(ch, bdev_io);
CU_ASSERT(raid_ch->num_ios == 0);
CU_ASSERT(TAILQ_FIRST(&raid_ch->suspended_ios) == (struct raid_bdev_io *)bdev_io->driver_ctx);
raid_bdev_resume(pbdev, NULL, NULL);
poll_threads();
CU_ASSERT(raid_ch->is_suspended == false);
verify_io(bdev_io, req.base_bdevs.num_base_bdevs, raid_ch, pbdev, g_child_io_status_flag);
bdev_io_cleanup(bdev_io);
spdk_put_io_channel(ch);
free_test_req(&req);
create_raid_bdev_delete_req(&destroy_req, "raid1", 0);
rpc_bdev_raid_delete(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev_present("raid1", false);
raid_bdev_exit();
base_bdevs_cleanup();
reset_globals();
}
static void
test_raid_suspend_resume_create_ch(void)
{
struct rpc_bdev_raid_create req;
struct rpc_bdev_raid_delete destroy_req;
struct raid_bdev *pbdev;
struct spdk_io_channel *ch1, *ch2;
struct raid_bdev_io_channel *raid_ch1, *raid_ch2;
bool suspend_cb_called;
int rc;
free_threads();
allocate_threads(3);
set_thread(0);
set_globals();
CU_ASSERT(raid_bdev_init() == 0);
g_per_thread_base_bdev_channels = calloc(3, sizeof(struct spdk_io_channel));
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false);
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&req, true, RAID_BDEV_STATE_ONLINE);
TAILQ_FOREACH(pbdev, &g_raid_bdev_list, global_link) {
if (strcmp(pbdev->bdev.name, "raid1") == 0) {
break;
}
}
CU_ASSERT(pbdev != NULL);
set_thread(1);
ch1 = spdk_get_io_channel(pbdev);
SPDK_CU_ASSERT_FATAL(ch1 != NULL);
raid_ch1 = spdk_io_channel_get_ctx(ch1);
/* create a new io channel during suspend */
set_thread(0);
suspend_cb_called = false;
rc = raid_bdev_suspend(pbdev, suspend_cb, &suspend_cb_called);
SPDK_CU_ASSERT_FATAL(rc == 0);
poll_thread(1);
CU_ASSERT(raid_ch1->is_suspended == true);
CU_ASSERT(suspend_cb_called == false);
set_thread(2);
ch2 = spdk_get_io_channel(pbdev);
SPDK_CU_ASSERT_FATAL(ch2 != NULL);
raid_ch2 = spdk_io_channel_get_ctx(ch2);
poll_threads();
CU_ASSERT(suspend_cb_called == true);
CU_ASSERT(raid_ch1->is_suspended == true);
CU_ASSERT(raid_ch2->is_suspended == true);
set_thread(0);
raid_bdev_resume(pbdev, NULL, NULL);
poll_threads();
CU_ASSERT(raid_ch1->is_suspended == false);
CU_ASSERT(raid_ch2->is_suspended == false);
set_thread(2);
spdk_put_io_channel(ch2);
poll_threads();
/* create a new io channel during resume */
set_thread(0);
suspend_cb_called = false;
rc = raid_bdev_suspend(pbdev, suspend_cb, &suspend_cb_called);
SPDK_CU_ASSERT_FATAL(rc == 0);
poll_threads();
CU_ASSERT(suspend_cb_called == true);
CU_ASSERT(raid_ch1->is_suspended == true);
raid_bdev_resume(pbdev, NULL, NULL);
set_thread(2);
ch2 = spdk_get_io_channel(pbdev);
SPDK_CU_ASSERT_FATAL(ch2 != NULL);
raid_ch2 = spdk_io_channel_get_ctx(ch2);
CU_ASSERT(raid_ch1->is_suspended == true);
CU_ASSERT(raid_ch2->is_suspended == false);
poll_threads();
CU_ASSERT(raid_ch1->is_suspended == false);
CU_ASSERT(raid_ch2->is_suspended == false);
set_thread(2);
spdk_put_io_channel(ch2);
poll_threads();
set_thread(1);
spdk_put_io_channel(ch1);
poll_threads();
set_thread(0);
free_test_req(&req);
create_raid_bdev_delete_req(&destroy_req, "raid1", 0);
rpc_bdev_raid_delete(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev_present("raid1", false);
raid_bdev_exit();
base_bdevs_cleanup();
reset_globals();
free_threads();
allocate_threads(1);
set_thread(0);
}
static void
test_raid_grow_base_bdev_not_supported(void)
{
struct rpc_bdev_raid_create req;
struct rpc_bdev_raid_delete destroy_req;
struct raid_bdev *pbdev;
int rc;
set_globals();
CU_ASSERT(raid_bdev_init() == 0);
verify_raid_bdev_present("raid1", false);
create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false);
req.level = RAID0;
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
TAILQ_FOREACH(pbdev, &g_raid_bdev_list, global_link) {
if (strcmp(pbdev->bdev.name, "raid1") == 0) {
break;
}
}
CU_ASSERT(pbdev != NULL);
/* Only the RAID1 level actually supports the grow base bdev operation */
rc = raid_bdev_grow_base_bdev(pbdev, "", NULL, NULL);
CU_ASSERT(rc == -EPERM);
free_test_req(&req);
create_raid_bdev_delete_req(&destroy_req, "raid1", 0);
rpc_bdev_raid_delete(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev_present("raid1", false);
raid_bdev_exit();
base_bdevs_cleanup();
reset_globals();
}
static void
grow_base_bdev_cb(void *cb_arg, int rc)
{
*(int *)cb_arg = rc;
}
static void
test_raid_grow_base_bdev(void)
{
struct rpc_bdev_raid_create req;
struct rpc_bdev_raid_delete destroy_req;
struct raid_bdev *pbdev;
char name[16];
struct spdk_io_channel *ch;
int grow_base_bdev_cb_output;
int rc;
set_globals();
CU_ASSERT(raid_bdev_init() == 0);
snprintf(name, 16, "%s%u%s", "Nvme", g_max_base_drives - 1, "n1");
/* Create a raid with RAID1 level */
verify_raid_bdev_present("raid1", false);
_create_raid_bdev_create_req(&req, "raid1", 0, true, 0, false,
g_max_base_drives - 1);
req.strip_size_kb = 0;
req.level = RAID1;
rpc_bdev_raid_create(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev(&req, true, RAID_BDEV_STATE_ONLINE);
TAILQ_FOREACH(pbdev, &g_raid_bdev_list, global_link) {
if (strcmp(pbdev->bdev.name, "raid1") == 0) {
break;
}
}
CU_ASSERT(pbdev != NULL);
/* Grow raid adding base bdev successfully */
ch = spdk_get_io_channel(pbdev);
SPDK_CU_ASSERT_FATAL(ch != NULL);
grow_base_bdev_cb_output = 1;
rc = raid_bdev_grow_base_bdev(pbdev, name, grow_base_bdev_cb,
&grow_base_bdev_cb_output);
CU_ASSERT(rc == 0);
/* Grow base bdev with another operation running */
rc = raid_bdev_grow_base_bdev(pbdev, name, NULL, NULL);
CU_ASSERT(rc == -EBUSY);
/* Check that new base bdev has been correctly added */
poll_threads();
CU_ASSERT(grow_base_bdev_cb_output == 0);
spdk_put_io_channel(ch);
free_test_req(&req);
create_raid_bdev_delete_req(&destroy_req, "raid1", 0);
rpc_bdev_raid_delete(NULL, NULL);
CU_ASSERT(g_rpc_err == 0);
verify_raid_bdev_present("raid1", false);
raid_bdev_exit();
base_bdevs_cleanup();
reset_globals();
}
int
main(int argc, char **argv)
{
@@ -2431,7 +1960,6 @@ main(int argc, char **argv)
suite = CU_add_suite("raid", NULL, NULL);
CU_ADD_TEST(suite, test_create_raid);
CU_ADD_TEST(suite, test_create_raid_superblock);
CU_ADD_TEST(suite, test_delete_raid);
CU_ADD_TEST(suite, test_create_raid_invalid_args);
CU_ADD_TEST(suite, test_delete_raid_invalid_args);
@@ -2447,10 +1975,6 @@ main(int argc, char **argv)
CU_ADD_TEST(suite, test_raid_json_dump_info);
CU_ADD_TEST(suite, test_context_size);
CU_ADD_TEST(suite, test_raid_level_conversions);
CU_ADD_TEST(suite, test_raid_suspend_resume);
CU_ADD_TEST(suite, test_raid_suspend_resume_create_ch);
CU_ADD_TEST(suite, test_raid_grow_base_bdev_not_supported);
CU_ADD_TEST(suite, test_raid_grow_base_bdev);
allocate_threads(1);
set_thread(0);

View File

@@ -1,10 +0,0 @@
# SPDX-License-Identifier: BSD-3-Clause
# Copyright (C) 2022 Intel Corporation.
# All rights reserved.
#
SPDK_ROOT_DIR := $(abspath $(CURDIR)/../../../../../..)
TEST_FILE = bdev_raid_sb_ut.c
include $(SPDK_ROOT_DIR)/mk/spdk.unittest.mk

View File

@@ -1,248 +0,0 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2022 Intel Corporation.
* All rights reserved.
*/
#include "spdk/stdinc.h"
#include "spdk_cunit.h"
#include "spdk/env.h"
#include "spdk_internal/mock.h"
#include "common/lib/test_env.c"
#include "bdev/raid/bdev_raid_sb.c"
#define TEST_BUF_ALIGN 64
#define TEST_BLOCK_SIZE 512
DEFINE_STUB(spdk_bdev_desc_get_bdev, struct spdk_bdev *, (struct spdk_bdev_desc *desc), NULL);
DEFINE_STUB(spdk_bdev_get_name, const char *, (const struct spdk_bdev *bdev), "test_bdev");
DEFINE_STUB(spdk_bdev_get_buf_align, size_t, (const struct spdk_bdev *bdev), TEST_BUF_ALIGN);
DEFINE_STUB(spdk_bdev_get_block_size, uint32_t, (const struct spdk_bdev *bdev), TEST_BLOCK_SIZE);
DEFINE_STUB_V(spdk_bdev_free_io, (struct spdk_bdev_io *g_bdev_io));
void *g_buf;
int g_read_counter;
static int
test_setup(void)
{
g_buf = spdk_dma_zmalloc(RAID_BDEV_SB_MAX_LENGTH, TEST_BUF_ALIGN, NULL);
if (!g_buf) {
return -ENOMEM;
}
return 0;
}
static int
test_cleanup(void)
{
spdk_dma_free(g_buf);
return 0;
}
int
spdk_bdev_read(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
void *buf, uint64_t offset, uint64_t nbytes,
spdk_bdev_io_completion_cb cb, void *cb_arg)
{
g_read_counter++;
memcpy(buf, g_buf + offset, nbytes);
cb(NULL, true, cb_arg);
return 0;
}
int
spdk_bdev_write(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
void *buf, uint64_t offset, uint64_t nbytes,
spdk_bdev_io_completion_cb cb, void *cb_arg)
{
struct raid_bdev_superblock *sb = buf;
CU_ASSERT(offset == 0);
CU_ASSERT(nbytes / TEST_BLOCK_SIZE == spdk_divide_round_up(sb->length, TEST_BLOCK_SIZE));
cb(NULL, true, cb_arg);
return 0;
}
static void
prepare_sb(struct raid_bdev_superblock *sb)
{
/* prepare the simplest valid sb */
memset(sb, 0, RAID_BDEV_SB_MAX_LENGTH);
memcpy(sb->signature, RAID_BDEV_SB_SIG, sizeof(sb->signature));
sb->version.major = RAID_BDEV_SB_VERSION_MAJOR;
sb->version.minor = RAID_BDEV_SB_VERSION_MINOR;
sb->length = sizeof(*sb);
sb->crc = spdk_crc32c_update(sb, sb->length, 0);
}
static void
save_sb_cb(int status, void *ctx)
{
int *status_out = ctx;
*status_out = status;
}
static void
test_raid_bdev_save_base_bdev_superblock(void)
{
struct raid_bdev_superblock *sb = g_buf;
int rc;
int status;
prepare_sb(sb);
status = INT_MAX;
rc = raid_bdev_save_base_bdev_superblock(NULL, NULL, sb, save_sb_cb, &status);
CU_ASSERT(rc == 0);
CU_ASSERT(status == 0);
}
static void
load_sb_cb(const struct raid_bdev_superblock *sb, int status, void *ctx)
{
int *status_out = ctx;
if (status == 0) {
CU_ASSERT(memcmp(sb, g_buf, sb->length) == 0);
}
*status_out = status;
}
static void
test_raid_bdev_load_base_bdev_superblock(void)
{
struct raid_bdev_superblock *sb = g_buf;
int rc;
int status;
/* valid superblock */
prepare_sb(sb);
g_read_counter = 0;
status = INT_MAX;
rc = raid_bdev_load_base_bdev_superblock(NULL, NULL, load_sb_cb, &status);
CU_ASSERT(rc == 0);
CU_ASSERT(status == 0);
CU_ASSERT(g_read_counter == 1);
/* invalid signature */
prepare_sb(sb);
sb->signature[3] = 'Z';
raid_bdev_sb_update_crc(sb);
g_read_counter = 0;
status = INT_MAX;
rc = raid_bdev_load_base_bdev_superblock(NULL, NULL, load_sb_cb, &status);
CU_ASSERT(rc == 0);
CU_ASSERT(status == -EINVAL);
CU_ASSERT(g_read_counter == 1);
/* make the sb longer than 1 bdev block - expect 2 reads */
prepare_sb(sb);
sb->length = TEST_BLOCK_SIZE * 3;
raid_bdev_sb_update_crc(sb);
g_read_counter = 0;
status = INT_MAX;
rc = raid_bdev_load_base_bdev_superblock(NULL, NULL, load_sb_cb, &status);
CU_ASSERT(rc == 0);
CU_ASSERT(status == 0);
CU_ASSERT(g_read_counter == 2);
/* corrupted sb contents, length > 1 bdev block - expect 2 reads */
prepare_sb(sb);
sb->length = TEST_BLOCK_SIZE * 3;
raid_bdev_sb_update_crc(sb);
sb->reserved[0] = 0xff;
g_read_counter = 0;
status = INT_MAX;
rc = raid_bdev_load_base_bdev_superblock(NULL, NULL, load_sb_cb, &status);
CU_ASSERT(rc == 0);
CU_ASSERT(status == -EINVAL);
CU_ASSERT(g_read_counter == 2);
/* invalid signature, length > 1 bdev block - expect 1 read */
prepare_sb(sb);
sb->signature[3] = 'Z';
sb->length = TEST_BLOCK_SIZE * 3;
raid_bdev_sb_update_crc(sb);
g_read_counter = 0;
status = INT_MAX;
rc = raid_bdev_load_base_bdev_superblock(NULL, NULL, load_sb_cb, &status);
CU_ASSERT(rc == 0);
CU_ASSERT(status == -EINVAL);
CU_ASSERT(g_read_counter == 1);
}
static void
test_raid_bdev_parse_superblock(void)
{
struct raid_bdev_superblock *sb = g_buf;
struct raid_bdev_read_sb_ctx ctx = {
.buf = g_buf,
.buf_size = TEST_BLOCK_SIZE,
};
/* valid superblock */
prepare_sb(sb);
CU_ASSERT(raid_bdev_parse_superblock(&ctx) == 0);
/* invalid signature */
prepare_sb(sb);
sb->signature[3] = 'Z';
raid_bdev_sb_update_crc(sb);
CU_ASSERT(raid_bdev_parse_superblock(&ctx) == -EINVAL);
/* invalid crc */
prepare_sb(sb);
sb->crc = 0xdeadbeef;
CU_ASSERT(raid_bdev_parse_superblock(&ctx) == -EINVAL);
/* corrupted sb contents */
prepare_sb(sb);
sb->reserved[0] = 0xff;
CU_ASSERT(raid_bdev_parse_superblock(&ctx) == -EINVAL);
/* invalid major version */
prepare_sb(sb);
sb->version.major = 9999;
raid_bdev_sb_update_crc(sb);
CU_ASSERT(raid_bdev_parse_superblock(&ctx) == -EINVAL);
/* sb longer than 1 bdev block */
prepare_sb(sb);
sb->length = TEST_BLOCK_SIZE * 3;
raid_bdev_sb_update_crc(sb);
CU_ASSERT(raid_bdev_parse_superblock(&ctx) == -EAGAIN);
ctx.buf_size = sb->length;
CU_ASSERT(raid_bdev_parse_superblock(&ctx) == 0);
}
int
main(int argc, char **argv)
{
CU_pSuite suite = NULL;
unsigned int num_failures;
CU_set_error_action(CUEA_ABORT);
CU_initialize_registry();
suite = CU_add_suite("raid_sb", test_setup, test_cleanup);
CU_ADD_TEST(suite, test_raid_bdev_save_base_bdev_superblock);
CU_ADD_TEST(suite, test_raid_bdev_load_base_bdev_superblock);
CU_ADD_TEST(suite, test_raid_bdev_parse_superblock);
CU_basic_set_mode(CU_BRM_VERBOSE);
CU_basic_run_tests();
num_failures = CU_get_number_of_failures();
CU_cleanup_registry();
return num_failures;
}


@@ -105,8 +105,6 @@ raid_test_create_raid_bdev(struct raid_params *params, struct raid_bdev_module *
base_info->bdev = bdev;
base_info->desc = desc;
base_info->data_offset = 0;
base_info->data_size = bdev->blockcnt;
}
raid_bdev->strip_size = params->strip_size;


@@ -6,16 +6,17 @@
#include "spdk/stdinc.h"
#include "spdk_cunit.h"
#include "spdk/env.h"
#include "common/lib/ut_multithread.c"
#include "spdk_internal/mock.h"
#include "bdev/raid/raid1.c"
#include "../common.c"
DEFINE_STUB_V(raid_bdev_module_list_add, (struct raid_bdev_module *raid_module));
DEFINE_STUB_V(raid_bdev_module_stop_done, (struct raid_bdev *raid_bdev));
DEFINE_STUB_V(raid_bdev_io_complete, (struct raid_bdev_io *raid_io,
enum spdk_bdev_io_status status));
DEFINE_STUB(raid_bdev_io_complete_part, bool, (struct raid_bdev_io *raid_io, uint64_t completed,
enum spdk_bdev_io_status status), true);
DEFINE_STUB_V(spdk_bdev_free_io, (struct spdk_bdev_io *bdev_io));
DEFINE_STUB_V(raid_bdev_queue_io_wait, (struct raid_bdev_io *raid_io, struct spdk_bdev *bdev,
struct spdk_io_channel *ch, spdk_bdev_io_wait_cb cb_fn));
DEFINE_STUB(spdk_bdev_readv_blocks_with_md, int, (struct spdk_bdev_desc *desc,
@@ -92,9 +93,9 @@ create_raid1(struct raid_params *params)
}
static void
delete_raid1(struct raid1_info *r1info)
delete_raid1(struct raid1_info *r1_info)
{
struct raid_bdev *raid_bdev = r1info->raid_bdev;
struct raid_bdev *raid_bdev = r1_info->raid_bdev;
raid1_stop(raid_bdev);
@@ -107,171 +108,20 @@ test_raid1_start(void)
struct raid_params *params;
RAID_PARAMS_FOR_EACH(params) {
struct raid1_info *r1info;
struct raid1_info *r1_info;
r1info = create_raid1(params);
r1_info = create_raid1(params);
SPDK_CU_ASSERT_FATAL(r1info != NULL);
SPDK_CU_ASSERT_FATAL(r1_info != NULL);
CU_ASSERT_EQUAL(r1info->raid_bdev->level, RAID1);
CU_ASSERT_EQUAL(r1info->raid_bdev->bdev.blockcnt, params->base_bdev_blockcnt);
CU_ASSERT_PTR_EQUAL(r1info->raid_bdev->module, &g_raid1_module);
CU_ASSERT_EQUAL(r1_info->raid_bdev->level, RAID1);
CU_ASSERT_EQUAL(r1_info->raid_bdev->bdev.blockcnt, params->base_bdev_blockcnt);
CU_ASSERT_PTR_EQUAL(r1_info->raid_bdev->module, &g_raid1_module);
delete_raid1(r1info);
delete_raid1(r1_info);
}
}
void
spdk_bdev_free_io(struct spdk_bdev_io *bdev_io)
{
free(bdev_io);
}
void
raid_bdev_io_complete(struct raid_bdev_io *raid_io, enum spdk_bdev_io_status status)
{
struct spdk_bdev_io *bdev_io = spdk_bdev_io_from_ctx(raid_io);
spdk_bdev_free_io(bdev_io);
}
static struct raid_bdev_io *
get_raid_io(struct raid1_info *r1info, struct raid_bdev_io_channel *raid_ch,
enum spdk_bdev_io_type io_type, uint64_t num_blocks)
{
struct spdk_bdev_io *bdev_io;
struct raid_bdev_io *raid_io;
bdev_io = calloc(1, sizeof(struct spdk_bdev_io) + sizeof(struct raid_bdev_io));
SPDK_CU_ASSERT_FATAL(bdev_io != NULL);
bdev_io->bdev = &r1info->raid_bdev->bdev;
bdev_io->type = io_type;
bdev_io->u.bdev.offset_blocks = 0;
bdev_io->u.bdev.num_blocks = num_blocks;
bdev_io->internal.cb = NULL;
bdev_io->internal.caller_ctx = NULL;
raid_io = (void *)bdev_io->driver_ctx;
raid_io->raid_bdev = r1info->raid_bdev;
raid_io->raid_ch = raid_ch;
return raid_io;
}
static void
run_for_each_raid1_config(void (*test_fn)(struct raid_bdev *raid_bdev,
struct raid_bdev_io_channel *raid_ch))
{
struct raid_params *params;
RAID_PARAMS_FOR_EACH(params) {
struct raid1_info *r1info;
struct raid_bdev_io_channel raid_ch = { 0 };
int i;
r1info = create_raid1(params);
raid_ch.num_channels = params->num_base_bdevs;
raid_ch.base_channel = calloc(params->num_base_bdevs, sizeof(struct spdk_io_channel *));
SPDK_CU_ASSERT_FATAL(raid_ch.base_channel != NULL);
for (i = 0; i < raid_ch.num_channels; i++) {
raid_ch.base_channel[i] = calloc(1, sizeof(*raid_ch.base_channel));
}
raid_ch.module_channel = raid1_get_io_channel(r1info->raid_bdev);
SPDK_CU_ASSERT_FATAL(raid_ch.module_channel);
test_fn(r1info->raid_bdev, &raid_ch);
spdk_put_io_channel(raid_ch.module_channel);
poll_threads();
for (i = 0; i < raid_ch.num_channels; i++) {
free(raid_ch.base_channel[i]);
}
free(raid_ch.base_channel);
delete_raid1(r1info);
}
}
static void
__test_raid1_read_balancing(struct raid_bdev *raid_bdev, struct raid_bdev_io_channel *raid_ch)
{
struct raid1_info *r1info = raid_bdev->module_private;
struct raid_bdev_io *raid_io;
struct raid1_io_channel *raid1_ch = spdk_io_channel_get_ctx(raid_ch->module_channel);
uint8_t overloaded_ch_idx = 0;
uint64_t big_io_blocks = 256;
uint64_t small_io_blocks = 4;
bool found_greater = false;
raid_io = get_raid_io(r1info, raid_ch, SPDK_BDEV_IO_TYPE_READ, big_io_blocks);
raid1_submit_rw_request(raid_io);
raid_bdev_io_complete(raid_io, SPDK_BDEV_IO_STATUS_SUCCESS);
overloaded_ch_idx = raid1_ch->base_bdev_read_idx;
do {
raid_io = get_raid_io(r1info, raid_ch, SPDK_BDEV_IO_TYPE_READ, small_io_blocks);
raid1_submit_rw_request(raid_io);
raid_bdev_io_complete(raid_io, SPDK_BDEV_IO_STATUS_SUCCESS);
} while (raid1_ch->base_bdev_read_idx != overloaded_ch_idx);
for (uint8_t i = 0; i < raid_ch->num_channels; i++) {
if (i == overloaded_ch_idx) {
continue;
}
if (raid1_ch->base_bdev_read_bw[i] >= raid1_ch->base_bdev_read_bw[overloaded_ch_idx] -
small_io_blocks) {
found_greater = true;
break;
}
}
CU_ASSERT_TRUE(found_greater);
}
static void
test_raid1_read_balancing(void)
{
run_for_each_raid1_config(__test_raid1_read_balancing);
}
static void
__test_raid1_read_balancing_limit_reset(struct raid_bdev *raid_bdev,
struct raid_bdev_io_channel *raid_ch)
{
struct raid1_info *r1info = raid_bdev->module_private;
struct raid_bdev_io *raid_io;
struct raid1_io_channel *raid1_ch = spdk_io_channel_get_ctx(raid_ch->module_channel);
uint64_t read_io_blocks = 64;
raid1_ch->base_bdev_max_read_bw = UINT64_MAX - (read_io_blocks / 2);
for (uint8_t i = 0; i < raid_ch->num_channels; i++) {
raid1_ch->base_bdev_read_bw[i] = UINT64_MAX - (read_io_blocks / 2);
}
raid_io = get_raid_io(r1info, raid_ch, SPDK_BDEV_IO_TYPE_READ, read_io_blocks);
raid1_submit_rw_request(raid_io);
raid_bdev_io_complete(raid_io, SPDK_BDEV_IO_STATUS_SUCCESS);
for (uint8_t i = 0; i < raid_ch->num_channels; i++) {
if (i == raid1_ch->base_bdev_read_idx) {
continue;
}
CU_ASSERT_EQUAL(raid1_ch->base_bdev_read_bw[i], 0);
}
}
static void
test_raid1_read_balancing_limit_reset(void)
{
run_for_each_raid1_config(__test_raid1_read_balancing_limit_reset);
}
int
main(int argc, char **argv)
{
@@ -283,11 +133,6 @@ main(int argc, char **argv)
suite = CU_add_suite("raid1", test_setup, test_cleanup);
CU_ADD_TEST(suite, test_raid1_start);
CU_ADD_TEST(suite, test_raid1_read_balancing);
CU_ADD_TEST(suite, test_raid1_read_balancing_limit_reset);
allocate_threads(1);
set_thread(0);
CU_basic_set_mode(CU_BRM_VERBOSE);
CU_basic_run_tests();


@@ -6,27 +6,15 @@
#include "spdk/stdinc.h"
#include "spdk_cunit.h"
#include "spdk/env.h"
#include "spdk/xor.h"
#include "common/lib/ut_multithread.c"
#include "bdev/raid/raid5f.c"
#include "../common.c"
static void *g_accel_p = (void *)0xdeadbeaf;
static bool g_test_degraded;
DEFINE_STUB_V(raid_bdev_module_list_add, (struct raid_bdev_module *raid_module));
DEFINE_STUB(spdk_bdev_get_buf_align, size_t, (const struct spdk_bdev *bdev), 0);
DEFINE_STUB_V(raid_bdev_module_stop_done, (struct raid_bdev *raid_bdev));
DEFINE_STUB(accel_channel_create, int, (void *io_device, void *ctx_buf), 0);
DEFINE_STUB_V(accel_channel_destroy, (void *io_device, void *ctx_buf));
struct spdk_io_channel *
spdk_accel_get_io_channel(void)
{
return spdk_get_io_channel(g_accel_p);
}
void *
spdk_bdev_io_get_md_buf(struct spdk_bdev_io *bdev_io)
@@ -40,38 +28,6 @@ spdk_bdev_get_md_size(const struct spdk_bdev *bdev)
return bdev->md_len;
}
struct xor_ctx {
spdk_accel_completion_cb cb_fn;
void *cb_arg;
};
static void
finish_xor(void *_ctx)
{
struct xor_ctx *ctx = _ctx;
ctx->cb_fn(ctx->cb_arg, 0);
free(ctx);
}
int
spdk_accel_submit_xor(struct spdk_io_channel *ch, void *dst, void **sources, uint32_t nsrcs,
uint64_t nbytes, spdk_accel_completion_cb cb_fn, void *cb_arg)
{
struct xor_ctx *ctx;
ctx = malloc(sizeof(*ctx));
SPDK_CU_ASSERT_FATAL(ctx != NULL);
ctx->cb_fn = cb_fn;
ctx->cb_arg = cb_arg;
SPDK_CU_ASSERT_FATAL(spdk_xor_gen(dst, sources, nsrcs, nbytes) == 0);
spdk_thread_send_msg(spdk_get_thread(), finish_xor, ctx);
return 0;
}
void
raid_bdev_io_complete(struct raid_bdev_io *raid_io, enum spdk_bdev_io_status status)
{
@@ -101,21 +57,8 @@ raid_bdev_io_complete_part(struct raid_bdev_io *raid_io, uint64_t completed,
}
}
static void
init_accel(void)
{
spdk_io_device_register(g_accel_p, accel_channel_create, accel_channel_destroy,
sizeof(int), "accel_p");
}
static void
fini_accel(void)
{
spdk_io_device_unregister(g_accel_p, NULL);
}
static int
test_suite_init(void)
test_setup(void)
{
uint8_t num_base_bdevs_values[] = { 3, 4, 5 };
uint64_t base_bdev_blockcnt_values[] = { 1, 1024, 1024 * 1024 };
@@ -162,25 +105,16 @@
}
}
init_accel();
return 0;
}
static int
test_suite_cleanup(void)
test_cleanup(void)
{
fini_accel();
raid_test_params_free();
return 0;
}
static void
test_setup(void)
{
g_test_degraded = false;
}
static struct raid5f_info *
create_raid5f(struct raid_params *params)
{
@@ -237,9 +171,7 @@ struct raid_io_info {
struct raid5f_info *r5f_info;
struct raid_bdev_io_channel *raid_ch;
enum spdk_bdev_io_type io_type;
uint64_t stripe_index;
uint64_t offset_blocks;
uint64_t stripe_offset_blocks;
uint64_t num_blocks;
void *src_buf;
void *dest_buf;
@@ -252,9 +184,9 @@ struct raid_io_info {
void *parity_md_buf;
void *reference_md_parity;
size_t parity_md_buf_size;
void *degraded_buf;
void *degraded_md_buf;
enum spdk_bdev_io_status status;
bool failed;
int remaining;
TAILQ_HEAD(, spdk_bdev_io) bdev_io_queue;
TAILQ_HEAD(, spdk_bdev_io_wait_entry) bdev_io_wait_queue;
struct {
@@ -294,20 +226,31 @@ raid_bdev_io_completion_cb(struct spdk_bdev_io *bdev_io, bool success, void *cb_
spdk_bdev_free_io(bdev_io);
if (!success) {
io_info->status = SPDK_BDEV_IO_STATUS_FAILED;
} else {
io_info->status = SPDK_BDEV_IO_STATUS_SUCCESS;
io_info->failed = true;
}
if (--io_info->remaining == 0) {
if (io_info->failed) {
io_info->status = SPDK_BDEV_IO_STATUS_FAILED;
} else {
io_info->status = SPDK_BDEV_IO_STATUS_SUCCESS;
}
}
}
static struct raid_bdev_io *
get_raid_io(struct raid_io_info *io_info)
get_raid_io(struct raid_io_info *io_info, uint64_t offset_blocks_split, uint64_t num_blocks)
{
struct spdk_bdev_io *bdev_io;
struct raid_bdev_io *raid_io;
struct raid_bdev *raid_bdev = io_info->r5f_info->raid_bdev;
uint32_t blocklen = raid_bdev->bdev.blocklen;
struct test_raid_bdev_io *test_raid_bdev_io;
void *src_buf = io_info->src_buf + offset_blocks_split * blocklen;
void *dest_buf = io_info->dest_buf + offset_blocks_split * blocklen;
void *src_md_buf = io_info->src_md_buf + offset_blocks_split * raid_bdev->bdev.md_len;
void *dest_md_buf = io_info->dest_md_buf + offset_blocks_split * raid_bdev->bdev.md_len;
test_raid_bdev_io = calloc(1, sizeof(*test_raid_bdev_io));
SPDK_CU_ASSERT_FATAL(test_raid_bdev_io != NULL);
@@ -316,8 +259,8 @@ get_raid_io(struct raid_io_info *io_info)
bdev_io = (struct spdk_bdev_io *)test_raid_bdev_io->bdev_io_buf;
bdev_io->bdev = &raid_bdev->bdev;
bdev_io->type = io_info->io_type;
bdev_io->u.bdev.offset_blocks = io_info->offset_blocks;
bdev_io->u.bdev.num_blocks = io_info->num_blocks;
bdev_io->u.bdev.offset_blocks = io_info->offset_blocks + offset_blocks_split;
bdev_io->u.bdev.num_blocks = num_blocks;
bdev_io->internal.cb = raid_bdev_io_completion_cb;
bdev_io->internal.caller_ctx = io_info;
@@ -329,20 +272,22 @@ get_raid_io(struct raid_io_info *io_info)
test_raid_bdev_io->io_info = io_info;
if (io_info->io_type == SPDK_BDEV_IO_TYPE_READ) {
test_raid_bdev_io->buf = io_info->src_buf;
test_raid_bdev_io->buf_md = io_info->src_md_buf;
bdev_io->iov.iov_base = io_info->dest_buf;
bdev_io->u.bdev.md_buf = io_info->dest_md_buf;
test_raid_bdev_io->buf = src_buf;
test_raid_bdev_io->buf_md = src_md_buf;
bdev_io->u.bdev.md_buf = dest_md_buf;
bdev_io->iov.iov_base = dest_buf;
} else {
test_raid_bdev_io->buf = io_info->dest_buf;
test_raid_bdev_io->buf_md = io_info->dest_md_buf;
bdev_io->iov.iov_base = io_info->src_buf;
bdev_io->u.bdev.md_buf = io_info->src_md_buf;
test_raid_bdev_io->buf = dest_buf;
test_raid_bdev_io->buf_md = dest_md_buf;
bdev_io->u.bdev.md_buf = src_md_buf;
bdev_io->iov.iov_base = src_buf;
}
bdev_io->u.bdev.iovs = &bdev_io->iov;
bdev_io->u.bdev.iovcnt = 1;
bdev_io->iov.iov_len = io_info->num_blocks * blocklen;
bdev_io->iov.iov_len = num_blocks * blocklen;
io_info->remaining++;
return raid_io;
}
@@ -398,8 +343,6 @@ process_io_completions(struct raid_io_info *io_info)
bdev_io->internal.cb(bdev_io, success, bdev_io->internal.caller_ctx);
}
poll_threads();
if (io_info->error.type == TEST_BDEV_ERROR_NOMEM) {
struct spdk_bdev_io_wait_entry *waitq_entry, *tmp;
struct spdk_bdev *enomem_bdev = io_info->error.bdev;
@@ -435,30 +378,41 @@ spdk_bdev_writev_blocks_with_md(struct spdk_bdev_desc *desc, struct spdk_io_chan
struct test_raid_bdev_io *test_raid_bdev_io;
struct raid_io_info *io_info;
struct raid_bdev *raid_bdev;
uint64_t stripe_idx_off;
uint8_t data_chunk_idx;
uint64_t data_offset;
void *dest_buf, *dest_md_buf;
SPDK_CU_ASSERT_FATAL(cb == raid5f_chunk_complete_bdev_io);
SPDK_CU_ASSERT_FATAL(cb == raid5f_chunk_write_complete_bdev_io);
SPDK_CU_ASSERT_FATAL(iovcnt == 1);
stripe_req = raid5f_chunk_stripe_req(chunk);
test_raid_bdev_io = (struct test_raid_bdev_io *)spdk_bdev_io_from_ctx(stripe_req->raid_io);
io_info = test_raid_bdev_io->io_info;
raid_bdev = io_info->r5f_info->raid_bdev;
stripe_idx_off = offset_blocks / raid_bdev->strip_size -
io_info->offset_blocks / io_info->r5f_info->stripe_blocks;
if (chunk == stripe_req->parity_chunk) {
if (io_info->parity_buf == NULL) {
goto submit;
}
dest_buf = io_info->parity_buf;
data_offset = stripe_idx_off * raid_bdev->strip_size_kb * 1024;
dest_buf = io_info->parity_buf + data_offset;
if (md_buf != NULL) {
dest_md_buf = io_info->parity_md_buf;
data_offset = DATA_OFFSET_TO_MD_OFFSET(raid_bdev, data_offset);
dest_md_buf = io_info->parity_md_buf + data_offset;
}
} else {
data_chunk_idx = chunk < stripe_req->parity_chunk ? chunk->index : chunk->index - 1;
data_offset = data_chunk_idx * raid_bdev->strip_size * raid_bdev->bdev.blocklen;
data_offset = (stripe_idx_off * io_info->r5f_info->stripe_blocks +
data_chunk_idx * raid_bdev->strip_size) *
raid_bdev->bdev.blocklen;
dest_buf = test_raid_bdev_io->buf + data_offset;
if (md_buf != NULL) {
data_offset = DATA_OFFSET_TO_MD_OFFSET(raid_bdev, data_offset);
dest_md_buf = test_raid_bdev_io->buf_md + data_offset;
@@ -474,44 +428,6 @@ submit:
return submit_io(io_info, desc, cb, cb_arg);
}
static int
spdk_bdev_readv_blocks_degraded(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
struct iovec *iov, int iovcnt,
uint64_t offset_blocks, uint64_t num_blocks,
spdk_bdev_io_completion_cb cb, void *cb_arg)
{
struct chunk *chunk = cb_arg;
struct stripe_request *stripe_req;
struct test_raid_bdev_io *test_raid_bdev_io;
struct raid_io_info *io_info;
struct raid_bdev *raid_bdev;
uint8_t data_chunk_idx;
void *buf;
SPDK_CU_ASSERT_FATAL(cb == raid5f_chunk_complete_bdev_io);
SPDK_CU_ASSERT_FATAL(iovcnt == 1);
stripe_req = raid5f_chunk_stripe_req(chunk);
test_raid_bdev_io = (struct test_raid_bdev_io *)spdk_bdev_io_from_ctx(stripe_req->raid_io);
io_info = test_raid_bdev_io->io_info;
raid_bdev = io_info->r5f_info->raid_bdev;
if (chunk == stripe_req->parity_chunk) {
buf = io_info->reference_parity;
} else {
data_chunk_idx = chunk < stripe_req->parity_chunk ? chunk->index : chunk->index - 1;
buf = io_info->degraded_buf +
data_chunk_idx * raid_bdev->strip_size * raid_bdev->bdev.blocklen;
}
buf += (offset_blocks % raid_bdev->strip_size) * raid_bdev->bdev.blocklen;
SPDK_CU_ASSERT_FATAL(num_blocks * raid_bdev->bdev.blocklen <= iov->iov_len);
memcpy(iov->iov_base, buf, num_blocks * raid_bdev->bdev.blocklen);
return submit_io(io_info, desc, cb, cb_arg);
}
int
spdk_bdev_writev_blocks(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
struct iovec *iov, int iovcnt,
@@ -544,16 +460,11 @@ spdk_bdev_readv_blocks_with_md(struct spdk_bdev_desc *desc, struct spdk_io_chann
struct raid_bdev_io *raid_io = cb_arg;
struct test_raid_bdev_io *test_raid_bdev_io;
if (cb == raid5f_chunk_complete_bdev_io) {
return spdk_bdev_readv_blocks_degraded(desc, ch, iov, iovcnt, offset_blocks, num_blocks, cb,
cb_arg);
}
test_raid_bdev_io = (struct test_raid_bdev_io *)spdk_bdev_io_from_ctx(raid_io);
SPDK_CU_ASSERT_FATAL(cb == raid5f_chunk_read_complete);
SPDK_CU_ASSERT_FATAL(iovcnt == 1);
test_raid_bdev_io = (struct test_raid_bdev_io *)spdk_bdev_io_from_ctx(raid_io);
memcpy(iov->iov_base, test_raid_bdev_io->buf, iov->iov_len);
if (md_buf != NULL) {
memcpy(md_buf, test_raid_bdev_io->buf_md, DATA_OFFSET_TO_MD_OFFSET(raid_io->raid_bdev,
@@ -601,45 +512,12 @@ test_raid5f_write_request(struct raid_io_info *io_info)
SPDK_CU_ASSERT_FATAL(io_info->num_blocks / io_info->r5f_info->stripe_blocks == 1);
raid_io = get_raid_io(io_info);
raid_io = get_raid_io(io_info, 0, io_info->num_blocks);
raid5f_submit_rw_request(raid_io);
poll_threads();
process_io_completions(io_info);
if (g_test_degraded) {
struct raid_bdev *raid_bdev = io_info->r5f_info->raid_bdev;
uint8_t p_idx;
uint8_t i;
off_t offset;
uint32_t strip_len;
for (i = 0; i < raid_bdev->num_base_bdevs; i++) {
if (io_info->raid_ch->base_channel[i] == NULL) {
break;
}
}
SPDK_CU_ASSERT_FATAL(i != raid_bdev->num_base_bdevs);
p_idx = raid5f_stripe_parity_chunk_index(raid_bdev, io_info->stripe_index);
if (i == p_idx) {
return;
}
if (i >= p_idx) {
i--;
}
strip_len = raid_bdev->strip_size_kb * 1024;
offset = i * strip_len;
memcpy(io_info->dest_buf + offset, io_info->src_buf + offset, strip_len);
}
if (io_info->status == SPDK_BDEV_IO_STATUS_SUCCESS) {
if (io_info->parity_buf) {
CU_ASSERT(memcmp(io_info->parity_buf, io_info->reference_parity,
@@ -655,13 +533,22 @@ test_raid5f_write_request(struct raid_io_info *io_info)
static void
test_raid5f_read_request(struct raid_io_info *io_info)
{
struct raid_bdev_io *raid_io;
uint32_t strip_size = io_info->r5f_info->raid_bdev->strip_size;
uint64_t num_blocks = io_info->num_blocks;
uint64_t offset_blocks_split = 0;
SPDK_CU_ASSERT_FATAL(io_info->num_blocks <= io_info->r5f_info->raid_bdev->strip_size);
while (num_blocks) {
uint64_t chunk_offset = offset_blocks_split % strip_size;
uint64_t num_blocks_split = spdk_min(num_blocks, strip_size - chunk_offset);
struct raid_bdev_io *raid_io;
raid_io = get_raid_io(io_info);
raid_io = get_raid_io(io_info, offset_blocks_split, num_blocks_split);
raid5f_submit_rw_request(raid_io);
raid5f_submit_rw_request(raid_io);
num_blocks -= num_blocks_split;
offset_blocks_split += num_blocks_split;
}
process_io_completions(io_info);
}
@@ -677,14 +564,12 @@ deinit_io_info(struct raid_io_info *io_info)
free(io_info->reference_parity);
free(io_info->parity_md_buf);
free(io_info->reference_md_parity);
free(io_info->degraded_buf);
free(io_info->degraded_md_buf);
}
static void
init_io_info(struct raid_io_info *io_info, struct raid5f_info *r5f_info,
struct raid_bdev_io_channel *raid_ch, enum spdk_bdev_io_type io_type,
uint64_t stripe_index, uint64_t stripe_offset_blocks, uint64_t num_blocks)
uint64_t offset_blocks, uint64_t num_blocks)
{
struct raid_bdev *raid_bdev = r5f_info->raid_bdev;
uint32_t blocklen = raid_bdev->bdev.blocklen;
@@ -695,8 +580,6 @@ init_io_info(struct raid_io_info *io_info, struct raid5f_info *r5f_info,
uint64_t block;
uint64_t i;
SPDK_CU_ASSERT_FATAL(stripe_offset_blocks < r5f_info->stripe_blocks);
memset(io_info, 0, sizeof(*io_info));
if (buf_size) {
@@ -734,9 +617,7 @@ init_io_info(struct raid_io_info *io_info, struct raid5f_info *r5f_info,
io_info->r5f_info = r5f_info;
io_info->raid_ch = raid_ch;
io_info->io_type = io_type;
io_info->stripe_index = stripe_index;
io_info->offset_blocks = stripe_index * r5f_info->stripe_blocks + stripe_offset_blocks;
io_info->stripe_offset_blocks = stripe_offset_blocks;
io_info->offset_blocks = offset_blocks;
io_info->num_blocks = num_blocks;
io_info->src_buf = src_buf;
io_info->dest_buf = dest_buf;
@@ -750,79 +631,49 @@ init_io_info(struct raid_io_info *io_info, struct raid5f_info *r5f_info,
}
static void
io_info_setup_parity(struct raid_io_info *io_info, void *src, void *src_md)
io_info_setup_parity(struct raid_io_info *io_info)
{
struct raid5f_info *r5f_info = io_info->r5f_info;
struct raid_bdev *raid_bdev = r5f_info->raid_bdev;
uint32_t blocklen = raid_bdev->bdev.blocklen;
uint64_t num_stripes = io_info->num_blocks / r5f_info->stripe_blocks;
size_t strip_len = raid_bdev->strip_size * blocklen;
unsigned i;
size_t strip_md_len = raid_bdev->strip_size * raid_bdev->bdev.md_len;
void *src = io_info->src_buf;
void *dest;
unsigned i, j;
io_info->parity_buf_size = strip_len;
io_info->parity_buf_size = num_stripes * strip_len;
io_info->parity_buf = calloc(1, io_info->parity_buf_size);
SPDK_CU_ASSERT_FATAL(io_info->parity_buf != NULL);
io_info->reference_parity = calloc(1, io_info->parity_buf_size);
SPDK_CU_ASSERT_FATAL(io_info->reference_parity != NULL);
for (i = 0; i < raid5f_stripe_data_chunks_num(raid_bdev); i++) {
xor_block(io_info->reference_parity, src, strip_len);
src += strip_len;
}
if (src_md) {
size_t strip_md_len = raid_bdev->strip_size * raid_bdev->bdev.md_len;
io_info->parity_md_buf_size = strip_md_len;
io_info->parity_md_buf = calloc(1, io_info->parity_md_buf_size);
SPDK_CU_ASSERT_FATAL(io_info->parity_md_buf != NULL);
io_info->reference_md_parity = calloc(1, io_info->parity_md_buf_size);
SPDK_CU_ASSERT_FATAL(io_info->reference_md_parity != NULL);
for (i = 0; i < raid5f_stripe_data_chunks_num(raid_bdev); i++) {
xor_block(io_info->reference_md_parity, src_md, strip_md_len);
src_md += strip_md_len;
dest = io_info->reference_parity;
for (i = 0; i < num_stripes; i++) {
for (j = 0; j < raid5f_stripe_data_chunks_num(raid_bdev); j++) {
xor_block(dest, src, strip_len);
src += strip_len;
}
}
}
static void
io_info_setup_degraded(struct raid_io_info *io_info)
{
struct raid5f_info *r5f_info = io_info->r5f_info;
struct raid_bdev *raid_bdev = r5f_info->raid_bdev;
uint32_t blocklen = raid_bdev->bdev.blocklen;
uint32_t md_len = raid_bdev->bdev.md_len;
size_t stripe_len = r5f_info->stripe_blocks * blocklen;
size_t stripe_md_len = r5f_info->stripe_blocks * md_len;
io_info->degraded_buf = malloc(stripe_len);
SPDK_CU_ASSERT_FATAL(io_info->degraded_buf != NULL);
memset(io_info->degraded_buf, 0xab, stripe_len);
memcpy(io_info->degraded_buf + io_info->stripe_offset_blocks * blocklen,
io_info->src_buf, io_info->num_blocks * blocklen);
if (stripe_md_len != 0) {
io_info->degraded_md_buf = malloc(stripe_md_len);
SPDK_CU_ASSERT_FATAL(io_info->degraded_md_buf != NULL);
memset(io_info->degraded_md_buf, 0xab, stripe_md_len);
memcpy(io_info->degraded_md_buf + io_info->stripe_offset_blocks * md_len,
io_info->src_md_buf, io_info->num_blocks * md_len);
dest += strip_len;
}
io_info_setup_parity(io_info, io_info->degraded_buf, io_info->degraded_md_buf);
io_info->parity_md_buf_size = num_stripes * strip_md_len;
io_info->parity_md_buf = calloc(1, io_info->parity_md_buf_size);
SPDK_CU_ASSERT_FATAL(io_info->parity_md_buf != NULL);
memset(io_info->degraded_buf + io_info->stripe_offset_blocks * blocklen,
0xcd, io_info->num_blocks * blocklen);
io_info->reference_md_parity = calloc(1, io_info->parity_md_buf_size);
SPDK_CU_ASSERT_FATAL(io_info->reference_md_parity != NULL);
if (stripe_md_len != 0) {
memset(io_info->degraded_md_buf + io_info->stripe_offset_blocks * md_len,
0xcd, io_info->num_blocks * md_len);
src = io_info->src_md_buf;
dest = io_info->reference_md_parity;
for (i = 0; i < num_stripes; i++) {
for (j = 0; j < raid5f_stripe_data_chunks_num(raid_bdev); j++) {
xor_block(dest, src, strip_md_len);
src += strip_md_len;
}
dest += strip_md_len;
}
}
@@ -831,19 +682,17 @@ test_raid5f_submit_rw_request(struct raid5f_info *r5f_info, struct raid_bdev_io_
enum spdk_bdev_io_type io_type, uint64_t stripe_index, uint64_t stripe_offset_blocks,
uint64_t num_blocks)
{
uint64_t offset_blocks = stripe_index * r5f_info->stripe_blocks + stripe_offset_blocks;
struct raid_io_info io_info;
init_io_info(&io_info, r5f_info, raid_ch, io_type, stripe_index, stripe_offset_blocks, num_blocks);
init_io_info(&io_info, r5f_info, raid_ch, io_type, offset_blocks, num_blocks);
switch (io_type) {
case SPDK_BDEV_IO_TYPE_READ:
if (g_test_degraded) {
io_info_setup_degraded(&io_info);
}
test_raid5f_read_request(&io_info);
break;
case SPDK_BDEV_IO_TYPE_WRITE:
io_info_setup_parity(&io_info, io_info.src_buf, io_info.src_md_buf);
io_info_setup_parity(&io_info);
test_raid5f_write_request(&io_info);
break;
default:
@@ -868,7 +717,6 @@ run_for_each_raid5f_config(void (*test_fn)(struct raid_bdev *raid_bdev,
RAID_PARAMS_FOR_EACH(params) {
struct raid5f_info *r5f_info;
struct raid_bdev_io_channel raid_ch = { 0 };
int i;
r5f_info = create_raid5f(params);
@@ -876,13 +724,6 @@ run_for_each_raid5f_config(void (*test_fn)(struct raid_bdev *raid_bdev,
raid_ch.base_channel = calloc(params->num_base_bdevs, sizeof(struct spdk_io_channel *));
SPDK_CU_ASSERT_FATAL(raid_ch.base_channel != NULL);
for (i = 0; i < params->num_base_bdevs; i++) {
if (g_test_degraded && i == 0) {
continue;
}
raid_ch.base_channel[i] = (void *)1;
}
raid_ch.module_channel = raid5f_get_io_channel(r5f_info->raid_bdev);
SPDK_CU_ASSERT_FATAL(raid_ch.module_channel);
@@ -900,31 +741,41 @@ run_for_each_raid5f_config(void (*test_fn)(struct raid_bdev *raid_bdev,
#define RAID5F_TEST_FOR_EACH_STRIPE(raid_bdev, i) \
for (i = 0; i < spdk_min(raid_bdev->num_base_bdevs, ((struct raid5f_info *)raid_bdev->module_private)->total_stripes); i++)
struct test_request_conf {
uint64_t stripe_offset_blocks;
uint64_t num_blocks;
};
static void
__test_raid5f_submit_read_request(struct raid_bdev *raid_bdev, struct raid_bdev_io_channel *raid_ch)
{
struct raid5f_info *r5f_info = raid_bdev->module_private;
uint32_t strip_size = raid_bdev->strip_size;
uint64_t stripe_index;
unsigned int i;
for (i = 0; i < raid5f_stripe_data_chunks_num(raid_bdev); i++) {
uint64_t stripe_offset = i * strip_size;
struct test_request_conf test_requests[] = {
{ 0, 1 },
{ 0, strip_size },
{ 0, strip_size + 1 },
{ 0, r5f_info->stripe_blocks },
{ 1, 1 },
{ 1, strip_size },
{ 1, strip_size + 1 },
{ strip_size, 1 },
{ strip_size, strip_size },
{ strip_size, strip_size + 1 },
{ strip_size - 1, 1 },
{ strip_size - 1, strip_size },
{ strip_size - 1, strip_size + 1 },
{ strip_size - 1, 2 },
};
for (i = 0; i < SPDK_COUNTOF(test_requests); i++) {
struct test_request_conf *t = &test_requests[i];
uint64_t stripe_index;
RAID5F_TEST_FOR_EACH_STRIPE(raid_bdev, stripe_index) {
test_raid5f_submit_rw_request(r5f_info, raid_ch, SPDK_BDEV_IO_TYPE_READ,
stripe_index, stripe_offset, 1);
test_raid5f_submit_rw_request(r5f_info, raid_ch, SPDK_BDEV_IO_TYPE_READ,
stripe_index, stripe_offset, strip_size);
test_raid5f_submit_rw_request(r5f_info, raid_ch, SPDK_BDEV_IO_TYPE_READ,
stripe_index, stripe_offset + strip_size - 1, 1);
if (strip_size <= 2) {
continue;
}
test_raid5f_submit_rw_request(r5f_info, raid_ch, SPDK_BDEV_IO_TYPE_READ,
stripe_index, stripe_offset + 1, strip_size - 2);
stripe_index, t->stripe_offset_blocks, t->num_blocks);
}
}
}
@@ -955,14 +806,14 @@ __test_raid5f_stripe_request_map_iovecs(struct raid_bdev *raid_bdev,
size_t iovcnt = SPDK_COUNTOF(iovs);
int ret;
init_io_info(&io_info, r5f_info, raid_ch, SPDK_BDEV_IO_TYPE_WRITE, 0, 0, 0);
init_io_info(&io_info, r5f_info, raid_ch, SPDK_BDEV_IO_TYPE_WRITE, 0, 0);
raid_io = get_raid_io(&io_info);
raid_io = get_raid_io(&io_info, 0, 0);
bdev_io = spdk_bdev_io_from_ctx(raid_io);
bdev_io->u.bdev.iovs = iovs;
bdev_io->u.bdev.iovcnt = iovcnt;
-	stripe_req = raid5f_stripe_request_alloc(r5ch, STRIPE_REQ_WRITE);
+	stripe_req = raid5f_stripe_request_alloc(r5ch);
SPDK_CU_ASSERT_FATAL(stripe_req != NULL);
stripe_req->parity_chunk = &stripe_req->chunks[raid5f_stripe_data_chunks_num(raid_bdev)];
@ -1039,7 +890,7 @@ __test_raid5f_chunk_write_error(struct raid_bdev *raid_bdev, struct raid_bdev_io
RAID5F_TEST_FOR_EACH_STRIPE(raid_bdev, stripe_index) {
RAID_FOR_EACH_BASE_BDEV(raid_bdev, base_bdev_info) {
 			init_io_info(&io_info, r5f_info, raid_ch, SPDK_BDEV_IO_TYPE_WRITE,
-				     stripe_index, 0, r5f_info->stripe_blocks);
+				     stripe_index * r5f_info->stripe_blocks, r5f_info->stripe_blocks);
io_info.error.type = error_type;
io_info.error.bdev = base_bdev_info->bdev;
@ -1099,7 +950,7 @@ __test_raid5f_chunk_write_error_with_enomem(struct raid_bdev *raid_bdev,
}
 		init_io_info(&io_info, r5f_info, raid_ch, SPDK_BDEV_IO_TYPE_WRITE,
-			     stripe_index, 0, r5f_info->stripe_blocks);
+			     stripe_index * r5f_info->stripe_blocks, r5f_info->stripe_blocks);
io_info.error.type = TEST_BDEV_ERROR_NOMEM;
io_info.error.bdev = base_bdev_info->bdev;
@ -1123,20 +974,6 @@ test_raid5f_chunk_write_error_with_enomem(void)
run_for_each_raid5f_config(__test_raid5f_chunk_write_error_with_enomem);
}
static void
test_raid5f_submit_full_stripe_write_request_degraded(void)
{
g_test_degraded = true;
run_for_each_raid5f_config(__test_raid5f_submit_full_stripe_write_request);
}
static void
test_raid5f_submit_read_request_degraded(void)
{
g_test_degraded = true;
run_for_each_raid5f_config(__test_raid5f_submit_read_request);
}
int
main(int argc, char **argv)
{
@ -1146,16 +983,13 @@ main(int argc, char **argv)
CU_set_error_action(CUEA_ABORT);
CU_initialize_registry();
-	suite = CU_add_suite_with_setup_and_teardown("raid5f", test_suite_init, test_suite_cleanup,
-			test_setup, NULL);
+	suite = CU_add_suite("raid5f", test_setup, test_cleanup);
CU_ADD_TEST(suite, test_raid5f_start);
CU_ADD_TEST(suite, test_raid5f_submit_read_request);
CU_ADD_TEST(suite, test_raid5f_stripe_request_map_iovecs);
CU_ADD_TEST(suite, test_raid5f_submit_full_stripe_write_request);
CU_ADD_TEST(suite, test_raid5f_chunk_write_error);
CU_ADD_TEST(suite, test_raid5f_chunk_write_error_with_enomem);
CU_ADD_TEST(suite, test_raid5f_submit_full_stripe_write_request_degraded);
CU_ADD_TEST(suite, test_raid5f_submit_read_request_degraded);
allocate_threads(1);
set_thread(0);


@ -893,23 +893,6 @@ spdk_lvs_notify_hotplug(const void *esnap_id, uint32_t id_len,
return g_bdev_is_missing;
}
void
spdk_lvol_shallow_copy(struct spdk_lvol *lvol, struct spdk_bs_dev *ext_dev,
spdk_lvol_op_complete cb_fn, void *cb_arg)
{
if (lvol == NULL) {
cb_fn(cb_arg, -ENODEV);
return;
}
if (ext_dev == NULL) {
cb_fn(cb_arg, -ENODEV);
return;
}
cb_fn(cb_arg, 0);
}
static void
lvol_store_op_complete(void *cb_arg, int lvserrno)
{
@ -950,12 +933,6 @@ vbdev_lvol_rename_complete(void *cb_arg, int lvolerrno)
g_lvolerrno = lvolerrno;
}
static void
vbdev_lvol_shallow_copy_complete(void *cb_arg, int lvolerrno)
{
g_lvolerrno = lvolerrno;
}
static void
ut_lvs_destroy(void)
{
@ -1950,54 +1927,6 @@ ut_lvol_esnap_clone_bad_args(void)
g_base_bdev = NULL;
}
static void
ut_lvol_shallow_copy(void)
{
struct spdk_lvol_store *lvs;
int sz = 10;
int rc;
struct spdk_lvol *lvol = NULL;
/* Lvol store is successfully created */
rc = vbdev_lvs_create("bdev", "lvs", 0, LVS_CLEAR_WITH_UNMAP, 0,
lvol_store_op_with_handle_complete, NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
CU_ASSERT(g_lvol_store->bs_dev != NULL);
lvs = g_lvol_store;
/* Successful lvol create */
g_lvolerrno = -1;
rc = vbdev_lvol_create(lvs, "lvol_sc", sz, false, LVOL_CLEAR_WITH_DEFAULT,
vbdev_lvol_create_complete,
NULL);
SPDK_CU_ASSERT_FATAL(rc == 0);
SPDK_CU_ASSERT_FATAL(g_lvol != NULL);
CU_ASSERT(g_lvolerrno == 0);
lvol = g_lvol;
/* Successful shallow copy */
g_lvolerrno = -1;
lvol_already_opened = false;
vbdev_lvol_shallow_copy(lvol, "bdev_sc", vbdev_lvol_shallow_copy_complete, NULL);
CU_ASSERT(g_lvolerrno == 0);
/* Shallow copy error with NULL lvol */
vbdev_lvol_shallow_copy(NULL, "", vbdev_lvol_shallow_copy_complete, NULL);
CU_ASSERT(g_lvolerrno != 0);
/* Successful lvol destroy */
vbdev_lvol_destroy(g_lvol, lvol_store_op_complete, NULL);
CU_ASSERT(g_lvol == NULL);
/* Destroy lvol store */
vbdev_lvs_destruct(lvs, lvol_store_op_complete, NULL);
CU_ASSERT(g_lvserrno == 0);
CU_ASSERT(g_lvol_store == NULL);
}
int
main(int argc, char **argv)
{
@ -2030,7 +1959,6 @@ main(int argc, char **argv)
CU_ADD_TEST(suite, ut_lvol_seek);
CU_ADD_TEST(suite, ut_esnap_dev_create);
CU_ADD_TEST(suite, ut_lvol_esnap_clone_bad_args);
CU_ADD_TEST(suite, ut_lvol_shallow_copy);
allocate_threads(1);
set_thread(0);


@ -13,7 +13,6 @@
#include "common/lib/ut_multithread.c"
#include "../bs_dev_common.c"
#include "thread/thread.c"
#include "ext_dev.c"
#include "blob/blobstore.c"
#include "blob/request.c"
#include "blob/zeroes.c"
@ -8578,129 +8577,6 @@ blob_is_degraded(void)
g_blob->back_bs_dev = NULL;
}
static void
bs_dev_io_complete_cb(struct spdk_io_channel *channel, void *cb_arg, int bserrno)
{
g_bserrno = bserrno;
}
static void
blob_shallow_copy(void)
{
struct spdk_blob_store *bs = g_bs;
struct spdk_blob_opts blob_opts;
struct spdk_blob *blob;
spdk_blob_id blobid;
uint64_t num_clusters = 4;
struct spdk_bs_dev *ext_dev;
struct spdk_bs_dev_cb_args ext_args;
struct spdk_io_channel *bdev_ch, *blob_ch;
uint8_t buf1[DEV_BUFFER_BLOCKLEN];
uint8_t buf2[DEV_BUFFER_BLOCKLEN];
uint64_t io_units_per_cluster;
uint64_t offset;
blob_ch = spdk_bs_alloc_io_channel(bs);
SPDK_CU_ASSERT_FATAL(blob_ch != NULL);
/* Set blob dimension and as thin provisioned */
ut_spdk_blob_opts_init(&blob_opts);
blob_opts.thin_provision = true;
blob_opts.num_clusters = num_clusters;
/* Create a blob */
blob = ut_blob_create_and_open(bs, &blob_opts);
SPDK_CU_ASSERT_FATAL(blob != NULL);
blobid = spdk_blob_get_id(blob);
io_units_per_cluster = bs_io_units_per_cluster(blob);
/* Write on cluster 2 and 4 of blob */
for (offset = io_units_per_cluster; offset < 2 * io_units_per_cluster; offset++) {
memset(buf1, offset, DEV_BUFFER_BLOCKLEN);
spdk_blob_io_write(blob, blob_ch, buf1, offset, 1, blob_op_complete, NULL);
poll_threads();
CU_ASSERT(g_bserrno == 0);
}
for (offset = 3 * io_units_per_cluster; offset < 4 * io_units_per_cluster; offset++) {
memset(buf1, offset, DEV_BUFFER_BLOCKLEN);
spdk_blob_io_write(blob, blob_ch, buf1, offset, 1, blob_op_complete, NULL);
poll_threads();
CU_ASSERT(g_bserrno == 0);
}
/* Make a snapshot over blob */
spdk_bs_create_snapshot(bs, blobid, NULL, blob_op_with_id_complete, NULL);
poll_threads();
CU_ASSERT(g_bserrno == 0);
/* Write on cluster 1 and 3 of blob */
for (offset = 0; offset < io_units_per_cluster; offset++) {
memset(buf1, offset, DEV_BUFFER_BLOCKLEN);
spdk_blob_io_write(blob, blob_ch, buf1, offset, 1, blob_op_complete, NULL);
poll_threads();
CU_ASSERT(g_bserrno == 0);
}
for (offset = 2 * io_units_per_cluster; offset < 3 * io_units_per_cluster; offset++) {
memset(buf1, offset, DEV_BUFFER_BLOCKLEN);
spdk_blob_io_write(blob, blob_ch, buf1, offset, 1, blob_op_complete, NULL);
poll_threads();
CU_ASSERT(g_bserrno == 0);
}
/* Create a spdk_bs_dev */
ext_dev = init_ext_dev(num_clusters * 1024 * 1024, DEV_BUFFER_BLOCKLEN);
/* Make a shallow copy of blob over bdev */
spdk_bs_blob_shallow_copy(bs, blob_ch, blobid, ext_dev, blob_op_complete, NULL);
CU_ASSERT(spdk_blob_get_shallow_copy_total_clusters(blob) == 2);
CU_ASSERT(spdk_blob_get_shallow_copy_copied_clusters(blob) == 0);
poll_threads();
CU_ASSERT(g_bserrno == 0);
/* Read from bdev */
/* Only cluster 1 and 3 must be filled */
bdev_ch = ext_dev->create_channel(ext_dev);
SPDK_CU_ASSERT_FATAL(bdev_ch != NULL);
ext_args.cb_fn = bs_dev_io_complete_cb;
for (offset = 0; offset < io_units_per_cluster; offset++) {
memset(buf1, offset, DEV_BUFFER_BLOCKLEN);
ext_dev->read(ext_dev, bdev_ch, buf2, offset, 1, &ext_args);
poll_threads();
CU_ASSERT(g_bserrno == 0);
CU_ASSERT(memcmp(buf1, buf2, DEV_BUFFER_BLOCKLEN) == 0);
}
for (offset = io_units_per_cluster; offset < 2 * io_units_per_cluster; offset++) {
memset(buf1, 0, DEV_BUFFER_BLOCKLEN);
ext_dev->read(ext_dev, bdev_ch, buf2, offset, 1, &ext_args);
poll_threads();
CU_ASSERT(g_bserrno == 0);
CU_ASSERT(memcmp(buf1, buf2, DEV_BUFFER_BLOCKLEN) == 0);
}
for (offset = 2 * io_units_per_cluster; offset < 3 * io_units_per_cluster; offset++) {
memset(buf1, offset, DEV_BUFFER_BLOCKLEN);
ext_dev->read(ext_dev, bdev_ch, buf2, offset, 1, &ext_args);
poll_threads();
CU_ASSERT(g_bserrno == 0);
CU_ASSERT(memcmp(buf1, buf2, DEV_BUFFER_BLOCKLEN) == 0);
}
for (offset = 3 * io_units_per_cluster; offset < 4 * io_units_per_cluster; offset++) {
memset(buf1, 0, DEV_BUFFER_BLOCKLEN);
ext_dev->read(ext_dev, bdev_ch, buf2, offset, 1, &ext_args);
poll_threads();
CU_ASSERT(g_bserrno == 0);
CU_ASSERT(memcmp(buf1, buf2, DEV_BUFFER_BLOCKLEN) == 0);
}
/* Clean up */
ext_dev->destroy_channel(ext_dev, bdev_ch);
ext_dev->destroy(ext_dev);
spdk_bs_free_io_channel(blob_ch);
ut_blob_close_and_delete(bs, blob);
poll_threads();
}
static void
suite_bs_setup(void)
{
@ -8913,7 +8789,6 @@ main(int argc, char **argv)
CU_ADD_TEST(suite_esnap_bs, blob_esnap_clone_reload);
CU_ADD_TEST(suite_esnap_bs, blob_esnap_hotplug);
CU_ADD_TEST(suite_blob, blob_is_degraded);
CU_ADD_TEST(suite_bs, blob_shallow_copy);
allocate_threads(2);
set_thread(0);


@ -1,81 +0,0 @@
/* SPDX-License-Identifier: BSD-3-Clause
* Copyright (C) 2023 SUSE LLC.
* All rights reserved.
*/
#include "thread/thread_internal.h"
#include "spdk/blob.h"
#define EXT_DEV_BUFFER_SIZE (4 * 1024 * 1024)
uint8_t g_ext_dev_buffer[EXT_DEV_BUFFER_SIZE];
struct spdk_io_channel g_ext_io_channel;
static struct spdk_io_channel *
ext_dev_create_channel(struct spdk_bs_dev *dev)
{
return &g_ext_io_channel;
}
static void
ext_dev_destroy_channel(struct spdk_bs_dev *dev, struct spdk_io_channel *channel)
{
}
static void
ext_dev_destroy(struct spdk_bs_dev *dev)
{
free(dev);
}
static void
ext_dev_read(struct spdk_bs_dev *dev, struct spdk_io_channel *channel, void *payload,
uint64_t lba, uint32_t lba_count,
struct spdk_bs_dev_cb_args *cb_args)
{
uint64_t offset, length;
offset = lba * dev->blocklen;
length = lba_count * dev->blocklen;
SPDK_CU_ASSERT_FATAL(offset + length <= EXT_DEV_BUFFER_SIZE);
if (length > 0) {
memcpy(payload, &g_ext_dev_buffer[offset], length);
}
cb_args->cb_fn(cb_args->channel, cb_args->cb_arg, 0);
}
static void
ext_dev_write(struct spdk_bs_dev *dev, struct spdk_io_channel *channel, void *payload,
uint64_t lba, uint32_t lba_count,
struct spdk_bs_dev_cb_args *cb_args)
{
uint64_t offset, length;
offset = lba * dev->blocklen;
length = lba_count * dev->blocklen;
SPDK_CU_ASSERT_FATAL(offset + length <= EXT_DEV_BUFFER_SIZE);
memcpy(&g_ext_dev_buffer[offset], payload, length);
cb_args->cb_fn(cb_args->channel, cb_args->cb_arg, 0);
}
static struct spdk_bs_dev *
init_ext_dev(uint64_t blockcnt, uint32_t blocklen)
{
struct spdk_bs_dev *dev = calloc(1, sizeof(*dev));
SPDK_CU_ASSERT_FATAL(dev != NULL);
dev->create_channel = ext_dev_create_channel;
dev->destroy_channel = ext_dev_destroy_channel;
dev->destroy = ext_dev_destroy;
dev->read = ext_dev_read;
dev->write = ext_dev_write;
dev->blockcnt = blockcnt;
dev->blocklen = blocklen;
return dev;
}


@ -59,7 +59,6 @@ int g_resize_rc;
int g_inflate_rc;
int g_remove_rc;
bool g_lvs_rename_blob_open_error = false;
bool g_blob_read_only = false;
struct spdk_lvol_store *g_lvol_store;
struct spdk_lvol *g_lvol;
spdk_blob_id g_blobid = 1;
@ -137,7 +136,7 @@ spdk_bs_iter_first(struct spdk_blob_store *bs,
uint64_t
spdk_blob_get_num_clusters(struct spdk_blob *blob)
{
-	return 1;
+	return 0;
}
void
@ -248,14 +247,6 @@ spdk_blob_is_thin_provisioned(struct spdk_blob *blob)
return blob->thin_provisioned;
}
void
spdk_bs_blob_shallow_copy(struct spdk_blob_store *bs, struct spdk_io_channel *channel,
spdk_blob_id blobid, struct spdk_bs_dev *ext_dev,
spdk_blob_op_complete cb_fn, void *cb_arg)
{
cb_fn(cb_arg, 0);
}
DEFINE_STUB(spdk_bs_get_page_size, uint64_t, (struct spdk_blob_store *bs), BS_PAGE_SIZE);
int
@ -466,12 +457,6 @@ spdk_blob_open_opts_init(struct spdk_blob_open_opts *opts, size_t opts_size)
opts->clear_method = BLOB_CLEAR_WITH_DEFAULT;
}
bool
spdk_blob_is_read_only(struct spdk_blob *blob)
{
return g_blob_read_only;
}
void
spdk_bs_create_blob(struct spdk_blob_store *bs,
spdk_blob_op_with_id_complete cb_fn, void *cb_arg)
@ -3313,75 +3298,6 @@ lvol_get_by(void)
free_dev(&dev2);
}
static void
lvol_shallow_copy(void)
{
struct lvol_ut_bs_dev bs_dev;
struct spdk_lvs_opts opts;
struct spdk_bs_dev ext_dev;
int rc = 0;
init_dev(&bs_dev);
ext_dev.blocklen = DEV_BUFFER_BLOCKLEN;
ext_dev.blockcnt = BS_CLUSTER_SIZE / DEV_BUFFER_BLOCKLEN;
spdk_lvs_opts_init(&opts);
snprintf(opts.name, sizeof(opts.name), "lvs");
g_lvserrno = -1;
rc = spdk_lvs_init(&bs_dev.bs_dev, &opts, lvol_store_op_with_handle_complete, NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol_store != NULL);
spdk_lvol_create(g_lvol_store, "lvol", BS_CLUSTER_SIZE, false, LVOL_CLEAR_WITH_DEFAULT,
lvol_op_with_handle_complete, NULL);
CU_ASSERT(g_lvserrno == 0);
SPDK_CU_ASSERT_FATAL(g_lvol != NULL);
/* Successful shallow copy */
g_blob_read_only = true;
spdk_lvol_shallow_copy(g_lvol, &ext_dev, op_complete, NULL);
CU_ASSERT(g_lvserrno == 0);
/* Shallow copy with null lvol */
spdk_lvol_shallow_copy(NULL, &ext_dev, op_complete, NULL);
CU_ASSERT(g_lvserrno != 0);
/* Shallow copy with null ext_dev */
spdk_lvol_shallow_copy(g_lvol, NULL, op_complete, NULL);
CU_ASSERT(g_lvserrno != 0);
/* Shallow copy with invalid ext_dev size */
ext_dev.blockcnt = 1;
spdk_lvol_shallow_copy(g_lvol, &ext_dev, op_complete, NULL);
CU_ASSERT(g_lvserrno != 0);
/* Shallow copy with writable lvol */
g_blob_read_only = false;
spdk_lvol_shallow_copy(g_lvol, &ext_dev, op_complete, NULL);
CU_ASSERT(g_lvserrno != 0);
spdk_lvol_close(g_lvol, op_complete, NULL);
CU_ASSERT(g_lvserrno == 0);
spdk_lvol_destroy(g_lvol, op_complete, NULL);
CU_ASSERT(g_lvserrno == 0);
g_lvserrno = -1;
rc = spdk_lvs_unload(g_lvol_store, op_complete, NULL);
CU_ASSERT(rc == 0);
CU_ASSERT(g_lvserrno == 0);
g_lvol_store = NULL;
free_dev(&bs_dev);
/* Make sure that all references to the io_channel were closed after
 * the shallow copy call
 */
CU_ASSERT(g_io_channel == NULL);
}
int
main(int argc, char **argv)
{
@ -3428,7 +3344,6 @@ main(int argc, char **argv)
CU_ADD_TEST(suite, lvol_esnap_missing);
CU_ADD_TEST(suite, lvol_esnap_hotplug);
CU_ADD_TEST(suite, lvol_get_by);
CU_ADD_TEST(suite, lvol_shallow_copy);
allocate_threads(1);
set_thread(0);


@ -1361,7 +1361,7 @@ test_reservation_write_exclusive(void)
SPDK_CU_ASSERT_FATAL(rc == 0);
/* Unregister Host C */
-	spdk_uuid_set_null(&g_ns_info.reg_hostid[2]);
+	memset(&g_ns_info.reg_hostid[2], 0, sizeof(struct spdk_uuid));
/* Test Case: Read and Write commands from non-registrant Host C */
cmd.nvme_cmd.opc = SPDK_NVME_OPC_WRITE;
@ -1430,7 +1430,7 @@ _test_reservation_write_exclusive_regs_only_and_all_regs(enum spdk_nvme_reservat
SPDK_CU_ASSERT_FATAL(rc == 0);
/* Unregister Host C */
-	spdk_uuid_set_null(&g_ns_info.reg_hostid[2]);
+	memset(&g_ns_info.reg_hostid[2], 0, sizeof(struct spdk_uuid));
/* Test Case: Read and Write commands from non-registrant Host C */
cmd.nvme_cmd.opc = SPDK_NVME_OPC_READ;
@ -1472,7 +1472,7 @@ _test_reservation_exclusive_access_regs_only_and_all_regs(enum spdk_nvme_reserva
SPDK_CU_ASSERT_FATAL(rc == 0);
/* Unregister Host B */
-	spdk_uuid_set_null(&g_ns_info.reg_hostid[1]);
+	memset(&g_ns_info.reg_hostid[1], 0, sizeof(struct spdk_uuid));
/* Test Case: Issue a Read command from Host B */
cmd.nvme_cmd.opc = SPDK_NVME_OPC_READ;


@ -20,7 +20,6 @@ function unittest_bdev() {
$valgrind $testdir/lib/bdev/bdev.c/bdev_ut
$valgrind $testdir/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut
$valgrind $testdir/lib/bdev/raid/bdev_raid.c/bdev_raid_ut
$valgrind $testdir/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut
$valgrind $testdir/lib/bdev/raid/concat.c/concat_ut
$valgrind $testdir/lib/bdev/raid/raid1.c/raid1_ut
$valgrind $testdir/lib/bdev/bdev_zone.c/bdev_zone_ut