A copy operation is defined by the source and destination LBAs and the LBA
count to copy. For the destination LBA and LBA count we reuse the existing
fields `offset_blocks` and `num_blocks` in `struct spdk_bdev_io`. For the
source LBA a new field `src_offset_blocks` was added.
The `spdk_bdev_get_max_copy()` function can be used to retrieve the maximum
possible unsplit copy size. A value of zero means unlimited. It is allowed
to submit a larger copy, but it will be split into several bdev IOs.
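A minimal usage sketch, assuming the spdk_bdev_copy_blocks() API added in
this series (submit_copy and copy_done are illustrative names; desc/ch are
an already opened descriptor and its I/O channel):

#include "spdk/bdev.h"

static void
copy_done(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
{
        /* Inspect 'success' if needed, then release the I/O. */
        spdk_bdev_free_io(bdev_io);
}

static int
submit_copy(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch,
            uint64_t dst_offset_blocks, uint64_t src_offset_blocks,
            uint64_t num_blocks)
{
        /* Copies larger than spdk_bdev_get_max_copy() blocks are split by the bdev layer. */
        return spdk_bdev_copy_blocks(desc, ch, dst_offset_blocks, src_offset_blocks,
                                     num_blocks, copy_done, NULL);
}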
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I2ad56294b6c062595c026ffcf9b435f0100d3d7e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14344
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Add a new parameter "-c" to display the per-channel IO statistics
for the requested Bdev.
./scripts/rpc.py bdev_get_iostat -b Malloc0 -h
usage: rpc.py [options] bdev_get_iostat [-h] [-b NAME] [-c]
optional arguments:
-h, --help show this help message and exit
-b NAME, --name NAME Name of the Blockdev. Example: Nvme0n1
-c, --per-channel Display per channel IO stats for specified device
This gives more intuitive information on how each channel, together with
its associated thread, processes IOs for the same Bdev.
Please also be aware that the IO statistics are collected from each SPDK
thread's channel, so they relate to the SPDK thread. With dynamic
scheduling, different SPDK threads may run on the same core.
In that case, each channel's IO statistics are returned separately by the
RPC call and, if needed, the data must be parsed further to derive
per-core information, although usually there is one thread per core.
Alternatively, the user can run the framework_get_reactors RPC method to
get the mapping between threads and CPU cores, and thus obtain precise
information about the IOs running on each thread and each core for the
same Bdev.
Change-Id: I39d6a2c9faa868e3c1d7fd0fb6e7c020df982585
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13011
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
And also related function pointers and APIs:
spdk_bdev_for_each_channel_msg;
spdk_bdev_for_each_channel_done;
spdk_bdev_for_each_channel_continue;
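A minimal sketch of how a bdev module uses these APIs (visit_channel,
visit_done and visit_all_channels are illustrative names):

#include "spdk/bdev_module.h"

static void
visit_channel(struct spdk_bdev_channel_iter *i, struct spdk_bdev *bdev,
              struct spdk_io_channel *ch, void *ctx)
{
        /* Runs on each channel's own thread; continue the iteration when done. */
        spdk_bdev_for_each_channel_continue(i, 0);
}

static void
visit_done(struct spdk_bdev *bdev, void *ctx, int status)
{
        /* Runs once on the original thread after all channels have been visited. */
}

static void
visit_all_channels(struct spdk_bdev *bdev)
{
        spdk_bdev_for_each_channel(bdev, visit_channel, NULL, visit_done);
}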
Change-Id: I52f0f6f27717d53c238faf2f998810c9c5ee45d4
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14614
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
The following patches will allow the caller to specify a custom
completion callback to spdk_bdev_part_submit_request(). To make that
easy, consolidate the completions of all I/O types into
bdev_part_complete_io().
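A rough sketch of what such a consolidated completion can look like (the
actual bdev_part code differs in details; part_io stands for the part's own
bdev_io passed as cb_arg):

static void
bdev_part_complete_io(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
{
        struct spdk_bdev_io *part_io = cb_arg;

        /* Propagate the base bdev's status to the part's I/O, then free the base I/O. */
        spdk_bdev_io_complete(part_io, success ? SPDK_BDEV_IO_STATUS_SUCCESS :
                                                 SPDK_BDEV_IO_STATUS_FAILED);
        spdk_bdev_free_io(bdev_io);
}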
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I083695189daa7e5271787c50947e428d01a83677
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15001
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
If verify is enabled, both data and metadata are checked. However,
if verify is disabled, read data is not checked even if dif_check_flags
is not zero. Add DIF/DIX verification for read I/O at completion.
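A minimal sketch of such a check using the public DIF library
(verify_read_dif is an illustrative helper; block_size, md_size,
dif_check_flags and init_ref_tag are assumed to come from the job/bdev
configuration):

#include "spdk/dif.h"

static int
verify_read_dif(struct iovec *iovs, int iovcnt, uint32_t num_blocks,
                uint32_t block_size, uint32_t md_size, uint32_t dif_check_flags,
                uint32_t init_ref_tag)
{
        struct spdk_dif_ctx dif_ctx;
        struct spdk_dif_error err_blk;
        int rc;

        rc = spdk_dif_ctx_init(&dif_ctx, block_size, md_size, true /* md_interleave */,
                               true /* dif_loc */, SPDK_DIF_TYPE1, dif_check_flags,
                               init_ref_tag, 0xFFFF, 0, 0, 0);
        if (rc != 0) {
                return rc;
        }
        /* Check guard/app/ref tags of the blocks that were just read. */
        return spdk_dif_verify(iovs, iovcnt, num_blocks, &dif_ctx, &err_blk);
}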
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ibde44bc244f84e40cef68653978191363acca5ce
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15074
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
For write, verify DIF/DIX before submission and, for read, verify
DIF/DIX after successful completion.
As with the NVMe bdev module and the null bdev module, DIF/DIX
verification is done based on the DIF type, and DIF insert/strip is
not supported.
In the near future, the bdev I/O APIs will carry an I/O flag down to the
underlying bdev, and the malloc bdev module will then be able to decide
on DIF/DIX verification based on that flag.
One important detail is to set up protection information when creating a
malloc disk. Otherwise, all initial reads would fail if protection
information is enabled.
For users, add an explanation of the dif_type parameter to
doc/jsonrpc.md.
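An illustrative sketch of that setup step (malloc_disk_init_pi is a
hypothetical helper; the real module derives the parameters from the
created bdev):

#include "spdk/dif.h"

static int
malloc_disk_init_pi(void *buf, uint64_t buf_len, uint32_t block_size, uint32_t md_size,
                    enum spdk_dif_type dif_type, uint32_t dif_check_flags,
                    uint32_t num_blocks)
{
        struct spdk_dif_ctx dif_ctx;
        struct iovec iov = { .iov_base = buf, .iov_len = buf_len };
        int rc;

        rc = spdk_dif_ctx_init(&dif_ctx, block_size, md_size, true /* interleaved */,
                               true, dif_type, dif_check_flags, 0, 0xFFFF, 0, 0, 0);
        if (rc != 0) {
                return rc;
        }
        /* Pre-generate guard/app/ref tags so the very first reads verify cleanly. */
        return spdk_dif_generate(&iov, 1, num_blocks, &dif_ctx);
}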
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I93757b77c03cade766c872e418bb46d44918bee2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14985
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
The malloc bdev module supports both interleaved and separate metadata
with this patch.
Unlike the null bdev module, opts->block_size is the data block size, and
the internal block size is calculated as the sum of opts->block_size and
opts->md_size if opts->md_interleave is true, or as opts->block_size
otherwise. This is more intuitive. Additionally, opts->md_size accepts
only 0, 8, 16, 32, 64, or 128.
Protection information (T10 DIF/DIX) will be supported in the following
patches.
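A minimal sketch of the block size calculation described above (field
names follow the bdev layer; malloc_disk_set_geometry is an illustrative
helper):

static void
malloc_disk_set_geometry(struct spdk_bdev *bdev, const struct malloc_bdev_opts *opts)
{
        /* The exported block size includes metadata only when it is interleaved. */
        bdev->blocklen = opts->md_interleave ? opts->block_size + opts->md_size
                                             : opts->block_size;
        bdev->md_len = opts->md_size;
        bdev->md_interleave = opts->md_interleave;
}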
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Icd9e92c8ea94e30139e416f8c533ab4cf473d2a8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14984
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Define an options structure, malloc_bdev_opts, and use it directly for
the bdev_malloc_create RPC. To do this, bdev_malloc.h includes
bdev_module.h instead of bdev.h to get the definition of struct
spdk_uuid, and struct malloc_bdev_opts holds an instance of struct
spdk_uuid. Clean up the file inclusions along the way. Furthermore, use
spdk_uuid_copy() to copy the uuid from malloc_bdev_opts to the malloc
disk rather than the = operator, and remove a duplicated size check.
These changes make it easier to add more parameters at creation time.
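A rough sketch of the structure and the uuid copy (the exact member list
is an assumption; later patches add metadata and DIF related fields):

#include "spdk/bdev_module.h"
#include "spdk/uuid.h"

struct malloc_bdev_opts {
        char             *name;
        struct spdk_uuid uuid;
        uint64_t         num_blocks;
        uint32_t         block_size;
};

static void
malloc_disk_set_uuid(struct spdk_bdev *disk, const struct malloc_bdev_opts *opts)
{
        /* Copy the UUID via the API instead of assigning the struct directly. */
        spdk_uuid_copy(&disk->uuid, &opts->uuid);
}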
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ief25f12586c21b1666180ce10cfc6256ede8eba9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14982
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Using a custom decoder for the malloc disk's uuid in the
bdev_malloc_create RPC simplifies the code. Furthermore, once we add an
options structure, we will be able to embed it directly into
struct rpc_construct_malloc.
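A minimal sketch of such a custom decoder (decode_uuid is an illustrative
name; it parses the JSON string straight into a struct spdk_uuid):

#include "spdk/json.h"
#include "spdk/uuid.h"

static int
decode_uuid(const struct spdk_json_val *val, void *out)
{
        char *str = NULL;
        int rc;

        rc = spdk_json_decode_string(val, &str);
        if (rc == 0) {
                rc = spdk_uuid_parse(out, str);
        }
        free(str);
        return rc;
}

/* Used in the object decoder table, roughly like:
 * {"uuid", offsetof(struct rpc_construct_malloc, uuid), decode_uuid, true},
 */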
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ib36fa628569f973218f2cc5ce65a51181cd9fb71
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15125
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
We do not support Soft RoCE anymore. Remove a workaround for a Soft RoCE
bug where we may receive a completion without error status after the
qpair is disconnected/destroyed. Then add an assert to check that
rdma_req->req is not NULL. This simplifies the code and the following
patches.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I80c349053adc0f79679eaf8a5d7265d555d3c2b0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14909
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The following patches will support SRQ, and the SRQ will be per poller.
We will need the SRQ in nvme_rdma_cq_process_completions().
It is not possible to identify the poller if only the poll_group is
passed to nvme_rdma_cq_process_completions().
Based on these thoughts, add a poll_group pointer to the poller and pass
the poller to nvme_rdma_cq_process_completions() instead of the
poll_group.
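A sketch of the resulting poller layout (member names are assumptions;
the point is the new back-pointer, which also lets later patches hang the
SRQ off the poller):

struct nvme_rdma_poller {
        struct nvme_rdma_poll_group     *group;         /* new back-pointer */
        struct ibv_cq                   *cq;
        /* SRQ and per-poller statistics are added by later patches. */
};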
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I322a7a0cc08bdcc8e87e720ad65dd8f0b6ae9112
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14282
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
NVMe-RDMA target has a helper function get_rdma_qpair_from_wc() and
uses it to identify a qpair from a WC.
NVMe-RDMA initiator has a similar function
nvme_rdma_poll_group_get_qpair_by_id().
NVMe-RDMA initiator will support SRQ in the following patches, and
it will want to identify a qpair from a WC.
get_rdma_qpair_from_wc() of NVMe-RDMA target uses wc->qp_num internally
anyway.
However, the upcoming custom transport for RDMA will have to use other
variables of WC.
Hence, it will be convenient to pass WC instead of qp_num if we consider
future enhancements.
Based on these thoughts, for the NVMe-RDMA initiator rename
nvme_rdma_poll_group_get_qpair_by_id() to get_rdma_qpair_from_wc(),
remove the now unnecessary declaration, and pass the WC instead of
qp_num.
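An illustrative sketch of the renamed helper (the member names used for
the lookup are assumptions based on the surrounding code):

static struct nvme_rdma_qpair *
get_rdma_qpair_from_wc(struct nvme_rdma_poll_group *group, struct ibv_wc *wc)
{
        struct spdk_nvme_qpair *qpair;

        STAILQ_FOREACH(qpair, &group->group.connected_qpairs, poll_group_stailq) {
                struct nvme_rdma_qpair *rqpair = nvme_rdma_qpair(qpair);

                if (rqpair->rdma_qp->qp->qp_num == wc->qp_num) {
                        return rqpair;
                }
        }
        return NULL;
}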
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: I01ead4730207e2c6ac53b83f151bd5f977a11465
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14279
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Poller will have more shared resources when SRQ is supported.
This is a preparation.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Ic3d1cb93dde3f53653a9536a103e5518cebd58e1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14173
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If the qpair is in a poll group, nvme_rdma_ctrlr_disconnect_qpair() does
not poll the qpair until it is actually disconnected, even if its async
mode is disabled. Hence, spdk_nvme_ctrlr_free_io_qpair() removes the
qpair from the poll group while it is being disconnected.
On the other hand, an I/O qpair is destroyed only after it is actually
disconnected.
When SRQ is enabled and used, an SRQ is destroyed if the corresponding
poller no longer has any I/O qpair after an I/O qpair is removed from
the poller.
In particular, if we use spdk_nvme_ctrlr_free_io_qpair(), an SRQ is
destroyed before the corresponding I/O qpairs are destroyed.
Destroying the SRQ then fails because it is still referenced by I/O
qpairs.
This bug was found when running the SPDK NVMe perf tool with SRQ.
The reason was that we relied on nvme_rdma_poll_group_process_completions()
calling disconnected_qpair_cb only after the qpair was actually
disconnected.
However, it is ensured that nvme_rdma_poll_group_process_completions()
calls disconnected_qpair_cb for any disconnected qpair.
Hence, remove the check whether qpair->poll_group is not NULL from
nvme_rdma_ctrlr_disconnect_qpair() and update the comment.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I0fde0d827eec3280e1cc5a0fce34d163a6069bc4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14908
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
With RDMA, the admin poller can experience a remote disconnect when
processing completions. The admin qpair will be disconnected to handle
this. The disconnect code path will manually complete queued aborts.
However, the completion callback for the abort will attempt to resubmit
the remaining queued aborts, which results in a very deep call stack and
can eventually cause a segfault.
The fix is to not resubmit queued aborts if the admin qpair is in any
kind of failed state.
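A heavily simplified sketch of the idea (function and state names here
are illustrative, not the exact driver code):

static void
abort_cpl_sketch(struct spdk_nvme_ctrlr *ctrlr)
{
        if (nvme_qpair_get_state(ctrlr->adminq) < NVME_QPAIR_CONNECTED) {
                /* Admin qpair is failed/disconnecting: do not resubmit queued
                 * aborts here; the disconnect path will complete them.
                 */
                return;
        }
        /* ... otherwise resubmit the next queued abort ... */
}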
Change-Id: I4a6f959232c8a1bd30c87ca50459014e556cbaa0
Signed-off-by: Vasuki Manikarnike <vasuki.manikarnike@hpe.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15114
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Currently, a failure in autotest_cleanup() causes the EXIT trap to run
it again. This step is not necessary, so clear the traps prior to
running it.
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I8f841186bf9e9c0e82c8afc409129acad393efe8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15154
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Currently, if the target directories are empty, the directories
themselves are not purged. Change that, as there is no need to keep
them around.
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: Ia1a9d60a91508e74b7e106e0217414a8927029b6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15153
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
This is needed for newer Bash where $! becomes the PID of udevadm
itself instead of its child. Not killing udevadm properly in this case
may be problematic when the script is run from an ssh session -
depending on the setup, if not all children of that session are dead,
ssh may block forever waiting for them to terminate (which won't happen,
as udevadm gets reparented to PID 1 instead of being reaped).
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I3248d0937e9dd919c054a97473f9bb998aa8ca63
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15147
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Reviewed-by: Kamil Godzwon <kamilx.godzwon@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This function was a bit cluttered, so simplify it. Also, it assumed the
nbd module was already loaded into the kernel prior to running it -
this could silently fail the nbd_all[@] setup in case it wasn't.
Always attempt to load the nbd driver before the setup and fail hard
if the driver is not in place.
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I5884973944eae3e827eaafec16ba72e2cb4f70e7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14994
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
zram doesn't support some of the uring-specific setup that xnvme may be
using, in particular IORING_SETUP_IOPOLL, so switch to something more
common like the null_blk module. For details see:
https://github.com/spdk/spdk/issues/2708
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: I48b91ba356b4054433eb1835fa3e2708c8d2628c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14920
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Reviewed-by: Kamil Godzwon <kamilx.godzwon@intel.com>
Reviewed-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
A loop inside 'nvme_tcp_qpair_process_completions' makes
'max_completions' actually behave like a minimum:
do {
        rc = nvme_tcp_read_pdu(tqpair, &reaped);
        [...]
} while (reaped < max_completions);
Before this change the 'max_completions' constraint, in its true sense,
was not respected, and the loop inside 'nvme_tcp_read_pdu' could be
executed indefinitely as long as the recv state kept changing.
To prevent this behavior, max_completions must be passed to
'nvme_tcp_read_pdu' and used as an additional exit condition.
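A sketch of the change at the call site (prototypes simplified):

do {
        rc = nvme_tcp_read_pdu(tqpair, &reaped, max_completions);
        [...]
} while (reaped < max_completions);

Inside nvme_tcp_read_pdu(), the recv-state loop then additionally stops
once the number of reaped completions reaches max_completions.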
Signed-off-by: Szulik, Maciej <maciej.szulik@intel.com>
Change-Id: I28da962f4a62f08ddb51915b5d0dae9611a82dee
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15136
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Enables Dependabot scanning on the spdk repo.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I0db032c547d28fae8e6e09a0b3179c3680cf8db2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15113
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Some reset/disable paths are freeing the shadow doorbells without
switching the SQs back to BAR0. Fix this up, and add a small cleanup
when initializing the shadow doorbells.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Ia5e5b91b7dc696a558eb0ad59cc554abced47cca
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14901
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
To support SQs allocated to a poll group other than the controller's
main poll group, we need to make sure to poll those SQs when we wake up
and handle the controller interrupt. As they will be running in a
separate SPDK thread, we will arrange for all poll groups to wake up
when we receive an interrupt corresponding to a vfio-user message
arriving.
This can mean needless wakeups: we don't (yet) have a mechanism to only
wake up the poll groups that correspond to a particular SQ write.
Additionally, as we don't have any notion of a poll group per
controller, this ends up polling all SQs in the entire poll group, not
just the ones corresponding to the controller we were handling.
As this has potential performance issues in many cases, it defaults to
disabled.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I3d9f32625529455f8d55578ae9cd7b84265f67ab
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14120
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
spdk_nvme_probe_poll_async() can only return 0 or -EAGAIN.
But the code currently indicates that other values might be
possible - so change the code to remove that confusion.
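A typical polling loop relying on this contract (probe_ctx assumed to
come from spdk_nvme_probe_async()):

#include "spdk/nvme.h"

static void
wait_for_probe(struct spdk_nvme_probe_ctx *probe_ctx)
{
        int rc;

        do {
                /* -EAGAIN: probing still in progress; 0: probe context finished. */
                rc = spdk_nvme_probe_poll_async(probe_ctx);
        } while (rc == -EAGAIN);
}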
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I8e7b2e1183f6650043dd751d610d434d626efa7e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15110
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Rename 'bdev_nvme_compare_trids' to 'bdev_nvme_check_secondary_trid'
and 'bdev_nvme_compare_namespaces' to 'bdev_nvme_check_secondary_namespace',
which better explain the aim of these functions.
The RPC bdev_nvme_attach_controller only responds with "invalid parameter"
if the path is not successfully added due to not meeting the conditions
in bdev_nvme_compare_trids. Add more logging.
Change-Id: If9ba6ec1d397d49689aa2ed6ab74fbfa96e16afa
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14251
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
A new `oclass` parameter allows specifying the DAOS DFS object class,
which is responsible for data redundancy and protection.
Examples of the object classes can be found here: https://github.com/daos-stack/daos/blob/master/src/include/daos_obj_class.h
The default value is OC_SX for maximum IOPS.
Signed-off-by: Denis Barakhtanov <denis.barahtanov@croit.io>
Change-Id: Ia48681832458c2266eb7c3bcae0df2055e59e309
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15006
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
To fix issue #2726.
Also fix the examples/nvme/perf tool.
srand() only needs to be called once to set the seed for all further
calls of rand().
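A minimal illustration of the pattern (seed_random_once is a hypothetical
helper):

#include <stdlib.h>
#include <time.h>

static void
seed_random_once(void)
{
        /* Seed the generator exactly once; every later rand() call reuses this seed. */
        srand((unsigned int)time(NULL));
}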
Change-Id: I41ab3a46593513516ad11ea7a5b8960b449e9867
Signed-off-by: GangCao <gang.cao@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15108
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
When crc32c is invoked with a multiple-entry input iov, only the last
op has crc_dst set, in order to write the final crc value into the
user-supplied location.
For every successfully completed CRC op, spdk_idxd_process_events()
writes the value into *op->crc_dst UNLESS it is NULL.
The problem is that _idxd_prep_batch_cmd(), which allocates new ops,
left op->crc_dst uninitialized.
This results in a memory corruption (use after free) in the following
scenario:
1) Op A is allocated and crc_dst is set to point to user memory X.
2) Op A is completed.
3) User memory X is freed.
4) Ops B and C are allocated (chained), C has crc_dst set.
=> B reused op A's memory and crc_dst still points to the now stale
user location from (1).
5) B is completed, spdk_idxd_process_events() writes into X as
B->crc_dst = X.
Fix: _idxd_prep_batch_cmd() should initialize crc_dst to NULL.
Signed-off-by: Anton Eidelman <anton@lightbitslabs.com>
Change-Id: I9e7d57ec43a8fbcb3750906015a5cb7291278c35
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15115
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
We were missing a check for the case when ISA-L uses the complete
output buffer on compression, to determine whether it was a perfect fit
or whether simply not enough buffer was provided.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I73532666f50cb9fbef3c42f6bfb25fc5c7de01c6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14874
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This is done due to changes in the latest Bash releases (>= 5.2), which
is currently shipped with Fedora 36.
Fixes issue #2740.
Signed-off-by: Michal Berger <michal.berger@intel.com>
Change-Id: Id6e6f00eb4ae8b7f2b110848fd55e710368fcf94
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15081
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: wanghailiang <hailiangx.e.wang@intel.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Kamil Godzwon <kamilx.godzwon@intel.com>
Do not run the test that sends a message using the hello_sock client
with KTLS enabled. CI does not have systems that run with openssl-3 and
KTLS, and it fails under Fedora 36.
This test should be brought back once the systems run openssl-3 out of
the box.
Signed-off-by: Kamil Godzwon <kamilx.godzwon@intel.com>
Change-Id: Id7c46ced745fb4564b7428202a902cb332d30b33
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14830
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Add a note about not supporting re-enablement of
static scheduler.
Change-Id: I073d904311a88bf812201ca2b1465dcbdc821792
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15056
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Prevent the user from switching back to the static scheduler after a
different scheduler has been selected. Currently we do not have a way
to save the initial thread distribution configuration, so each time the
user switches from the dynamic scheduler back to static, the SPDK
threads may end up on different reactors. This would cause
discrepancies in the performance statistics of SPDK managed by the
static scheduler.
Change-Id: Ic17a6be55eaea0e1a748f92e01f7075540403637
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15055
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>