This will allow a platform driver to allocate a buffer in case it cannot
execute the whole sequence and the destination buffer of the last
operation is a "virtual" accel buffer.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia947cf553619828a170c5d0563b4c355d7b5ead5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16377
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This will allow drivers to check if a task is using buffers from the accel
domain. This is just a helper, since the same can be achieved by
calling `spdk_memory_domain_get_first("SPDK_ACCEL_DMA_DEVICE")`, but
there's only a single accel domain and it is a bit special, so it makes
sense to have a dedicated helper function for getting it.
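A minimal sketch of the check this enables, using the lookup named above (the surrounding code is illustrative, not taken from SPDK):

#include "spdk/dma.h"

/* Returns true if the buffer's memory domain is the accel domain. */
static bool
buf_is_in_accel_domain(struct spdk_memory_domain *buf_domain)
{
    return buf_domain != NULL &&
           buf_domain == spdk_memory_domain_get_first("SPDK_ACCEL_DMA_DEVICE");
}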
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I07db7445ed9b109e66ecdbc0483a6a158a551070
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16376
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The goal of a platform driver is to execute chained accel operations in
the most efficient way possible. A driver is aware of the hardware
available on a platform and can execute several operations as a single
one. For instance, if we want to do DMA and then encrypt the data, the
driver can do both at the same time, if the hardware is capable of doing
that.
Platform drivers aren't required to support all operations. If a given
operation cannot be executed, the driver should notify accel to continue
processing a sequence, via spdk_accel_sequence_continue(), and that
operation will be processed by a module assigned to its opcode.
It is required, however, that all platform drivers support memory
domains, including the "virtual" accel domain. A method for allocating
those buffers will be added in the following patches.
This patch only adds methods to register and select platform drivers, but
doesn't change the way a sequence is executed (i.e. it doesn't use the
driver to execute it).
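To illustrate, a hedged sketch of a driver skeleton; the spdk_accel_driver field names and the register call are assumptions based on the description above, not copied from the header:

#include "spdk/accel.h"
#include "spdk/accel_module.h"

static int
mydrv_execute_sequence(struct spdk_io_channel *ch, struct spdk_accel_sequence *seq)
{
    /* Execute whatever the hardware can handle; for anything the driver
     * skips, hand the sequence back so the opcode's module processes it. */
    spdk_accel_sequence_continue(seq);
    return 0;
}

/* Field names below are assumed for illustration. */
static struct spdk_accel_driver g_mydrv = {
    .name = "mydrv",
    .execute_sequence = mydrv_execute_sequence,
};

static void
mydrv_register(void)
{
    /* The register/select methods are what this patch adds; exact name assumed. */
    spdk_accel_driver_register(&g_mydrv);
}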
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I97a0b07e264601ab3cf980735319fe8cea54d38e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16375
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
With all the previous changes, if req->data is set, then req->iovcnt
should also be greater than zero.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I29b5f45541c9dba2dd896109dd43d2b5321ec467
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16274
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Also deprecate the existing spdk_nvmf_request_data() API, which is
incompatible with iovecs.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I44df8ff30a431873a0c2f34b0cdb58df858fd7e3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16200
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Use req->iov instead of req->data in reservation handling code.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I6d79711d03f45bd5e118c6324d22decad887a788
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16199
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
If spdk_sock_flush() returns an error, there's no reason not to
disconnect the qpair, as it usually means that the socket's connection
has been terminated.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I54e9bebc38e2a24a3baf69eb18ec3c654b210318
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16644
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The behavior of spdk_sock_flush() was changed in 5433004ec to return the
number of flushed bytes and -1 with errno set to EAGAIN in case nothing
has been flushed (instead of returning 0). Therefore, we shouldn't
treat EAGAIN as an error in nvme_tcp_qpair_process_completions().
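A sketch of the caller-side handling this implies, treating -1/EAGAIN as "nothing to flush" rather than a fatal error:

#include <errno.h>
#include "spdk/sock.h"

static int
flush_socket(struct spdk_sock *sock)
{
    int rc = spdk_sock_flush(sock);

    if (rc < 0 && errno == EAGAIN) {
        /* Nothing was flushed; try again on the next poll. */
        return 0;
    }

    return rc; /* >= 0: number of flushed bytes, < 0: real error */
}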
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I5473488b5b408cdc739921046f1a0cc2c98f98de
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16643
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This never happens, as requests in this state are always immediately
transitioned to other states.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I0408ed9d8003d364bc38c86a9a50312721ab1284
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16642
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
It is possible for requests waiting for R2T ACK to receive H2C PDU
before receiving the ACK. Therefore, the following sequence:
1. Host sends a write request to the target.
2. Target sends R2T PDU to the host and sets request's state to
AWAITING_R2T_ACK.
3. Host sends H2C PDU to the target, but it doesn't reach the target
yet.
4. Host sends an abort command to abort that request. The request's state
is changed to READY_TO_COMPLETE.
5. Target receives the H2C PDU, sees that the request's state is
READY_TO_COMPLETE, which is unexpected, and terminates the
connection.
will cause the target to terminate the connection, which is obviously
incorrect.
So, to avoid that, we can treat AWAITING_R2T_ACK state in the same way
as TRANSFERRING_HOST_TO_CONTROLLER and register a poller waiting for the
state to be changed.
Fixes #2789.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Idddc627050000b74663dba397dc14d10aa0e284f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16641
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
We don't need to allocate a 2MiB-aligned memory address for
vrings; this wastes memory and may sometimes trigger dynamic
memory allocation in DPDK.
Fix issue #2846.
Change-Id: I6410d417f92623b44c375359d5e2b5ec8ed815c0
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16651
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Starting in SPDK 23.01, calling spdk_bdev_register() and
spdk_bdev_examine() from a thread other than the app thread was
deprecated. This commit removes the deprecation and as such calling
these functions from a thread other than the app thread is an error.
As a side effect of this commit, all bdev module examine_config() and
examine_disk() callbacks will be called on the app thread.
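A hedged sketch of how a caller running off the app thread can comply (obtaining the app thread via spdk_thread_get_app_thread() is an assumption here):

#include "spdk/bdev_module.h"
#include "spdk/log.h"
#include "spdk/thread.h"

static void
register_on_app_thread(void *ctx)
{
    struct spdk_bdev *bdev = ctx;

    /* spdk_bdev_register() must now run on the app thread. */
    if (spdk_bdev_register(bdev) != 0) {
        SPDK_ERRLOG("failed to register bdev %s\n", bdev->name);
    }
}

static void
queue_bdev_register(struct spdk_bdev *bdev)
{
    spdk_thread_send_msg(spdk_thread_get_app_thread(), register_on_app_thread, bdev);
}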
Change-Id: Idaae06608101e2a513d9312ac5544ffe94effe4a
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15826
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
If multiple claims exist on a bdev, examine_disk() is called for each of
them.
Change-Id: I0a6dc3e4bd1da20bbcbddf97a16e04c62c82354c
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15290
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
This commit has no functional change. It refactors an if statement into
a case statement in preparation for supporting claims v2.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I1862428c91a7066ad9079878d4c1b690a5ef631c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15289
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This implements the v2 claims API. Compared to the original v1 claims,
v2 claims:
- Support read-write-once, read-write-many, and read-only-many claims.
- Are claimed with spdk_bdev_module_claim_bdev_desc().
- Are associated with a bdev descriptor that is passed to
spdk_bdev_module_claim_bdev_desc().
- Are released upon close of the bdev descriptor used to obtain the
claim.
- Cannot be taken when a descriptor other than the one passed to
spdk_bdev_module_claim_bdev_desc() has write access.
Later commits in this series are needed to fully integrate them with the
bdev subsystem.
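A hedged usage sketch; spdk_bdev_module_claim_bdev_desc() is the function named above, while the claim-type constant is an assumption for a read-write-once claim:

#include "spdk/bdev_module.h"

static int
take_rwo_claim(struct spdk_bdev_desc *desc, struct spdk_bdev_module *module)
{
    int rc;

    /* NULL opts selects the defaults; the claim type constant is assumed. */
    rc = spdk_bdev_module_claim_bdev_desc(desc, SPDK_BDEV_CLAIM_READ_MANY_WRITE_ONE,
                                          NULL, module);
    if (rc != 0) {
        return rc;
    }

    /* The claim is released automatically when desc is closed. */
    return 0;
}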
Change-Id: I39a356f5893aa45ac346623ec9ce0ec659b38975
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15288
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
As new claim types are introduced, printing error messages about who
holds a claim will get more complicated. This refactors the error
message code into a function to prevent code duplication.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Icdc5332214f3974e75baf11ba5ea02172c4275e1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15287
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Merge function _ublk_try_close_dev() directly into
function ublk_try_close_dev().
Change-Id: Ib4df28f1d69f56d72e3eeaeb2220eb15f5c37440
Signed-off-by: Liu Xiaodong <xiaodong.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16417
Reviewed-by: <yifan.bian@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
bdev is not the proper term for describing ublk devices
Change-Id: Ic6784b3062b770dce1c77dc409ba1ee3704bdef2
Signed-off-by: Liu Xiaodong <xiaodong.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16418
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
We need to use SQPOLL on the ctrl ring for now, due to
a bug in kernels <= 6.1. This ring is used very
infrequently, so the workaround has no real effect
on performance.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ia5e65a6edb7b1b6c4c945ebda8941f98c2bcdb07
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16620
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
crc32.c uses SPDK_HAVE_ARM_CRC to use __crc32
intrinsics when available, but it isn't including
anything that would actually define SPDK_HAVE_ARM_CRC.
Previously only crc32c.c tried to define this; that
logic has all moved to crc_internal.h now, so we can just
include that header here.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic5c2e6908508e32d2f3784304c5b58d3acd4c965
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16688
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The preprocessor code will be needed in another
crc-related .c file in an upcoming patch.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Icc5e6f0d81f76093dfc973a9c1fe56271baf1ed8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16687
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
For JSON RPC, a boolean response with a false value may not be regarded as
an error. Previously, many cases were changed to use
spdk_jsonrpc_send_error_response() explicitly. Replace one of the remaining
cases in this patch.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I1b4ffe015c2b2e28d411faf7763c2baca81f66f5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16623
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
For JSON RPC, a boolean response with a false value may not be regarded as
an error. Previously, many cases were changed to use
spdk_jsonrpc_send_error_response() explicitly. Replace one of the remaining
cases in this patch.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ibfcb8c53662238b7ba8f08ecd1678953af8dc202
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16622
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The next patch will improve media management notifications, which will be
almost the same as _resize_notify() and _remove_notify().
On the other hand, there are a few differences between _resize_notify()
and _remove_notify(); _remove_notify() is the better model.
To avoid duplication, unify _resize_notify() and _remove_notify() by
adding the abstractions event_notify() and _event_notify().
Add unit tests for the complex race conditions.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ibe2478479c61459c0da0db8d28c7273f05275e0f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16577
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The main core can be different from the first core (the default behavior), as
it can be specified by an application argument. It can be useful to
determine whether a given thread is running on the main core.
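A sketch of the check this enables; spdk_env_get_current_core() is an existing call, while the main-core getter name used here is an assumption:

#include "spdk/env.h"

static bool
running_on_main_core(void)
{
    /* spdk_env_get_main_core() is assumed to be the new helper. */
    return spdk_env_get_current_core() == spdk_env_get_main_core();
}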
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: I25292a91ad677806eaf19ad68acdda0f28da6cfb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16596
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
These routines can only handle a single buffer; double check that is the
case, and fail if not.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I136482c27c73655887c49405f747b8ed073f7b69
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16198
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Use req->iov as needed, to make it easier to remove req->data later.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Ie625f374e846f7e6afd6a5d143a5174d27d419b4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16256
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Use req->iov as needed, to make it easier to remove req->data later.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I4095e3c4089b730db123705d0168cd409375cc43
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16196
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Use req->iov as needed, to make it easier to remove req->data later.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Id23c4ef8018d6a7aad42c3d5054fa9addcf16f0a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16195
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Refer to req->iov instead of req->data. As the queue connection code
already presumes a single data buffer, add some sanity checking for
this.
We also need to fix vfio_user.c as a result to correctly set ->iovcnt.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Ib1e4ef3885200ffc5194f00b4e3fe20ab1934fd7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16194
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Add a new API for incremental copying in or out of an iovec, and replace
current code to use the new API.
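As a self-contained illustration of incremental iovec copying (a cursor that remembers where the previous copy stopped), independent of the actual SPDK helper names:

#include <string.h>
#include <sys/uio.h>

struct iov_cursor {
    const struct iovec *iovs;
    int iovcnt;
    int idx;    /* current iovec */
    size_t off; /* offset within current iovec */
};

/* Copy up to len bytes out of the iovec, resuming where the last call stopped. */
static size_t
iov_cursor_copy_out(struct iov_cursor *c, void *buf, size_t len)
{
    size_t copied = 0;

    while (copied < len && c->idx < c->iovcnt) {
        const struct iovec *iov = &c->iovs[c->idx];
        size_t n = iov->iov_len - c->off;

        if (n > len - copied) {
            n = len - copied;
        }
        memcpy((char *)buf + copied, (char *)iov->iov_base + c->off, n);
        copied += n;
        c->off += n;
        if (c->off == iov->iov_len) {
            c->idx++;
            c->off = 0;
        }
    }

    return copied;
}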
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I088b784aef821310699478989e61411952066c18
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16193
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
During secondary process shutdown, when nvme device
would get detached, it would trigger env_dpdk to
send rte_eal_hotplug_remove event for the corresponding
BDF. But this isn't valid from a secondary process -
we can only attach/detach from the primary process.
Usually this would just result in a bunch of annoying
print messages as the secondary and primary process
sent rte messages between each other. But occasionally
one of the response messages from the primary process
could arrive just as the secondary process was going
through rte_eal_cleanup. The message would get
kicked to the DPDK intr thread, we would do bus_cleanup
which frees all of the PCI state, and then the message
would execute on the intr thread causing seg faults,
use-after-free or some other violation.
It's possible some kind of cleanup should be
implemented in DPDK, but for now, let's just not
induce incorrect behavior from SPDK, and don't send
the hotplug_remove messages from a secondary
process.
Fixes issue #2651.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ie431a1f8e74503e1de1be36cbb9589682d6dc94a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16553
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
We should always call the unregister callback on
the same thread from which spdk_bdev_unregister() was
originally called. So save the thread pointer and
use an spdk_thread_send_msg() to make sure it gets
called on the correct thread when the unregister
finishes.
Also add unit test that reproduces the original
issue.
Fixes issue #2883.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ib3d89368aa358bc7a8db46a8a8cb6339340469d9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16554
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Since ublk_start_disk internally runs ublk_ctrl_cmd
asynchronously, the result should be returned after the
whole ublk_start_disk process has completed.
Add a callback parameter to ublk_start_disk, and change
rpc_ublk_start_disk to send its response in the callback.
Change-Id: Icc0d9e8cb81f2b67bf99fdead423bfe8159714bc
Signed-off-by: Liu Xiaodong <xiaodong.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16500
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Use ublk_close_dev_done to release resources obtained
after UBLK_CMD_ADD_DEV is executed in the ublk device
start process.
Rename ublk_delete_dev to ublk_free_dev,
and ublk_close_dev_done to ublk_delete_dev.
Also use the correct method, io_uring_queue_exit(), to
release the uring in ublk instead of just closing the uring fd.
Change-Id: I46a1ddea162adeb6dfe8c9b45082ed2f93f4b309
Signed-off-by: Liu Xiaodong <xiaodong.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16460
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Obtained resources can be released directly in the error
handling of ublk_start_disk.
Fix the improper usage of _ublk_try_close_dev to release
resources in ublk_start_disk.
Change-Id: I291bbfb179b9df4deb8f3d346b2d660e6d17fdc1
Signed-off-by: Liu Xiaodong <xiaodong.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16459
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Add a function named ublk_ios_fini() to release memory
allocated in ublk_ios_init().
Fix the incorrect memory release on failure inside
ublk_ios_init() by using ublk_ios_fini().
Also move the memory release of ublk_io into
ublk_delete_dev() instead of ublk_close_dev_done(). This
will be used by the next patch.
Change-Id: I5c3fc31a4d7114e17fb86fc6facc8cccea27d6e7
Signed-off-by: Liu Xiaodong <xiaodong.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16442
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This will help with debugging more complex
ublk configurations - especially needed since
the ublk kernel driver is still a bit flaky.
Create a new LOG flag "ublk_io" for the existing
per-IO debug logs, and use the existing "ublk"
flag for ctrl-related debug logs.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic019c1e837b04dbf5d210c46a98cfbed732278a0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16457
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I55a5a5111aa334807e22c9c622e33c69f0405a39
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16456
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Sort the case statements by the normal order in which
the commands are submitted.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I203a70045cd48d30d1229c754ddb57e7e31460af
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16455
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Instead of having the poller check the last
ctrl_cmd_op to determine which function to call,
just have ublk_ctrl_cmd store that function pointer
in the ublk itself. This keeps all of the
cmd_op-specific logic in one place rather than
splitting it between the submit and poller
functions.
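A generic sketch of the pattern (names are illustrative, not the ublk module's own): the submit path records a completion callback on the device, so the ctrl-ring poller only has to invoke it:

#include <stdint.h>
#include <stddef.h>

struct ublk_dev;
typedef void (*ctrl_cmd_done_fn)(struct ublk_dev *dev, int32_t res);

struct ublk_dev {
    ctrl_cmd_done_fn ctrl_cb; /* set by the submit path for the current cmd_op */
};

static void
ctrl_poller_complete(struct ublk_dev *dev, int32_t cqe_res)
{
    /* All cmd_op-specific logic lives in the callback chosen at submit time. */
    if (dev->ctrl_cb != NULL) {
        dev->ctrl_cb(dev, cqe_res);
        dev->ctrl_cb = NULL;
    }
}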
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I59f333d38a0d89cc4c50082177ff818135bcad37
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16454
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Some functions such as ublk_start_kernel,
ublk_stop_kernel and _ublk_start_disk were only
calling ublk_ctrl_cmd. Just call ublk_ctrl_cmd
directly and avoid the extra function indirections.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic7d211b9b730af816f7747e031bdaef865ece433
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16453
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
We don't need this parameter - it is simpler to
just determine the argument data (if any) in the
function itself based on the opcode.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ic8df8f9ec569c10ebb565efc7268fba1d50bcdf5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16452
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Every queue should be closed based on only its own
I/O information, not the whole device.
Remove function ublk_is_ready_to_stop().
Change-Id: Iec5a260f68d6f84c0af4c8e0b5380272049b1f5d
Signed-off-by: Liu Xiaodong <xiaodong.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16416
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
count was unused in release builds. Otherwise the following
error is produced on a clang build:
19:30:56 ublk.c:974:14: error: variable 'count' set but not used
[-Werror,-Wunused-but-set-variable]
19:30:56 int rc = 0, count = 0, tag;
19:30:56 ^
19:30:56 1 error generated.
Change-Id: If7ca88de37ed6e40826e09b055355c07f67c8869
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16450
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
This is a workaround, but it is necessary to fix GitHub issue #2874.
For some unknown reason, in the nightly test with Intel E810 NICs,
when a qpair is created in synchronous mode and connection errors
are detected, the qpair is destroyed even if requests for the qpair are
still inflight. Then, nvme_rdma_process_recv_completion() causes a NULL
pointer access. To fix this NULL pointer access, change
nvme_rdma_process_recv_completion() to return immediately if rsp->rqpair
is NULL. Add a TODO comment to find the root cause and really fix the
issue.
One of the fixes for issue #2874.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ic810922f7ea1b32373b15f4e0cf7c2429659cbab
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16431
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Supporting SRQ caused two kinds of memory leaks. Fix both in this patch.
1. rqpair->rsps was leaked and null pointer access occurred
An error was detected during the nightly nvmf_delete_subsystem test.
The NVMe perf tool crashed with SIGABRT.
The reason for the crash was
nvme_rdma.c:2504:2: runtime error: member access within null pointer of type 'struct nvme_rdma_rsps'
This was caused by clearing rqpair->rsps before freeing rqpair->rsps.
rqpair->rsps should have been held until rqpair->rsps is freed. However,
when we support SRQ, rqpair->rsps was cleared when releasing rqpair->poller
by mistake. rqpair->rsps should be cleared only if SRQ is enabled because
in this case rqpair uses rsps of rqpair->poller.
2. rqpair->reqs and rsps are leaked for admin qpair at controller reset
To avoid unnecessary alloc and free for rqpair->rsps when enabling SRQ,
nvme_rdma_create_reqs() and nvme_rdma_create_rsps() were moved to
nvme_rdma_connect_established().
On the other hand, nvme_rdma_free_reqs() and nvme_rdma_free_rsps() were
called by nvme_rdma_ctrlr_delete_io_qpair().
However, at controller reset, admin qpair was just disconnected and
reconnected. In this case, nvme_rdma_create_reqs() and
nvme_rdma_create_rsps() were called again without calling
nvme_rdma_free_reqs() and nvme_rdma_free_rsps().
Hence, memory leak occurred.
To fix the memory leak, move nvme_rdma_free_reqs() and nvme_rdma_free_rsps()
from nvme_rdma_ctrlr_delete_io_qpair() to nvme_rdma_qpair_destroy().
One of the fixes for issue #2874.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I167ba908cff73d7a0be2248affce4c54f233da51
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16384
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This routine was neglecting to reset ->iovcnt, leading to havoc when the
request was re-used.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Ifd4ac47b95edd517ce5df731c682697bf51da819
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16273
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
For IC_RESP and C2H_TERM_REQ, use the regular async write path but add
an additional flush. The flush operation reports errors, so we may at
some point attempt to handle busy conditions by blocking. However, for
now this doesn't do that because previously the writev call didn't
either.
Change-Id: I4d05e19ac6dd781be7c96005549abdb52511b8c1
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15213
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If we call flush, we want to flush regardless of whether there is a poll
group.
Change-Id: I88680105d999a909f3f1fe75be9caff31a8555ff
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16420
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
When a queue has finished processing on its polling
thread, it sends a message to the app thread signaling
that it is done. Then when the app thread gets
messages from all of the queues for that device, it can
proceed with tearing the device down.
But if there are still ctrl_ring commands in progress,
it needs to wait. Previously it would register a
poller that would retry the same function if it
found commands in progress. But the problem is that
it did not differentiate the function getting called
as a direct message from the polling thread vs. retried
via the poller on the app thread. This could result
in lost messages.
So fix it to always increment the queues_closed
counter (renamed from q_deinit_num), and then
only check for ctrl ring commands in progress after
we have received all of the queue closed messages.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I0ea23ebc69acb29d5ab7e1d86ddbe74b9973e225
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16405
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
We make a few changes here to enable this:
1) Set IORING_SETUP_SQPOLL on the control ring.
Otherwise when UBLK_START_DEV is submitted it
will be processed in the context of the system
call itself, resulting in the kernel block layer
submitting reads to the new device, which blocks
the thread - meaning the system call may never
return.
2) Save the cmd_op in each sqe, along with the
spdk_ublk_dev pointer.
3) Add a poller to poll the ctrl ring. The poller
can get the ublk and cmd_op from the cqe to
know which ctrl operation completed and take
next steps as necessary.
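An abbreviated liburing sketch of points 1) and 2); the context struct and the uring_cmd setup are placeholders:

#include <liburing.h>
#include <stddef.h>

struct ublk_ctrl_ctx {
    void *ublk_dev;      /* the spdk_ublk_dev pointer saved with the sqe */
    unsigned int cmd_op; /* the cmd_op saved with the sqe */
};

static int
ctrl_ring_init(struct io_uring *ring)
{
    struct io_uring_params p = {0};

    /* SQPOLL keeps UBLK_START_DEV from being processed in the context of
     * the submitting system call. */
    p.flags = IORING_SETUP_SQPOLL;
    return io_uring_queue_init_params(8, ring, &p);
}

static int
ctrl_cmd_prep(struct io_uring *ring, struct ublk_ctrl_ctx *ctx)
{
    struct io_uring_sqe *sqe = io_uring_get_sqe(ring);

    if (sqe == NULL) {
        return -1;
    }
    /* ... fill in the uring_cmd fields for ctx->cmd_op here ... */
    io_uring_sqe_set_data(sqe, ctx); /* the poller recovers ctx from the cqe */
    return 0;
}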
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ia0e51a4ff74781c85967c54969fbfc67a0d3f115
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16404
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
There is a way to pause/resume SPDK pollers; however, there is no way
to achieve that through a public API for a given target which has
a hook behaving similarly to pollers. Exposing such functionality can
be used for pausing and restoring target pollers during
reset, e.g. new commands should not be fetched to ensure
that all internal resources can be cleared/reinitialized safely.
Pausing the target poller during the reset will ensure that, without
the need to destroy the transport or add conditional statements in the IO path.
A similar use case might be hitless upgrade. Depending on the implementation,
there might be a need to prevent new command submission while
secondary processes are being switched to upgraded versions.
Pausing target pollers should be useful in this case too.
Signed-off-by: Kamuda Szymon <szymon.kamuda@intel.com>
Change-Id: I419816552c710c43e02197ebcc20a967fb23b3bd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15911
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
To allow SO_MINOR updates on LTS for the whole year it is supported,
the major version for all components needs to be increased.
This is to prevent a scenario where two versions exist with matching
version numbers, but conflicting ABI.
Ex. the next SPDK release adds an API call increasing the minor version,
while LTS needs just a subset of those additions.
Increasing the major SO version after LTS allows future releases
to update versions as needed, while still allowing LTS to increase the
minor version separately.
Disabled the test for increasing the SO version without an ABI change, as
that is the goal of this patch. This check shall be removed with the SPDK 23.05
release.
This looks like it was left over from the prior LTS; to avoid that,
make sure it is only skipped when running against v23.01.x as the latest
release.
This patch:
- increases SO_VER by 1 for all components
- resets SO_MINOR to 0 for all components
- removes suppressions for ABI tests
Short reference to how the versions were changed:
MAX=$(git grep "SO_VER := " | cut -d" " -f 3 | sort -ubnr | head -1)
for((i=$MAX;i>0;i-=1)); do find . -name "Makefile" -exec \
sed -i -e "s/SO_VER := $i\$/SO_VER := $(($i+1))/g" {} +; done
find . -name "Makefile" -exec \
sed -i -e "s/SO_MINOR := .*/SO_MINOR := 0/g" {} +
Change-Id: I3e5681802c0a5ac6d7d652a18896997cd07cc8bf
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16419
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
With per channel stats called on thread A,
spdk_bdev_for_each_channel calls
spdk_for_each_channel which immediately sends
a message to thread B.
If thread B has no workload, it may execute the
message relatively fast trying to write stats to
json_write_ctx.
As a result, we may have two scenarios:
1. json_write_ctx is still not initialized on
thread A, so thread B dereferences a NULL pointer.
2. json_write_ctx is initialized, but thread A writes the
response header while thread B writes stats - this leads
to a corrupted json response.
To fix this race condition, initialize json_write_ctx
before iterating bdevs/channels.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Reported-by: Or Gerlitz <ogerlitz@nvidia.com>
Change-Id: I5dae37f1f527437528fc8a8e9c6066f69687dec9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16366
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This is a preparation for the next patch to fix the race condition of
per channel mode.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I9eaefc527ccf82011af39b8261f5b3cc12983bda
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16365
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
For now, we will still execute the two parts
consecutively and synchronously. Follow-up patches
will do the second part asynchronously, after the
ublk cmds associated with the first part have
completed.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I814d885a8a113c3367207d11ae09dd536eb63460
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16403
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Upcoming patches will submit ctrl cmds and wait for
them to complete asynchronously. So we will want to
first send the ADD_DEV and SET_PARAMS commands, wait
for them to complete, and only then open the ublk
device file.
So to prepare for that sequencing, move the open()
from _ublk_start_disk to ublk_start_disk.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I9fdf19ce9b51bd552faa917e1e842f9ddfb111a1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16402
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
We will put this pointer into the sqe. It will
be useful when we start doing async completions
on the ctrl ring.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I3bdb728eb1d3ed66a8ecd05df208e4f36e3fbe0b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16401
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
When the blobstore is shut down unexpectedly, the super block should
already be marked as dirty. Only when a proper blobstore unload
happens is the super block marked as clean.
Super block is not marked as dirty on blobstore load,
but on first action that starts to modify the metadata.
At this time it only happens through blob persist,
which is fine for creation/deletion of blobs or
their modification (resize/xattr).
It works for cluster allocation with extent_table disabled,
and when extent page needs to be allocated.
Yet it fails for cases when no new extent page is required.
This results in not marking the blobstore as dirty and
then failing when loading a particular blob due to a mismatch
between used_clusters and the contents of the extent page.
To fix that, the blobstore is now marked dirty on a very first
extent page update since blobstore load.
Fixes #2830
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Ied37ecf90d46e1bc51b22c323dce278a0fa88f72
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16179
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Currently we do not have a way to dump opts
for virtio_blk transports. This patch introduces
the necessary changes to let us save and load those
via JSON config.
Change-Id: I7ee4f31062f3d4a264f322e66a67ba3d075f1d75
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15248
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5b8ce0a4571872e6755c5fa0abbfa1a981dd411f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16400
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This patch makes marking the blobstore as dirty separate from
the persist process, adding a context structure and a single new
callback invoked once completed.
It is done as part of a refactor to allow marking the blobstore
dirty when writing out an extent page - see #2830.
Change-Id: Ie2e9cc32860697e0e747939842ab04f48fbff49b
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16328
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Patch https://review.spdk.io/gerrit/c/spdk/spdk/+/16328,
introduced a refactor for persist path. No functional change
should occur with it, but the code layout after compilation
(path length) might have changed.
It resulted in unrelated scan-build failure:
https://ci.spdk.io/results/autotest-per-patch/builds/95746/archive/scanbuild-vg-autotest/scan-build/report-d08e76.html#EndPath
Tried to replicate the issue without the above patch,
by increasing maxloop or -analyze-headers in scan-build.
Didn't result in any new failures in blobstore.
This seemed like a false positive, so it was verified:
https://review.spdk.io/gerrit/c/spdk/spdk/+/16333/
With no other options, an assert is added only to the
function where the false positive occurred.
scan-build log for posterity:
blobstore.c:1062:58: warning: Division by zero [core.DivideZero]
desc_extent_rle->extents[extent_idx].cluster_idx = lba /
lba_per_cluster;
~~~~^~~~~~~~~~~~~~~~~
blobstore.c:1079:58: warning: Division by zero [core.DivideZero]
desc_extent_rle->extents[extent_idx].cluster_idx = lba /
lba_per_cluster;
~~~~^~~~~~~~~~~~~~~~~
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I9dd729fa13ce1c9bbcb91e4326658e2b4e326e6d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16335
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The iobuf buffers are only used in accel when executing chained
operations. Since none of the components in SPDK are using chaining
yet, there's little point in having per-thread iobuf caches, as they
only reduce the number of available buffers in other libraries.
This change will be reverted once bdev layer and bdev modules are
updated to support chaining.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ibad19ea92f2218a8dec01e802a736cfdd357dfc6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16398
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This allows getting the start address and length of each
memory chunk in order to create app-specific
resources.
Since we don't want to expose rte structure in the
callback, we have to remap rte data types to SPDK.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I3865c4cfe532c6a99a5a3c6c983ded8b9a338de1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16324
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This patch removes hardcoded compressdev code from the
vbdev module and instead uses the accel_fw. The port required
a few changes based on how things are plumbed and accessed,
nothing that isn't be too obscure. CI tests were updated to
run ISAL accel_fw module as well as DPDK compressdev with QAT.
Unit tests for the new module will follow in a separate patch.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I769cbc888658fb846d89f6f0bfeeb1a2a820767e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13610
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
ublk can export a backend device as a ublk block device (/dev/ublkb*).
An RPC method is used to add a ublk device, and it should be done
after creating the ublk target. Correspondingly, ublk_del_dev is
used to delete the specified ublk device.
Signed-off-by: Yifan Bian <yifan.bian@intel.com>
Co-authored-by: Xiaodong Liu <xiaodong.liu@intel.com>
Change-Id: I3a4ba8d8dc5f5ad241511ccbc9d3336b582a6dc5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15976
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Add RPC methods for ublk target creation and destruction. Before
adding a ublk device, the ublk target needs to be initialized to create ublk
threads; correspondingly, an RPC method to destroy the ublk target is
also added. It will deinitialize the ublk target and release all ublk
devices.
Signed-off-by: Yifan Bian <yifan.bian@intel.com>
Co-authored-by: Xiaodong Liu <xiaodong.liu@intel.com>
Change-Id: I5db0cf9cc68745440df999169aa1c61111010e02
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15962
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
The ublk backend can support the kernel ublk driver. Specify a
configuration parameter to start it up.
Signed-off-by: Yifan Bian <yifan.bian@intel.com>
Co-authored-by: Xiaodong Liu <xiaodong.liu@intel.com>
Change-Id: I55e7d757e04315b25e9bfab5fdcbb6621be3e29e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15680
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The mlx5 accel module supports crypto operations.
The data buffer is split into `block_size` chunks and each
chunk is encrypted individually.
The mlx5 library contains some utility functions that will
later be used by other libraries; this lib will be
extended later.
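An illustrative chunking loop for the per-block behavior described above (encrypt_block() is a placeholder, not the mlx5 API):

#include <stddef.h>
#include <stdint.h>

static void
crypt_buffer(uint8_t *buf, size_t len, size_t block_size,
             void (*encrypt_block)(uint8_t *block, size_t block_len, uint64_t block_idx))
{
    uint64_t idx = 0;
    size_t off;

    for (off = 0; off < len; off += block_size, idx++) {
        size_t n = (len - off < block_size) ? len - off : block_size;

        /* The block index typically feeds the tweak/IV for XTS-style ciphers. */
        encrypt_block(buf + off, n, idx);
    }
}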
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: Iacdd8caaade477277d5a95cfd53e9910e280a73b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15420
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
All DPDK-related code is removed, and handling of the
RESET command was slightly updated.
Handling of -ENOMEM was updated for cases when the
accel API returns -ENOMEM.
Crypto tests in blockdev.sh were extended with more
crypto_bdevs to verify NOMEM cases that failed
with the original vbdev_crypto implementation.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: If1feba2449bee852c6c4daca4b3406414db6fded
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14860
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
gcc-12 is really tricky. It detected that in
ftl_nv_cache_load_state(), when we do an FTL_NOTICELOG,
the dev and nv_cache values are associated with each
other by the SPDK_CONTAINEROF() operation.
So then in FTL_LOG_COMMON, it checks if dev is NULL. If it
is, it doesn't print the dev->conf.name, but still prints
the varargs which include nv_cache members. But if dev is
NULL then these nv_cache members wouldn't be valid either,
and that's what gcc-12 is complaining about, in a very
unclear way.
So now we just have FTL_LOG_COMMON contain a single line, with
a ternary operator to print either dev->conf.name or "N/A"
depending on whether dev is NULL or not. I suspect this
fixes it because we've replaced the if statement with
a ternary operator that is independent from the VA_ARGS.
Fixes issue #2829 (partially).
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ia56e2c7fb7966e7a5ceff35b36b0346b556ce7e7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16342
Reviewed-by: <sebastian.brzezinka@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
We had it for compress but simply didn't think of a use case for
decompress. During the development of the compressdev accel_fw
module, it was discovered that compressdev does indeed provide the
uncompressed length on completion of decompress, and the reduce library
uses it. So, add it here.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I2f6a8bbbe3ef8ebe0b50d6434845f405afa7d37d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16035
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This is the port of the vbdev compress logic into the accel
framework. It includes just one enhancement, to only fill each
mbuf in either src or dst array with max "window size" param to
avoid QAT errors. Note that DPDK ISAL PMD was not ported as we
have native ISAL compression in accel now.
Note: ISAL w/DPDK is still built with this patch; it can't be
removed until the vbdev module moves to the accel fw, as it still
depends on the DPDK ISAL PMD.
Follow-on patches will include an additional C API for PMD selection;
this patch just gets equivalent functionality going. Upcoming
patches will also convert the vbdev compress module to use the
accel framework instead of talking directly to compressdev.
More patches will also address comments on vbdev common code
that, if addressed here, would make the review challenging.
This patch also fixes a bug in the ported code that needs to
be fixed here to pass CI. Capability discovery was incorrect
causing all devices to appear to not support chained mbufs,
with the mbuf splitting code this is important to get right.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I7f526404819b145ef26e40877122ba80a02fcf51
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15178
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This patch adds a virtio_blk_get_transports RPC matching the
virtio_blk_create_transport RPC, allowing for querying
existing virtio_blk transports and displaying their options.
Signed-off-by: Krystyna Szybalska <krystyna.szybalska@gmail.com>
Change-Id: I0ec49c5f2ad11962feb5087dd376407ad125c349
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16303
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Move parallel arrays of response buffers and response SGLs from
qpair to a new responses object.
Use options to create the responses object.
Use spdk_zmalloc() to allocate the responses object because qpair
is also allocated by spdk_zmalloc().
The purpose is to share the code and the data structure between the
cases where SRQ is enabled and disabled.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Signed-off-by: Denis Nagorny <denisn@nvidia.com>
Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com>
Change-Id: Ia23fe7328ae1f2f551fed5863fd1414f8567d602
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14172
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Accel modules can now implement the get_memory_domains() callback to
indicate the types of memory domains they support. If unimplemented, a
module is assumed not to support memory domains and accel will take care
of pulling/pushing data to local buffers prior to passing a task to be
executed by a module.
For now, similarly to the bdev layer, we only check if a module supports
memory domains, but we don't verify the types of the domains. That
could be easily added in the future, if necessary.
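For illustration, a hedged sketch of a module advertising its domains; the callback shape mirrors the bdev-layer pattern and should be read as an assumption, not a copy of the header:

#include "spdk/accel_module.h"
#include "spdk/dma.h"

static struct spdk_memory_domain *g_mymod_domain;

/* Set as the module's get_memory_domains callback; fills the caller's array
 * and returns the number of domains the module supports. */
static int
mymod_get_memory_domains(struct spdk_memory_domain **domains, int array_size)
{
    if (domains != NULL && array_size > 0) {
        domains[0] = g_mymod_domain;
    }

    return 1;
}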
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia513f4f31124672b705b6dd33a2624f0ae94d3ce
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16027
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
It allows accel to store private data for each opcode/module without
having to change externally visible structures or allocate anything when
a module is registered. Since a single module can service multiple
opcodes at the same time, some of these values might be duplicated.
However, there are only a handful of opcodes, so it shouldn't be a
problem.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I609a6ccc2d241cb9b8273cc2c6d1933d2bc25e0e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16026
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
If the destination buffer is in remote memory domain, we'll now push the
temporary bounce buffer to that buffer after a task is executed.
This means that users can now build and execute sequence of operations
using buffers described by memory domains. For now, it's assumed that
none of the accel modules support memory domains, so the code in the
generic accel layer will always allocate temporary bounce buffers and
pull/push the data before handing a task to a module.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia6edf266fe174eee4d28df0ca570c4d825436e60
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15948
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
If the source buffer is from a remote memory domain, we will now pull it
to the temporary bounce buffer before a task is executed.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I476684a4359410c69dd69a2b425b9e61d4c55a7e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15947
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The first task on a sequence's task queue is the one that we're
currently executing. By moving the place where we remove it from that
queue and place it on the completed queue to process_sequence(), we'll
be able to perform some extra steps (e.g. memory domain push) after a
task has been completed.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia98f491eb52be0156954372461e05c198c070e3b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15946
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Processing a sequence consists of multiple steps and we call
accel_process_sequence() multiple times, so we need to check various
things to verify if some of those steps have already been done. Having
a state machine allows us to reduce the number of such checks and makes
it easier to add additional steps.
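A generic illustration of the state-machine shape (state names are invented for the example, not taken from accel.c):

enum seq_state {
    SEQ_STATE_PULL_DATA,
    SEQ_STATE_EXEC_TASK,
    SEQ_STATE_PUSH_DATA,
    SEQ_STATE_DONE,
};

static void
process_sequence(enum seq_state *state)
{
    /* Each call advances the sequence by one step; asynchronous steps return
     * here with the next state once they complete. */
    switch (*state) {
    case SEQ_STATE_PULL_DATA:
        /* pull remote-domain buffers into local bounce buffers */
        *state = SEQ_STATE_EXEC_TASK;
        break;
    case SEQ_STATE_EXEC_TASK:
        /* hand the task to its module */
        *state = SEQ_STATE_PUSH_DATA;
        break;
    case SEQ_STATE_PUSH_DATA:
        /* push bounce buffers back to remote-domain destinations */
        *state = SEQ_STATE_DONE;
        break;
    default:
        break;
    }
}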
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I254819fee0893866de395193041b319cbad228ec
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15945
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If a task has buffers in a remote memory domains, we'll now allocate a
buffer from local memory and replace the original buffer with it. This
is the first step in supporting buffers in remote memory domains. To
fully support it, we'll also need to pull/push the data before/after
executing a task.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I3c86bbb6dbe6a31cb2cae8ce7d73e272ddc2734c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15944
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
All operations use iovecs to describe their buffers; only
encrypt/decrypt additionally used nbytes to store the total size of the
src buffer. We don't really need this value in the generic accel code,
so we can let modules calculate it, if necessary. That way, we won't
waste cycles calculating it if a module doesn't use it, and it makes the
code a bit simpler, as we won't have to deal with the fact that nbytes is
only valid for certain operations.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I29252be34a9af9fd40f4c7fec9d0a0c1139c562d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16306
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
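For example, a module that still needs the total size can derive it from the
iovec (a trivial sketch, not necessarily the helper SPDK modules actually use):

    #include <sys/uio.h>
    #include <stdint.h>

    /* Sum of all iovec element lengths - what nbytes used to hold. */
    static uint64_t
    iov_total_length(const struct iovec *iovs, uint32_t iovcnt)
    {
            uint64_t len = 0;
            uint32_t i;

            for (i = 0; i < iovcnt; i++) {
                    len += iovs[i].iov_len;
            }

            return len;
    }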
Also, since this was the last operation using dst and nbytes, these
fields were removed from spdk_accel_task.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I0d6b090e101c016d1bdcbe7a3bee7d6f691f1c9e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15943
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Also, since this was the last operation using src, remove this field
from spdk_accel_task.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I55fd98697ef4f92a13dd0563b4adf9ccb0af171b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15942
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
When initializing SPDK, the DPDK EAL args that were used are printed.
Unfortunately, a timestamp is added before each argument.
Rather than use SPDK_PRINTF for each argument, build
up the whole line to be printed and then print it in one go.
Please see before:
[2022-12-20 13:52:05.647131] [ DPDK EAL parameters: [2022-12-20
13:52:05.647145] spdk_tgt [2022-12-20 13:52:05.647159] --no-shconf
[2022-12-20 13:52:05.647170] -c 0x1 [2022-12-20 13:52:05.647185]
--huge-unlink [2022-12-20 13:52:05.647199] --log-level=lib.eal:6
[2022-12-20 13:52:05.647221] --log-level=lib.cryptodev:5 [2022-12-20
13:52:05.647232] --log-level=user1:6 [2022-12-20 13:52:05.647251]
--iova-mode=pa [2022-12-20 13:52:05.647261] --base-virtaddr=0x200000000000
[2022-12-20 13:52:05.647275] --match-allocations [2022-12-20
13:52:05.647286] --file-prefix=spdk_pid1352179 [2022-12-20 13:52:05.647307]
]
And after:
[2022-12-20 13:52:29.038353] [ DPDK EAL parameters: spdk_tgt --no-shconf -c
0x1 --huge-unlink --log-level=lib.eal:6 --log-level=lib.cryptodev:5
--log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000
--match-allocations --file-prefix=spdk_pid1358716 ]
Change-Id: I4c6c25818ae99bad942bf61ab590f971d339ffc6
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16031
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
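The gist of the fix, sketched in plain C (hypothetical variable names; the
actual code assembles the line from the EAL argument array it already has):

    #include <stdio.h>
    #include <string.h>

    /* Build the whole "[ DPDK EAL parameters: ... ]" line first, then print it
     * once, so the logging timestamp is only prepended a single time. */
    static void
    print_eal_args(char **args, int argc)
    {
            char line[4096] = "[ DPDK EAL parameters: ";
            int i;

            for (i = 0; i < argc; i++) {
                    strncat(line, args[i], sizeof(line) - strlen(line) - 1);
                    strncat(line, " ", sizeof(line) - strlen(line) - 1);
            }
            strncat(line, "]", sizeof(line) - strlen(line) - 1);

            printf("%s\n", line);
    }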
Also, make it possible to remove copy operations following a fill
operation if they're using the same buffers.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I7da195ce80650a02c5db99d9400ee692f797b1f8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15940
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Also, replace src2 with an iovec + iovcnt and rename it to s2 to
keep the naming consistent with the source buffer (s).
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I44787128377addd514818ec5aaec084b1a31f0c3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15939
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Also, replace dst2 with an iovec + iovcnt and rename it to d2 to
keep the naming consistent with the destination buffer (d).
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ib394c127eeb5890451535ff485f96f7edd2897a4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15938
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This patch is the first in a series of patches aimed at making all accel
operations describe their buffers with iovecs. The intention is to make
it easier to handle tasks in a generic way.
It doesn't mean that we change the API - all function signatures are
preserved. If a function doesn't use iovecs, we use the aux_iovs array.
However, this does mean that each accel module that provides support for
a given operation will need to be adjusted to use iovecs.
Additionally, update the unit test checking copy elision to verify the
buffers of the copy operation that is left.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I9e6d8d1be3b8b9706cb4a6222dad30e8c373d8fb
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15937
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Users can now specify buffers allocated through `spdk_accel_get_buf()`
when appending operations to a sequence. When an operation in a
sequence is executed, we check whether it uses buffers from the accel domain,
allocate data buffers, and update all operations within the sequence that
were also using those buffers.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I430206158f6a4289e15f04ddb18f0d1a2137f0b4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15748
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
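A hedged usage sketch of the flow described above. The
spdk_accel_get_buf()/spdk_accel_append_fill() signatures below are assumptions
based on this series; consult include/spdk/accel.h for the exact parameters:

    #include "spdk/accel.h"

    /* Sketch: allocate an accel-domain buffer and use it as the destination of
     * a fill appended to a sequence.  No data buffer is allocated until the
     * sequence actually needs it. */
    static int
    append_fill_to_accel_buf(struct spdk_accel_sequence **seq,
                             struct spdk_io_channel *ch, uint64_t len)
    {
            struct spdk_memory_domain *domain;
            void *buf, *domain_ctx;
            int rc;

            rc = spdk_accel_get_buf(ch, len, &buf, &domain, &domain_ctx);
            if (rc != 0) {
                    return rc;
            }

            return spdk_accel_append_fill(seq, ch, buf, len, domain, domain_ctx,
                                          0xa5 /* pattern */, 0 /* flags */,
                                          NULL, NULL);
    }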
It's common to set up an iovec around a single buffer; add a helper for
this.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: Ic4183e29d78549ec102045c6af0b5ff448cb5c59
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16192
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
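The helper amounts to something like this (a sketch of the idea; the actual
name, location, and signature of the SPDK helper may differ):

    #include <sys/uio.h>

    /* Wrap a single contiguous buffer in a one-element iovec. */
    static inline void
    iov_one(struct iovec *iov, void *buf, size_t len)
    {
            iov->iov_base = buf;
            iov->iov_len = len;
    }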
And use it in a couple of places.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I4b86cef0e9489c1435c0206dd6c5cda4ffe4d33a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16191
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
When a buffer is retrieved, there is no need to reserve space
for the tailq header.
Signed-off-by: MengjinWu <mengjin.wu@intel.com>
Change-Id: I0aa2d77739fbb86a6e2df1c00a772aff1cb7c6e4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16181
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This VTune integration was added many years ago, but
hasn't been tested and to my knowledge is not being
used by anyone. The statistics it enables are very
limited, specific to the bdev nvme module with no
insight into the rest of an SPDK application.
So deprecate this support now; we will remove it
immediately after the v23.01 release.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I5552d85084c350e9d0b2570946801acd65a89d64
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16294
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
In order to connect to a zoned SPDK NVMe-oF target the ZNS specific
identify functions must be implemented and the supported ZNS opcodes
must be set accordingly.
Enable the zone management send and receive opcodes within the
`g_cmds_and_effect_log_page`. If the backing zoned bdev supports the
zone append command the `nvmf_get_cmds_and_effects_log_page` function
will respect that in the returned data structure.
Signed-off-by: Dennis Maisenbacher <dennis.maisenbacher@wdc.com>
Change-Id: Id9dd22a8696aa28177cc52e1f3587e10194de910
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16045
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
In order to connect to a zoned SPDK NVMe-oF target the ZNS specific
identify functions must be implemented and the supported ZNS opcodes
must be set accordingly.
Implement ZNS specific identify functions to return the 'I/O Command
Set specific Identify Namespace data structure (CNS 05h)'
(`spdk_nvmf_ns_identify_iocs_specific`) and 'I/O Command Set specific
Identify Controller data structure (CNS 06h)'
(`spdk_nvmf_ctrlr_identify_iocs_specific`).
Those functions return a null filled data structure for any I/O Command
Set other than ZNS.
Signed-off-by: Dennis Maisenbacher <dennis.maisenbacher@wdc.com>
Change-Id: I6b9529ce0a86400afb01d4e09cbdb3e5c3a68514
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16044
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The layout's regions need to be aligned to the write unit size.
Calculate the exact number of bands needed for metadata, rather than
assuming 1 band is enough.
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
Signed-off-by: Mariusz Barczak <mariusz.barczak@intel.com>
Change-Id: Ib304ea65a35d8b34518efda02379072355c0cd10
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16218
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
Don't let the invalidity value continuously drop in degenerate scenarios.
Previously this could have happened if band_cmp picked a band based on the other value when the invalidity values were similar.
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
Signed-off-by: Mariusz Barczak <mariusz.barczak@intel.com>
Change-Id: I33166501e832cd7f359b3acef1e614cf9b1288d5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16215
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
If the host disconnects the connection while fabric commands are offloaded to
DSA, there will be use-after-free problems.
For now, disable the offload of fabric commands.
Fixes issue #2828.
Signed-off-by: MengjinWu <mengjin.wu@intel.com>
Change-Id: I669b01728e1ad275b7b121d47141bdf3fe5f7d9f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15992
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Always add -larchive to DPDK static link args if
libarchive is available. This is less fragile than the
previous mechanism of trying to remove
RTE_HAS_LIBARCHIVE to keep DPDK from trying to use it.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ib26fc204927d8967b98d416373fc91446169d5af
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15951
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Refactoring to avoid code duplication in the following commits.
Signed-off-by: Dennis Maisenbacher <dennis.maisenbacher@wdc.com>
Change-Id: I5a597a02c810cfa1fad6dc397d012cf6a3f189ca
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16043
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Use uint32_t to avoid overflow when left-shifting by 16 places.
This patch fixes issue #2858.
Signed-off-by: Sebastian Brzezinka <sebastian.brzezinka@intel.com>
Change-Id: I07f4328674ae7bd7525792ca1e424e85a932c87f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16180
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
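The overflow pattern in question, in generic form (an illustration of the class
of bug, not the exact code from the patch):

    #include <stdint.h>

    /* A 16-bit operand is promoted to (signed) int before the shift, so if
     * bit 15 is set, shifting left by 16 overflows into the sign bit
     * (undefined behavior).  Casting to uint32_t first keeps the arithmetic
     * in an unsigned 32-bit type. */
    static inline uint32_t
    combine_halves(uint16_t high, uint16_t low)
    {
            return ((uint32_t)high << 16) | low;
    }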
spdk_bdev_module_init() must only be called if the module sets
async_init to true. This patch fixes the doc string to match the
implementation and adds an assert() to catch API usage errors early.
Change-Id: I677345de028c8f7597ecf81ff9b9b855867bbf01
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16133
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
As an lvstore is being loaded, blobs are iterated with
spdk_bs_iter_next(), which opens a blob, calls lvol_next_lvol(), then
closes the blob. Since the blob struct that is passed to
load_next_lvol() is only transiently opened, it should not be stored
with the lvol.
A short while later, the lvstore opens each lvol by calling
spdk_lvol_open() from _vbdev_lvs_examine_cb(). At that time, the lvstore
holds the open reference and only then is it safe to keep a reference to
the blob.
Change-Id: I309227b23b59058a58167a9dac35af5fabc29d98
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14965
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
spdk_nvme_cpl_get_status_string() returns a string which contains
upper-case letters, spaces, and hyphens. To use the returned string for JSON RPC, we
have to convert it to a string which contains only lowercase letters and
underscores.
For our convenience, add a new API spdk_strcpy_replace() to replace
all occurrences of the search string with the replacement string.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I3ca9774d0bfb2d0bb7bd7412bc671e6f69104b7d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16054
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
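The replace-all semantics can be sketched with a standalone function (this is
only an illustration of the behavior, not the actual SPDK API signature):

    #include <stdio.h>
    #include <string.h>

    /* Replace every occurrence of 'search' in 'src' with 'replace', writing
     * the result to 'dst' (of size 'size').  Purely illustrative. */
    static int
    replace_all(char *dst, size_t size, const char *src,
                const char *search, const char *replace)
    {
            size_t out = 0, slen = strlen(search), rlen = strlen(replace);

            while (*src != '\0') {
                    if (slen > 0 && strncmp(src, search, slen) == 0) {
                            if (out + rlen >= size) {
                                    return -1;
                            }
                            memcpy(dst + out, replace, rlen);
                            out += rlen;
                            src += slen;
                    } else {
                            if (out + 1 >= size) {
                                    return -1;
                            }
                            dst[out++] = *src++;
                    }
            }
            dst[out] = '\0';
            return 0;
    }

    /* e.g. replacing " " and "-" with "_" and lowercasing afterwards turns a
     * status string such as "INVALID OPCODE" into "invalid_opcode". */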
Used (void) on cmd and removed the increment to fix
a clang 15 -Werror failure.
vmd.c:368:11: error: variable 'cmd' set but not used [-Werror,-Wunused-but-set-variable]
uint16_t cmd = dev->header->zero.command;
^
1 error generated.
Signed-off-by: Kamil Godzwon <kamilx.godzwon@intel.com>
Change-Id: I4e383ac41b46d13df0210bf90f11f6130290f243
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16127
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
The patch below started checking for a development version of DPDK
using rte_version_release():
(32e6ffb) env_dpdk: add support for DPDK main branch for 23.03
rte_version_release() is present starting with DPDK 21.11,
so it broke earlier versions like DPDK 20.11 packaged on Fedora 35.
SPDK supports only the last two DPDK LTS versions, which do not include DPDK 20.11.
Yet there is no need to break older versions unnecessarily.
Another aspect is that rte_version_release() is marked as experimental,
so it could change in the future. Only using the stable rte_version()
helps with forward compatibility too.
Change-Id: Id17d643a12dcfc03c2d4688d1bc5030dc339f428
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reported-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16017
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Michal Berger <michal.berger@intel.com>
Added spdk_nvme_qpair_get_num_outstanding_reqs to get the number
of outstanding reqs for a specific qpair.
Change-Id: I55d75a7363ac63bd26db76594e70e8b17b3e5830
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15916
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
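A minimal usage sketch of the new getter (assuming it simply returns the count;
see include/spdk/nvme.h for the exact prototype):

    #include "spdk/nvme.h"

    /* Sketch: pick the path whose qpair currently has the fewest outstanding
     * requests - the kind of policy this counter is meant to enable. */
    static struct spdk_nvme_qpair *
    least_busy(struct spdk_nvme_qpair *a, struct spdk_nvme_qpair *b)
    {
            return spdk_nvme_qpair_get_num_outstanding_reqs(a) <=
                   spdk_nvme_qpair_get_num_outstanding_reqs(b) ? a : b;
    }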
Added num_outstanding_reqs in struct spdk_nvme_qpair to record the number of
outstanding reqs in each qpair. This can be used by multipath to select an I/O
path.
Increment num_outstanding_reqs when a req is removed from the free_req queue and
decrement it when the req is put back in the free_req queue.
Change-Id: I31148fc7d0a9a85bec4c56d1f6e3047b021c2f48
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15875
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
bdev_crypto uses memset() to zero secrets passed
by the user (cleanup/error path), which is not safe -
the compiler may detect that the buffer being zeroed
is not accessed any more and may "optimize" (drop)
the zeroing.
The C11 standard introduces memset_s, which guarantees to
change the buffer content, but this function is optional and
gcc may not support it. As an alternative, add a default
implementation that is not optimal from a performance point of view.
Add a unit test to math_ut.c to avoid creating a new .c file
for one simple test.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I11c7d15610df02e4a3761a88c85f6f8c54fb4b0a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16038
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
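The fallback approach can be sketched as a volatile-pointer memset (a generic
illustration of the technique; the actual SPDK helper name and implementation
may differ):

    #include <stddef.h>

    /* Zero a buffer through a volatile pointer so the compiler cannot prove
     * the stores are dead and elide them.  Slower than memset(), but safe for
     * wiping secrets. */
    static void
    secure_zero(void *buf, size_t len)
    {
            volatile unsigned char *p = buf;

            while (len--) {
                    *p++ = 0;
            }
    }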
The FIELD_OK macro in blob_open_opts_copy() should consider offsets in
struct spdk_blob_open_opts, not struct spdk_blob_opts.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I62e22acbe7dfb994453a379c92f78b7e9bc7fc13
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14962
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
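For reference, the FIELD_OK pattern checks whether a field lies within the
caller-provided opts_size; conceptually it looks like the sketch below (a
hypothetical macro illustrating the pattern, not the exact one in blobstore.c,
and it assumes spdk_blob_open_opts has an opts_size member):

    #include <stddef.h>
    #include "spdk/blob.h"

    /* A field is usable only if the caller's opts struct is large enough to
     * contain it - and the struct being measured must be spdk_blob_open_opts,
     * which is what the referenced fix corrects. */
    #define OPEN_OPTS_FIELD_OK(opts, field) \
            (offsetof(struct spdk_blob_open_opts, field) + \
             sizeof((opts)->field) <= (opts)->opts_size)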
Blob IDs are sequentially assigned starting at 0x100000000. When
debugging with a small number of blob IDs, it is much more intuitive to
see blob ID 0x100000000 rather than blob ID 4294967296.
In commit 76a577b082 a similar change was
made to blobcli.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ic5321a83b57cf8c9f8df48cd424a926b6fec4ba8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14963
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
It's the same as spdk_iovcpy(), but the dst/src buffers can overlap.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I6daa0a846d7d1deac2c01d1a1be09171fa8bf796
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15747
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The data buffers backing these accel buffers aren't allocated
immediately, but only when they're necessary to execute a given
operation. It allows users to append operations to a sequence, without
actually reserving large space for the data. That way, if some of these
buffers aren't needed to execute a sequence, they won't be allocated.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ieeea8a011b40c7f2f33e9a6f03fe34264e9316f3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15746
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
It will be used for allocating buffers from the accel domain and for
allocating bounce buffers to push/pull the data to/from memory domains for
modules that don't support memory domains.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Idbe4d2129d0aff87d9e517214e9f81e8470c5088
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15745
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This domain is meant to represent data being transformed by the accel
engine. Users will be able to allocate buffers from that memory domain
and use them when appending operations to an accel sequence.
Since these buffers are only meant to be used as placeholders for actual
buffers, none of the push/pull/translate callbacks are implemented. To
access the data after it has been transformed by accel, users should make
sure that the final command's destination buffer isn't allocated from the
accel memory domain.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia031c7b205e98792d0a93f01513101b86afa9faa
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15744
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reversing a sequence means that the order of its operations is reversed,
i.e. the first operation becomes last and vice versa. It's especially
useful in read paths, as it makes it possible to build the sequence
during submission, then, once the data is read from storage, reverse the
sequence and execute it.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I93d617c1e6d251f8c59b94c50dc4300e51908096
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15636
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
An operation sequence should always be treated as a whole, meaning that
users cannot rely on the contents of any intermediate buffers and should
only care about the buffer that's the destination of the whole
operation. This allows us to remove some of those copy operations by
changing source / destination buffer of a preceding / following
operation.
If a sequence is using buffers from non-local memory domain, users can
append a copy operation to a sequence to specify a local destination
buffer. If the module executing the operations is aware of memory
domains, this can avoid doing an extra spdk_memory_domain_pull_data().
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I93b94d46ee32700819e9e6f1c55350692db8a67a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15530
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This patch introduces the concept of chaining multiple accel operations
and executing them all at once in a single step. This means that it
will be possible to schedule accel operations at different layers of the
stack (e.g. copy in NVMe-oF transport, crypto in bdev_crypto), but
execute them all in a single place. Thanks to this, we can take
advantage of hardware accelerators that support executing multiple
operations as a single operation (e.g. copy + crypto).
This operation group is called spdk_accel_sequence and operations can be
appended to that object via one of the spdk_accel_append_* functions.
New operations are always added at the end of a sequence. Users can
specify a callback to be notified when a particular operation in a
sequence is completed, but they don't receive the status of whether it
was successful or not. This is by design, as they shouldn't care about
the status of an individual operation and should rely on other means to
receive the status of the whole sequence. It's also important to note
that any intermediate steps within a sequence may not produce observable
results. For instance, after appending a copy from A to B and then a copy
from B to C, it's indeterminate whether A's data will be in B once the
sequence is executed. It is only guaranteed that A's data will be in C.
A sequence can also be reversed using spdk_accel_sequence_reverse(),
meaning that the first operation becomes last and vice versa. It's
especially useful in read paths, as it makes it possible to build the
sequence during submission, then, once the data is read from storage,
reverse the sequence and execute it.
Finally, there are two ways to terminate a sequence: aborting or
executing. It can be aborted via spdk_accel_sequence_abort() which will
execute individual operations' callbacks and free any allocated
resources. To execute it, one must use spdk_accel_sequence_finish().
For now, each operation is executed one by one and is submitted to the
appropriate accel module. Executing multiple operations as a single one
will be added in the future.
Also, currently, only fill and copy operations can be appended to a
sequence. Support for more operations will be added in subsequent
patches.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Id35d093e14feb59b996f780ef77e000e10bfcd20
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15529
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
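To make the flow above concrete, here is a hedged sketch of building and
finishing a small sequence. The spdk_accel_append_fill()/spdk_accel_append_copy()
signatures are assumed from this series and may not match include/spdk/accel.h
exactly:

    #include "spdk/accel.h"

    /* Sketch: fill a temporary buffer, then copy it to the destination, and
     * only then execute both steps as one sequence.  The intermediate fill
     * result should not be relied upon - only the final copy destination is
     * guaranteed. */
    static int
    fill_then_copy(struct spdk_io_channel *ch, struct iovec *dst, struct iovec *tmp,
                   spdk_accel_completion_cb cb_fn, void *cb_arg)
    {
            struct spdk_accel_sequence *seq = NULL;
            int rc;

            rc = spdk_accel_append_fill(&seq, ch, tmp->iov_base, tmp->iov_len,
                                        NULL, NULL, 0xa5, 0, NULL, NULL);
            if (rc != 0) {
                    return rc;
            }

            rc = spdk_accel_append_copy(&seq, ch, dst, 1, NULL, NULL,
                                        tmp, 1, NULL, NULL, 0, NULL, NULL);
            if (rc != 0) {
                    spdk_accel_sequence_abort(seq);
                    return rc;
            }

            return spdk_accel_sequence_finish(seq, cb_fn, cb_arg);
    }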
The existing `vhost_user_session_send_event` is now only used to
stop a vhost-user device's session, so rename it to
`vhost_user_wait_for_session_stop` and also give the
related function calls in the device stop path
more apposite names.
Change-Id: Ib8ea48273e85f7856ca2dfca57b5fd933ac4cf7a
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15296
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
For a vhost-user device, the variable `active_session_num` is used
to count the number of sessions. We don't use
it anywhere, and what its assertion checks is already
guaranteed by `vsessions_num`, so just remove it.
Change-Id: I335a75d17583b3744a41152b35cd5a1a8762a687
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15189
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
If we kill the vhost process while a VM is connected, `g_fini_cb`
will not be called because there is still an active session in the
vhost-user device. However, we know the VM is stopped in this case, because
`vhost_driver_unregister` is called in the shutdown thread, so here
we reuse the `g_vhost_user_started` flag for this case and free the sessions.
The following call to `vhost_driver_unregister` can also handle this
case, because the Unix Domain socket is already unregistered.
Fixes commit 327d1c98 ("vhost: defer vhost_dev_unregister until scsi tgts removed")
Change-Id: I4f368ac8c304dd9525d15abdce8fd5b2ed79b96e
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15623
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The `rte_vhost_driver_unregister` API for removing a socket is not
asynchronous; it may call SPDK ops for adding a new connection
or removing a connection, so we can't hold the user device lock
when calling this function, and we reject adding a new connection
while `rte_vhost_driver_unregister` is being called.
Fix issue #2748.
Change-Id: I5594224f26374b2336d64175ecd5e5ec3d545a58
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15483
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
`spdk_vhost_dev` is created/deleted via RPC or APIs, and
we use a global `spdk_vhost_lock` to protect it. For
some other places, such as vhost-user message processing,
we also use the global lock for now, but we don't actually
need it there, because processing these vhost-user messages
will neither delete nor add vhost devices.
While here, add a `spdk_vhost_user_dev` access lock to
protect vhost-user message processing as an optimization.
Change-Id: Ia9c45b056cebb7b65f458d56ed775a15e386f905
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15184
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Feng Li <lifeng1519@gmail.com>