There are multiple locations where a struct lvol_store is allocated.
This invites inconsistency in initialization, which will become more of
a problem as esnap clones have additional initialization.
Now all struct lvol_store allocations should be done with lvs_alloc().
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I07a2f274475375072f80c25ed67cb1fb802cc4e1
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16231
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
There are several places where new lvols are created and each reproduces
much of the same code. Esnap clones will add yet another in lvol.c and
more in unit tests. This introduces lvol_alloc() to minimize the chance
of unintended skew over time.
A side effect of this is that snapshots and clones now inherit clear
method from their parent. Previously they would fall back to the
default. The old behavior seems to be accidental, hence the change.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ibf6f79c567e92354ea73e6589c736b1b946731a0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14976
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
The thin_provision member of struct spdk_lvol is set but never used.
When needed, an lvol's thin provision state is obtained by looking at
the lvol's blob. This removes the unused thin_provision member.
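For reference, a minimal usage sketch of the blob-side query referred to above
(spdk_blob_is_thin_provisioned() is the existing public API; the wrapper name is illustrative):

#include <stdio.h>
#include <stdbool.h>
#include "spdk/blob.h"

/* Sketch: ask the blob instead of a cached lvol member. */
static void
print_provisioning(struct spdk_blob *blob)
{
	bool thin = spdk_blob_is_thin_provisioned(blob);

	printf("lvol is %s provisioned\n", thin ? "thin" : "thick");
}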
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I5a2048b5334a26772a25a0bd238e42d3aeb63b49
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17173
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
When an esnap clone blob's external snapshot arrives after the blob is
opened, it can now be hot-added to the blob. Presumably the new device
replaces a placeholder device that did not actually attempt IO.
Change-Id: I622feb84efa66628debf44f7e7cb88b6a012db6d
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16232
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This adds the ability to abort IOs as esnap bs_dev channels are being
destroyed.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ia63d4cbef5cd4c84dc8d5e2e9e407bacd961385f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16423
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
As the blobstore is being unloaded, async esnap channel destructions may
be in flight. In such a case, spdk_bs_unload() needs to defer the unload
of the blobstore until channel destructions are complete.
The following commands lead to the illustrated states.
bdev_malloc_create -b malloc0
bdev_lvol_clone_bdev lvs1 malloc0 eclone
.---------. .--------.
| malloc0 |<--| eclone |
`---------' `--------'
bdev_lvol_snapshot lvs1/eclone snap
.---------. .------. .--------.
| malloc0 |<--| snap |<--| eclone |
`---------' `------' `--------'
bdev_lvol_clone lvs1/snap eclone
.--------.
,-| eclone |
.---------. .------.<-' `--------'
| malloc0 |<--| snap |
`---------' `------'<-. .-------.
`-| clone |
`-------'
As the blobstore is preparing to be unloaded, spdk_blob_unload() is
called once for eclone, once for clone, and once for snap. The last of
these calls happens just before spdk_bs_unload() is called.
spdk_blob_unload() needs to destroy channels on each thread. During this
thread iteration, spdk_bs_unload() starts. The work performed in the
iteration maintains a reference to the blob, and as such
spdk_bs_unload() cannot do its work until the iteration is complete.
Change-Id: Id9b92ad73341fb3437441146110055c84ee6dc52
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14975
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This adds support for inflate and decouple for esnap clones. Since there
are no immediate consumers that will provide a back_bs_dev->is_zeroes()
that can return true, a shortcut is taken: inflating and decoupling an
esnap clone behave the same.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I4d2e6565126991acd650f073ce876466334e986d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11574
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
An esnap clone needs special handling as snapshots are created and
removed. In particular: the following must exist on the blob that
directly references the external snapshot and must be removed from
others:
- Ensure SPDK_BLOB_EXTERNAL_SNAPSHOT invalid flag exists only on the
esnap clone.
- Ensure BLOB_EXTERNAL_SNAPSHOT_ID internal xattr exists only on the
esnap clone.
- Clean up any esnap IO channels on a blob that is no longer an esnap
clone due to snapshot creation or removal.
See the diagrams and description in blob_esnap_clone_snapshot() in
blob_ut.c for details.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ie4125d64d5bac9cfa7d6c7cc9a543d72a169f6ee
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11573
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
The channel passed to blob IO operations is useful for tracking
operations within the blobstore and the bs_dev that the blobstore
resides on. Esnap clone blobs perform reads from other bs_devs and
require per-thread, per-bs_dev channels.
This commit augments struct spdk_bs_channel with a tree containing
channels for the external snapshot bs_devs. The tree is indexed by blob
ID. These "esnap channels" are lazily created on the first read from an
external snapshot via each bs_channel. They are removed as bs_channels
are destroyed and blobs are closed.
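An illustrative sketch of the shape of one of these per-bs_channel esnap channel
entries (stand-in types and a linear lookup for brevity; the real code keys an
RB tree inside struct spdk_bs_channel by blob ID):

#include <stdint.h>
#include "spdk/queue.h"

struct spdk_io_channel;

/* Stand-in: one entry per (bs_channel, esnap clone blob) pair. */
struct esnap_channel_entry {
	uint64_t blob_id;			/* lookup key */
	struct spdk_io_channel *esnap_ch;	/* channel on the external snapshot's bs_dev */
	TAILQ_ENTRY(esnap_channel_entry) link;
};

TAILQ_HEAD(esnap_channels, esnap_channel_entry);

/* Lazily look up the esnap channel used for reads from the external snapshot
 * on this bs_channel's thread; the caller creates and inserts an entry on a
 * miss, and entries are destroyed with the bs_channel or the blob. */
static struct esnap_channel_entry *
esnap_channel_find(struct esnap_channels *chs, uint64_t blob_id)
{
	struct esnap_channel_entry *e;

	TAILQ_FOREACH(e, chs, link) {
		if (e->blob_id == blob_id) {
			return e;
		}
	}
	return NULL;
}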
Change-Id: I97aebe5a2f3584bfbf3a10ede8f3128448d30d6e
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14974
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
As per https://github.com/DPDK/dpdk/commit/71998eb61ff
Change-Id: Ie4e5a38976145e1037ef45593b4dc4265091482d
Signed-off-by: Michal Berger <michal.berger@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17322
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Let us consider the following process:
1. A fabric connect request A arrives while the subsystem is paused
   due to adding/removing a namespace or other operations, so request A
   is put into sgroup->queued until the subsystem becomes active;
2. the subsystem stays paused for so long that the connect times out and
   the related qpair is destroyed; sgroup->queued is not cleaned up
   because the qpair's ctrlr is NULL;
3. if a new request B arrives, it is likely to be allocated at the same
   memory address as the previous fabric command request, and it is put
   into sgroup->queued again, which already contains the exact same
   pointer as request B.
This leaves a dangling pointer behind and causes an infinite loop when
traversing sgroup->queued.
This patch avoids the dangling-pointer problem by checking and cleaning,
in _nvmf_qpair_destroy when ctrlr is NULL, all sgroup queued requests
whose qpair is the qpair being destroyed (see the sketch below).
This problem is already described in issue #2133.
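A minimal, self-contained sketch of that cleanup (the struct and field names
below are stand-ins for illustration, not the actual SPDK nvmf internals):

#include <stddef.h>
#include "spdk/queue.h"

struct qpair;

/* Stand-in types to illustrate the fix. */
struct request {
	struct qpair *qpair;
	TAILQ_ENTRY(request) link;
};

struct subsystem_poll_group {
	TAILQ_HEAD(, request) queued;
};

/* In _nvmf_qpair_destroy, when ctrlr is NULL, drop every queued request that
 * belongs to the qpair being destroyed so no dangling pointer remains. */
static void
remove_queued_reqs_for_qpair(struct subsystem_poll_group *sgroup,
			     struct qpair *dying_qpair)
{
	struct request *req, *tmp;

	TAILQ_FOREACH_SAFE(req, &sgroup->queued, link, tmp) {
		if (req->qpair == dying_qpair) {
			TAILQ_REMOVE(&sgroup->queued, req, link);
			/* complete/fail the request here as appropriate */
		}
	}
}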
Signed-off-by: Peng Lian <peng.lian@smartx.com>
Change-Id: I909d673b5050f21fa193914cc4ffe6634232fa7d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17147
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Add an API to easily determine if a blob is an esnap clone, similar to
what already exists for snapshot, clone, and thin_provisioned.
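A minimal usage sketch alongside the existing predicates (each takes a
struct spdk_blob * and returns bool):

#include <stdio.h>
#include "spdk/blob.h"

static void
print_blob_kind(struct spdk_blob *blob)
{
	if (spdk_blob_is_esnap_clone(blob)) {
		printf("esnap clone\n");
	} else if (spdk_blob_is_clone(blob)) {
		printf("clone\n");
	} else if (spdk_blob_is_snapshot(blob)) {
		printf("snapshot\n");
	} else if (spdk_blob_is_thin_provisioned(blob)) {
		printf("thin provisioned\n");
	} else {
		printf("thick provisioned\n");
	}
}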
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: Ie07cd09b30513893e82f1c85e94a24a93c79d71e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16862
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
When a sequence is used to perform IO on an esnap clone, different
channels will be needed for the blobstore device and the esnap device.
No special esnap handling is required when a sequence is used to perform
IO directly on the blobstore device.
This commit splits bs_sequence_start() into bs_sequence_start_bs() and
bs_sequence_start_blob() to handle these two scenarios. A later commit
introduces special handling of esnap clone blobs.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I3a6f46640cdb7fdc380bf557736638f1b39f05e3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17172
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
For the various forms of read_bs_dev() and readv_bs_dev() to perform
reads from esnap devices, the spdk_bs_request_set used for the IO needs
to keep track of the back_bs_dev IO channel as well as the blobstore's
IO channel.
This commit has no change in functionality: it is preparation for a
change in a later commit.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I8edd9c4bf29bc074194331b42c5ef9d27590ce88
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14973
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
External snapshots have a slightly more complicated cleanup of
back_bs_dev. This moves all calls to back_bs_dev->destroy() into a
function so that this more complicated cleanup can have a single
implementation.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I78460aa3877481788118e2b0b76931dcf5c56338
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14972
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
When consumers open a blob with spdk_bs_open_blob_ext(), they can set
esnap_ctx in struct spdk_blob_open_opts to have that context passed
to bs->external_bs_dev_create().
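A minimal usage sketch, assuming the new option field is named esnap_ctx as
described above:

#include <stddef.h>
#include "spdk/blob.h"

static void
open_cb(void *cb_arg, struct spdk_blob *blob, int bserrno)
{
	/* blob is open here (or bserrno != 0); esnap_ctx was handed to the
	 * blobstore's esnap device-create callback during the open. */
}

static void
open_esnap_clone(struct spdk_blob_store *bs, spdk_blob_id blobid, void *my_ctx)
{
	struct spdk_blob_open_opts opts;

	spdk_blob_open_opts_init(&opts, sizeof(opts));
	opts.esnap_ctx = my_ctx;

	spdk_bs_open_blob_ext(bs, blobid, &opts, open_cb, NULL);
}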
Change-Id: I0c1a9cec0e5aed5ef2a7143103e822cbe400aabb
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14971
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
- Fix precision.
  When one converts to seconds and then multiplies, precision errors can
  occur. For example, 77 ms becomes 0 when converted to seconds, and
  multiplying that 0 by 1000 returns 0 instead of 77 ms.
- Fix nsec/usec mismatch.
  nsec was multiplied by 1000*1000 while usec was multiplied by
  1000*1000*1000; it should be the opposite. In any case, the
  implementation has changed.
- Implementation description (see the sketch after this list):
  * env_ticks_to_msec: j / (tick_hz / 1000)
    This is exactly the same as (j * 1000) / tick_hz (eq #2), but eq #2
    can only handle 54 bits in j before overflowing because of the
    multiplication by 1000 (10 bits). With the correct implementation we
    use all 64 bits in j. We assume that tick_hz is perfectly divisible
    by 1000, so we are OK.
  * env_ticks_to_usec: j / (tick_hz / (1000 * 1000))
    As in the msec case, we use all 64 bits in j. Here we assume that
    tick_hz is perfectly divisible by (1000 * 1000), i.e. that the CPU
    frequency is some multiple of 1 MHz.
  * env_ticks_to_nsec: (j * 1000) / (tick_hz / (1000 * 1000))
    In this case we cannot assume that tick_hz is divisible by 10^9,
    because many CPUs run at e.g. 2.8 GHz or 3.3 GHz, so we multiply j
    by 1000. This means we can only handle j correctly up to 54 bits
    (64b - 10b, 10b for the *1000 operation).
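A minimal C sketch of the three conversions above, assuming tick_hz comes from
spdk_get_ticks_hz() (the env_ticks_to_* names are kept from the description;
this is illustrative, not the exact patched code):

#include <stdint.h>
#include "spdk/env.h"

static inline uint64_t
env_ticks_to_msec(uint64_t j)
{
	/* assumes tick_hz is divisible by 1000 */
	return j / (spdk_get_ticks_hz() / 1000);
}

static inline uint64_t
env_ticks_to_usec(uint64_t j)
{
	/* assumes tick_hz is divisible by 1000 * 1000, i.e. the CPU
	 * frequency is a multiple of 1 MHz */
	return j / (spdk_get_ticks_hz() / (1000 * 1000));
}

static inline uint64_t
env_ticks_to_nsec(uint64_t j)
{
	/* tick_hz may not be divisible by 10^9, so multiply j by 1000
	 * first; this limits j to about 54 usable bits */
	return (j * 1000) / (spdk_get_ticks_hz() / (1000 * 1000));
}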
Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Change-Id: Ia8ea7f88b718df206fa0731e3f39f419ee922aa7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17078
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
atomic64 functions should operate with atomic64 and long types.
Signed-off-by: Amir Haroush <amir.haroush@huawei.com>
Signed-off-by: Shai Fultheim <shai.fultheim@huawei.com>
Change-Id: I2ea8f1cc06d6df0f7dd5b9d628839138b78bc412
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17077
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
During resize, we correctly determine if we have enough
md_pages for new extent pages, before proceeding with
actually allocating clusters and associated extent
pages.
But during actual allocation, we were incrementing
the lfmd output parameter, which was incorrect.
Technically we should increment it any time
bs_allocate_cluster() allocated an md_page. But
it's also fine to just not increment it at the
call site at all - worst case, we just check that
bit index again which isn't going to cause a
performance problem.
Also add a unit test that demonstrated the original
problem, and works fine with this patch.
Fixes issue #2932.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Iba177a66e880fb99363944ee44d3d060a44a03a4
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17150
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: 阿克曼 <lilei.777@bytedance.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
If blob_freeze_io() is called twice in a row,
and the second time occurs before the for_each_channel
for the first completes, the second caller will
receive its callback too soon.
Instead just simplify the whole process, always do
the for_each_channel and don't try to optimize it
at all. These are infrequent operations - correctness
and simplicity are in order.
A few additional changes:
1) Make same changes for unfreeze path.
2) Add blob_verify_md_op() calls, just to be sure
these are only called from md_thread. This was
already checked in calling functions, but as these
functions get called from new code paths (i.e.
esnap clones) it can't hurt to add additional
checks.
3) Add unit test that failed with original code, but
passes with this patch.
Fixes issue #2935.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Ibefba554547ddf3e26aaabfa4288c8073d3c04ff
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17148
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Mike Gerdts <mgerdts@nvidia.com>
Community-CI: Mellanox Build Bot
It is quite common for a user to use the exact same iovec (in memory) to
describe buffers for two different operations. If that iovec was
describing an accel buffer, accel would modify it, replacing it with an actual buffer.
actual buffer. This is broken if that iovec was used by some other task
in a sequence, as accel wouldn't be aware that it has been changed too.
To address this, accel will use a new iovec from the aux_iovs array. It
means that accel buffers always *must* be passed using a single iovec.
Theoretically, users could chunk that buffer into several iovecs, but
spdk_accel_get_buf() always returns a single buffer, so, in practice,
this should never happen, and therefore is unsupported.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I25271bc032987dd6028fb7b3adde061657759b4b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17039
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Requests that have their data pushed/pulled from a memory domain or have
an accel sequence executed aren't handled by a bdev module, so we
shouldn't submit an abort request. Those operations cannot be aborted
either, so the abort request is failed in this case.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Icd185c4a2951a555d321cd037de0af1ab157f37a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17020
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
These operations are handled internally by the bdev layer, so it should
first wait until they're completed before issuing reset to a bdev
module.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I74f0d42dcb9a289aa7c3115ca309cb92870548e2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17019
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Similarly to requests executed by accel, we need to track bdev_ios that
have their data pushed/pulled.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ie6b0d2c058e9f13916a065acf8e05d1484eae535
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16978
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
It will make it possible to check if a request is being processed by
accel when doing resets/aborts.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ice07211df316e1eee9640e750ff8e176c8a3ca6f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16977
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
This patch enables passing accel sequence for read requests. The
handling is pretty similar to writes, but the sequence is executed after
a request is completed by a bdev module.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I79fd7d4873265c81a9f4a66362634a1c4901d0c9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16975
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
It is now possible to submit a write request with a sequence of accel
operations that need to be executed before actually writing the data.
Such requests will be directly passed to a bdev module (so that it can
append subsequent operations to an accel sequence) if that bdev supports
accel sequences and the request doesn't need to be split. If either of
these conditions are not met, bdev layer will execute all the
accumulated accel operations before passing the request to a bdev
module.
The reason for not submitting split IOs with an accel sequence is that
we would need to split that accel sequence too. Currently, there's no
such functionality in accel, so we treat this case in the same way as if
the underlying bdev module didn't support accel sequences (it's executed
before bdev_io is split).
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I66c53b3a1a87a35ea2687292206c899f80aaed4a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16974
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
bdev_io_should_split() adds some non-zero overhead, so checking it
multiple times in the IO path is inefficient. To avoid that, call
bdev_io_should_split() once during IO initialization and cache the
result in bdev_io.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I1da6514d409f8a4e4bbb14722dd53b2c88988cac
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17058
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This channel will be used to execute accel operation sequences.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ied4bb57d14a50a923908ffb13ef4ba34ca65175c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16972
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Modules can now report that they support accel chaining for specific
operations through the accel_sequence_supported() callback.
The support is reported per IO type. This allows modules to support
accel sequences for some operations, while relying on the bdev layer to
handle them for other IO types.
Only bdevs without separate metadata buffers are allowed to support this
new mode. That's because metadata in separate buffer is expected to use
the same memory domain as data buffers. With an accel sequence, those
data memory domains can change, while metadata's memory domain always
stays the same. To support bdevs with separate metadata buffers, we'd
need to add separate pointers for metadata's memory domain. For now,
simply disallow registering bdevs with separate metadata supporting
accel sequences.
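A minimal sketch of how a bdev module might report this, assuming the callback
is the accel_sequence_supported member of struct spdk_bdev_fn_table with the
signature shown (the module name and the IO-type choices are illustrative):

#include <stdbool.h>
#include "spdk/bdev_module.h"

/* Illustrative: report accel-chaining support per IO type. */
static bool
my_bdev_accel_sequence_supported(void *ctx, enum spdk_bdev_io_type type)
{
	switch (type) {
	case SPDK_BDEV_IO_TYPE_READ:
	case SPDK_BDEV_IO_TYPE_WRITE:
		return true;	/* the module appends its own accel operations */
	default:
		return false;	/* the bdev layer executes the sequence first */
	}
}

static const struct spdk_bdev_fn_table my_fn_table = {
	/* ...other callbacks... */
	.accel_sequence_supported = my_bdev_accel_sequence_supported,
};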
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I0c49cc00096837d70681a69b2633c2cb3dfd4e39
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16971
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
If an IO is completed, before submitting it to a module, it isn't put on
the io_submitted list, so we can't use bdev_io_complete() to complete
it, as it'll break that list. To avoid that, a new function was added,
bdev_io_complete_unsubmitted(), that will safely complete the IOs in
such case. For now, it's equivalent to executing user's completion
callback, but it'll serve as a good place to release any resources that
should be freed before an IO is completed.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I1442ead9d272d9210553803bed1d1c989a2bf761
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16970
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This function can be useful in places other than accel modules (e.g. to
check if a buffer belongs to accel), so it needs to be declared in
accel.h.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I8fdd58b2ed40dc4a4acce2a8d3e1c5f76944c929
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16969
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
They were disabled before the v23.01 release, because none of the other
libraries were using the new spdk_accel_append_* API. But now, they
will be used in the bdev layer and bdev modules, so they need to be
re-enabled. We're using the same values as we do in the bdev layer.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ibda86ca5619e4104e107048ce0965171501fdc5a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16968
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Session information can now be retrieved via vhost_get_controllers.
Signed-off-by: zhipeng Lu <luzhipeng@cestc.cn>
Change-Id: I8e63aea64d02b3467a62f30a712e1dcbf6fb8854
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16315
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Mellanox Build Bot
When a blobstore consumer creates or loads a blobstore, it should be
able to set a per-blobstore context pointer that will be passed back to
the consumer via bs->esnap_bs_dev_create().
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I59c0ebe21eaf65c3d79a4ac3469715283f56313a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14970
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
All paths in nvme_rdma_parse_addr(), except the one fixed in this
patch, already returned negated error values, so fix it.
Change-Id: I615956e4139f70bfc171bcab94e6e89f60e62ac3
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17098
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
On FreeBSD getaddrinfo() reports positive error code
values, while Linux reports negative ones.
Make sure that, regardless of the system used,
error codes with the same sign are reported.
This can be observed in the log reported in #2936.
Besides the above, in some instances EINVAL was replaced
with the actual return value.
Change-Id: I7f88c314bdf5c3a03f8661c2213e33b2fc276ef7
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17097
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
nvme_tcp_parse_addr() uses getaddrinfo() to parse the address.
The behavior of this function differs depending on the system.
On FreeBSD the port is verified not to exceed 65535
for IPv4, while Linux does not check it at this point.
The test_nvme_tcp_qpair_connect_sock() UT was attempting to
test the code path that is moved in this patch, but
on FreeBSD it encountered a failure during getaddrinfo()
with a different error code.
This patch moves the destination port check before
address parsing so that the same path is taken regardless of
the system used.
Fixes #2936
Change-Id: I271e8c32e07a15dcf0e0ee7e90dd174c96b18858
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17095
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The NVMe TCP driver supports up to 16 sge elements
while only 1 sge is reported - that leads to
unnecessary request splitting, which degrades performance.
Also pass the correct iovcnt to nvme_tcp_build_iovs -
it should be 32. Otherwise, the pdu header consumes
1 iov and the data is written only partially.
Add a check that at least data_len bytes were
appended to the socket iovs and fail the request
otherwise.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: Ie83c807dd3fec2c7e7cbcda1e493d6fd74ebe599
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17006
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Make it clear that number of entries might not be equal
to the number of recorded traces, as some of the latter
might occupy two entries due to their length.
Change-Id: I3099cfb719c38bdee48fbe20fccef3ef43e820a3
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16916
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
nvme_rdma_qpair_abort_reqs() and nvme_tcp_qpair_abort_reqs() did not
initialize cpl->sqid. Hence, an unexpected message was printed by
spdk_nvme_print_completion(). This patch fixes those bugs.
Fixes #2930
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I8b41166e58b26ce22c453ab85794b46dbe3dd3a2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17067
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
nvme_ctrlr_disable_poll() continued to be called until it returned 0.
However, if the corresponding drive was unresponsive, the continuous
calls consumed CPU and affected other operations.
If the corresponding drive is unresponsive, we cannot complete disabling
the controller. Hence, call nvme_transport_ctrlr_disconnect_qpair_done()
if nvme_ctrlr_disable_poll() returned any value other than -EAGAIN.
Even before this patch, nvme_ctrlr_disable_poll() collected an error log
if it failed. Hence, we do not have to add more error logs.
Fixes issue #2931
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I26cabb94e5744e3a2d975670adbf2e4e48d5bd7a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17002
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Patch 736b9da034 changed
nvme_qpair_abort_all_queued_reqs() to be called after the
adminq is actually disconnected.
However, patch ac31590b37
unexpectedly stopped nvme_qpair_abort_all_queued_reqs() from being
called for the adminq, because qpair->active_proc is NULL for the adminq.
Add one more condition to nvme_transport_ctrlr_disconnect_qpair_done().
Fixes issue #2928
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ic65f4cd952e6e89275788ff4b86ceca050f624d5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/17001
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Add command dword 13 field to the extendable structure
spdk_nvme_ns_cmd_ext_io_opts. This now enables us to pass dspec
and dsm fields.
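A hedged usage sketch, assuming the new member is named cdw13 and that, for a
write command, DSM occupies bits 7:0 and the directive-specific value (dspec)
bits 31:16 of command dword 13 (per the NVMe spec):

#include <stdint.h>
#include "spdk/nvme.h"

static void
fill_cdw13(struct spdk_nvme_ns_cmd_ext_io_opts *opts,
	   uint16_t dspec, uint8_t dsm)
{
	opts->size = sizeof(*opts);		/* include the new field */
	opts->cdw13 = ((uint32_t)dspec << 16) | dsm;
	/* opts is then passed to e.g. spdk_nvme_ns_cmd_writev_ext() */
}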
Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com>
Change-Id: Id4d3dac14fdbf0e2a57e0bf287551dfd827dd503
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16945
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Community-CI: Mellanox Build Bot
Make cq_is_full() a wrapper around cq_free_slots().
Signed-off-by: Swapnil Ingle <swapnil.ingle@nutanix.com>
Change-Id: I392f62e959c7e23b4360e77759027ea55c2398b9
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16789
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
The Linux host nvme driver processes all pending cqes in one batch, completing
the backing blk_mq requests, and only later rings the cq_doorbell once for all
processed cqes.
As blk_mq requests are completed, there is room for more submissions
before the cq_doorbell is rung.
This may race with vfio_user's cq_is_full(), which uses cq_doorbell to make the
final decision; since the host has not yet updated cq_doorbell, we fail with a
cq_full error.
To mitigate this, only process commands from an sq that have a free cq slot.
Signed-off-by: Swapnil Ingle <swapnil.ingle@nutanix.com>
Change-Id: I0cefb41df8099eb71de25923d05a9fcb28e4d124
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16788
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Add default copy command support in the bdev layer for backing devices that
do not support the copy command.
Signed-off-by: Rui Chang <rui.chang@arm.com>
Change-Id: I5632e25544e95ac0c53ff91c4cd135dac53323ae
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16638
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
When IBV_EVENT_DEVICE_FATAL or RDMA_CM_EVENT_DEVICE_REMOVAL occurs,
destroy the qpair immediately and do not assume that no successful WQE will
be received after rdma_disconnect.
Signed-off-by: sijie.sun <sijie.sun@smartx.com>
Change-Id: I23e44dd32c8adea301e5251659b1be519f5dfdf7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16314
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Community-CI: Mellanox Build Bot
An IB device may be unplugged and hotplugged when modifying slaves of bonded
IB devices. This patch tries to recreate ibv device contexts, pollers
and listeners after the IB devices come back.
Signed-off-by: sijie.sun <sijie.sun@smartx.com>
Change-Id: I3288174bad847edc2d9859cb34aa93c6af8c673b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15616
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
This is the beginning of support for external snapshots. An external
snapshot is a read-only blobstore device (struct spdk_bs_dev) that can
be used as a blob's back device. Normally a blob will have no back
device (a normal blob), a zeroes back device (a thin provisioned blob),
or a blob back device (a clone blob). When a blob has an external
snapshot ("esnap") as its back device, it is called an esnap clone.
With this patch, esnap clones can be created but they are not yet
useful. Subsequent patches in the series will plumb the IO path, enable
various features, and allow lvol bdevs to be esnap clones.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I29206b628a2b03b6386a88532565e228df988e0e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14969
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Removed SPDK_IDXD_FLAG_PERSISTENT flag and associated code.
Change-Id: Ib4e038794792ae9866bdf344f1ec58dd04dbd483
Signed-off-by: Marcin Spiewak <marcin.spiewak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16986
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This patch removes references to the deprecated PMEM support from the accel
library:
- The code that was executed when the ACCEL_FLAG_PERSISTENT flag is set
  is no longer needed and is removed.
- The _sw_accel_copy() function is removed and replaced with memcpy(), as
  after the PMEM removal its functionality is the same as memcpy().
- _sw_accel_dualcast() is no longer needed and is replaced with direct calls
  to memcpy().
- The 'flags' parameter is removed - it is no longer needed.
- accel_ut.c: removed references to PMDK.
- deprecation.md updated.
The ACCEL_FLAG_PERSISTENT flag will be removed in the next patch.
Change-Id: I86130466fe7a5f6ee547df1517b803035ff41a7a
Signed-off-by: Marcin Spiewak <marcin.spiewak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16899
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
If a driver is registered and selected, it'll now be used to execute
sequences of accel operations. The driver has priority over accel
modules, so the modules will only be used to execute operations that the
driver cannot perform.
Once the driver completes a task (or a number of tasks), it notifies accel
using the standard spdk_accel_task_complete(). To let accel continue
processing a sequence, the driver calls spdk_accel_sequence_continue().
This can be done when the driver has executed all tasks (1), an error occurs
(2), or the driver doesn't know how to execute a given opcode (3). In
case of (3), that operation will be executed using the appropriate accel
module and the rest of the sequence will then be sent back to the
driver.
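A minimal sketch of this flow from the driver's side, assuming an
execute-sequence callback of this shape (the hardware handling is left as
comments; only spdk_accel_task_complete() and spdk_accel_sequence_continue()
are the accel APIs referred to above):

#include "spdk/accel_module.h"

/* Illustrative driver callback: a driver walks the sequence, executes the
 * tasks its hardware supports (reporting each with spdk_accel_task_complete()),
 * and then hands control back to accel. */
static int
my_driver_execute_sequence(struct spdk_io_channel *ch, struct spdk_accel_sequence *seq)
{
	(void)ch;

	/* ...execute the supported tasks here; each finished task is reported
	 * with spdk_accel_task_complete(task, 0)... */

	/* Cases (1)-(3) above: let accel continue processing the sequence so
	 * remaining or unsupported operations run through their modules. */
	spdk_accel_sequence_continue(seq);
	return 0;
}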
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: If414c02073ffc731454e03d25c7ee02bef58463b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16548
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The following error was reported when running gpt_ut, related
to crc32_update():
"load of misaligned address 0x001ffeff78cc for type 'const uint64_t',
which requires 8 byte alignment".
This patch preprocesses the first several bytes so that the buf address
passed to __crc32_d or __crc32_cd is 8-byte aligned, and finally processes
the trailing bytes.
For the spdk_crc32c_update function in crc32c.c, memcpy was used to avoid the
misaligned load problem. Update it with the above solution to reduce the extra
overhead.
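A self-contained sketch of the alignment strategy described above (plain C
fallbacks stand in for the CRC intrinsics; on ARM the 8-byte step would be a
single hardware CRC instruction):

#include <stddef.h>
#include <stdint.h>

/* Software CRC-32C, one byte at a time (stand-in for a per-byte intrinsic). */
static uint32_t
crc32c_step8(uint32_t crc, uint8_t b)
{
	crc ^= b;
	for (int i = 0; i < 8; i++) {
		crc = (crc >> 1) ^ (0x82f63b78 & (0U - (crc & 1)));
	}
	return crc;
}

/* 8 bytes at a time (stand-in for the 8-byte hardware CRC step). */
static uint32_t
crc32c_step64(uint32_t crc, uint64_t v)
{
	for (int i = 0; i < 8; i++) {
		crc = crc32c_step8(crc, (uint8_t)(v >> (8 * i)));
	}
	return crc;
}

static uint32_t
crc32c_update_aligned(uint32_t crc, const void *buf, size_t len)
{
	const uint8_t *p = buf;

	/* Leading bytes: one at a time until p is 8-byte aligned. */
	while (len > 0 && ((uintptr_t)p & 7) != 0) {
		crc = crc32c_step8(crc, *p++);
		len--;
	}

	/* Bulk: 8-byte aligned loads are now safe. */
	while (len >= 8) {
		crc = crc32c_step64(crc, *(const uint64_t *)p);
		p += 8;
		len -= 8;
	}

	/* Trailing bytes. */
	while (len > 0) {
		crc = crc32c_step8(crc, *p++);
		len--;
	}

	return crc;
}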
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Change-Id: I7c7aaa41e1c042a96668158818b06729fb3ceec6
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16801
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Depending on the number of cores, there are sporadic issues getting
elements of that pool even though free elements are present during poll
group creation. The operation returns -ENOBUF, which results in an odd
notice message:
"nvmf_transport_poll_group_create: *NOTICE*: Unable to reserve the
full number of buffers for the pg buffer cache. Decrease the number of
cached buffers from 455 to 1366"
In this case 1366 is the actual number of available elements in the
pool. A few poll groups succeed and a few end up with the buffer
cache size set to 0.
The issue has been root-caused as a bug or behavior change in DPDK v22.01.
Consider an example:
We create a DPDK mempool with 4K buffers and a cache of 256. When the first
poll group requests 512 buffers, the DPDK mempool first looks in its per-core
cache, sees no buffers (the mempool buffer cache doesn't get prepopulated)
and then requests 512 + 256 buffers from the backing pool. It returns
512 of the buffers to the user and puts the other 256 buffers in the
cache; it should only request 512 buffers total. For 8 cores and 512
buffers requested, only 5 cores will get their buffers.
Disabling the mempool cache works around the issue. A more effective
cache is already implemented in the generic nvmf layer.
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: I3149dea95a4f24a75dd0074eda9468c4856d901d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16913
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Add error check to unclaim mechanism. Issue #2920
showed that unclaiming CPU locks might fail and
we should catch errors to determine the cause.
Change-Id: Ifdfb7db2595d73f8bae13418ef145ad80e1d07ef
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16958
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Add implementation of uuid_generate_sha1() for systems
that do not have this function in their system libraries.
Use uuid_generate_sha1 from uuid.h inside a new function
spdk_uuid_generate_sha1(). The reason for this addition
is to prepare for UUID generation correction to conform
to standards.
First part of series addressing #2788.
Change-Id: Ib357aa1ee832e886288d176d8a47efdaa326f537
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16414
Reviewed-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Return the real spdk_fd_group object so it can later be nested.
Change-Id: I84c8a174c7d177799fa484b350269082c61b18a5
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15474
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
Don't assume spdk_thread_poll() will ever get called. Instead, send
a message to process the exit.
Change-Id: Idd98e7e8164c5efebd0d7c9287e62731e7cbc998
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15551
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
Move away from relying on spdk_thread_poll() to do clean up in interrupt
mode. In the future, we don't want to have spdk_thread_poll() called at
all.
Change-Id: I5318a7889601a3d3463e35419918b7305f68ee8d
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15550
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Eventually, we want to allow merging of spdk_fd_groups, removing a level
of indirection. That means that some interrupt handlers won't
necessarily fire with the spdk_thread context already set. Set it in the
wrappers to ensure it's right.
Change-Id: Ief18d58cf3ee005c2969a9c0ee132b34b24cbd61
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15476
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: John Levon <levon@movementarian.org>
This unifies the poller fds with the interrupt mechanism internally.
Change-Id: I57a270260981ff54670365dddb33a1d9bdb56781
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15754
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Added initialization of the prev_crc variable to avoid compilation error:
idxd.c: In function 'spdk_idxd_submit_copy_crc32c':
idxd.c:1138:51: error: 'prev_crc' may be used uninitialized [-Werror=maybe-uninitialized]
 1138 |         desc->crc32c.addr = prev_crc;
      |         ~~~~~~~~~~~~~~~~~~^~~~~~~~~~
idxd.c:1081:18: note: 'prev_crc' was declared here
 1081 |         uint64_t prev_crc;
      |                  ^~~~~~~~
Change-Id: I6b93d5d85b52e20f8a2c313c41b740f66eebe1c7
Signed-off-by: Marcin Spiewak <marcin.spiewak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16900
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Fix the number of dwords, which is 0-based as per the spec.
Use bitwise operators instead of division and modulus.
Change-Id: Ib315bf9394ef599317f41429742e7b8054069549
Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16814
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
They aren't cleared before a task is submitted and might store pointers
from a previous operation. This can lead to issues if the previous
operation was using memory domains and we submit the task to a module
also supporting memory domains.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Icafb924c2e936ee6a83d921ae48e953b98f00841
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16848
Community-CI: Mellanox Build Bot
Reviewed-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Remove libuuid usage on FreeBSD and add dedicated implementation of
spdk_uuid API using functions from the standard library.
Fixes: #2878
Change-Id: Ie49ccb2842acad6064bffd789e4f64b7365b6e5c
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16558
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
An example of an async operation which can be handled by a specific
transport layer could be the creation of an spdk thread followed by
a poller registration.
This change also aligns with transport destroy, which is already an
async operation.
The current transport create function is marked deprecated and is kept only
to maintain backward compatibility for transports that support sync create.
The async version supports both create operations.
Signed-off-by: Jacek Kalwas <jacek.kalwas@intel.com>
Change-Id: I1f5a477819e58f30983d26f81a1416bed1279ecf
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16463
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The spdk_bdev_ext_io_opts structure is used to pass extra options when
submitting a bdev IO request, without having to modify/add functions to
handle new options. Additionally, the structure has a size field to
allow adding new fields without breaking the ABI (and thus having to
bump up the major version of a library).
It is also a part of spdk_bdev_io and there are several reasons for
removing it from that structure:
1. The size field only makes sense in structures that are passed
through pointers. And spdk_bdev_ext_io_opts is indeed passed as a
pointer to spdk_bdev_{readv,writev}_blocks_ext(), however it is
also embedded in spdk_bdev_io (internal.ext_opts_copy), which is
also part of the API. It means that each time a new field is added
to spdk_bdev_ext_io_opts, the size of spdk_bdev_io will also
change, so we will need to bump the major version of libspdk_bdev
anyway, thus making spdk_bdev_ext_io_opts.size useless.
2. The size field also makes internal.ext_opts cumbersome to use, as
each time one of its fields is accessed, we need to check the size.
Currently the code doesn't do that, because all of the existing
spdk_bdev_ext_io_opts fields were present when this structure was
initially introduced, but we'd need to check the size before
accessing any new fields.
3. spdk_bdev_ext_io_opts has a metadata field, while spdk_bdev_io
already has u.bdev.md_buf, which means that we store the same thing
in several different places in spdk_bdev_io (u.bdev.md_buf,
u.bdev.ext_opts->metadata, internal.ext_opts->metadata).
Therefore, this patch removes all references to spdk_bdev_ext_io_opts
from spdk_bdev_io and replaces them with fields (memory_domain,
memory_domain_ctx) that were missing in spdk_bdev_io. Unfortunately,
this change breaks the API and requires changes in bdev modules that
supported spdk_bdev_io.u.bdev.ext_opts.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I49b7524eb84d1d4d7f12b7ab025fec36da1ee01f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16773
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
TP4146 introduced support for two new IO commands,
IO management receive and send.
Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com>
Change-Id: Iaf37310b84e278df043dcf71a0c2ef912c2fca8e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16520
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
TP4146 added support for 4 new log pages.
These are FDP configurations, reclaim unit handle usage,
FDP statistics and FDP events.
Updated the identify example file accordingly.
Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com>
Change-Id: I5a20b728605257774d72bc184b50bc5008e142ea
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16518
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Format LBA size (FLBAS) is updated to have:
Bits 3:0 as the least significant 4 bits of the format index
Bits 6:5 as the most significant 2 bits of the format index
NVMe format command fields are updated accordingly.
Add a new helper function to fetch the correct format index.
Update examples and unit test files accordingly.
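A minimal sketch of the bit manipulation described above (the helper name here
is illustrative, not necessarily the new SPDK helper):

#include <stdint.h>

/* Combine FLBAS bits 3:0 (LSBs) with bits 6:5 (MSBs) into the format index;
 * the upper bits only matter for namespaces with more than 16 LBA formats. */
static uint32_t
example_get_format_index(uint8_t flbas)
{
	return (uint32_t)(flbas & 0x0F) | ((uint32_t)(flbas & 0x60) >> 1);
}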
Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com>
Change-Id: I2d6d9045b9d65ae91cb18843ca75b59cc27ed2f2
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16515
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
It isn't used in this function and the callers always pass NULL.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I07baa13a25b1e4e0b8832a093a53250392b10f10
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16682
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
This should help catch bugs where a failed sequence has its failed
state cleared.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I9389a2610e94e766aaf4185445c36442c4d4a1f7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16545
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This will allow a platform driver to allocate a buffer in case it cannot
execute the whole sequence and the destination buffer of the last
operation is a "virtual" accel buffer.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ia947cf553619828a170c5d0563b4c355d7b5ead5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16377
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
This will allow drivers to check if a task is using buffers from accel
domain. This is just a helper, since the same can be achieved by
calling `spdk_memory_domain_get_first("SPDK_ACCEL_DMA_DEVICE")`, but
there's only a single accel domain and it is a bit special, so it makes
sense to have a dedicated helper function for getting it.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I07db7445ed9b109e66ecdbc0483a6a158a551070
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16376
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The goal of a platform driver is to execute chained accel operations in
the most efficient way possible. A driver is aware of the hardware
available on a platform and can execute several operations as a single
one. For instance, if we want to do DMA and then encrypt the data, the
driver can do both at the same time, if the hardware is capable of doing
that.
Platform drivers aren't required to support all operations. If a given
operation cannot be executed, the driver should notify accel to continue
processing a sequence, via spdk_accel_sequence_continue(), and that
operation will processed by a module assigned to its opcode.
It is required however, that all platform drivers support memory
domains, including the "virtual" accel domain. A method for allocating
those buffers will be added in the following patches.
This patch only adds methods to register and select platform drivers, but
doesn't change the way a sequence is executed (i.e. it doesn't use the
driver to execute it).
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I97a0b07e264601ab3cf980735319fe8cea54d38e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16375
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
If req->data is set, with all the previous changes, then req->iovcnt
should also be more than zero.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I29b5f45541c9dba2dd896109dd43d2b5321ec467
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16274
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Also deprecate the existing spdk_nvmf_request_data() API, which is
incompatible with iovecs.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I44df8ff30a431873a0c2f34b0cdb58df858fd7e3
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16200
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Use req->iov instead of req->data in reservation handling code.
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I6d79711d03f45bd5e118c6324d22decad887a788
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16199
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
If spdk_sock_flush() returns an error, there's no reason not to
disconnect the qpair, as it usually means that that socket's connection
has been terminated.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I54e9bebc38e2a24a3baf69eb18ec3c654b210318
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16644
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
The behavior of spdk_sock_flush() was changed in 5433004ec to return the
number of flushed bytes and -1 with errno set to EAGAIN in case nothing
has been flushed (instead of returning 0). Therefore, we shouldn't
treat EAGAIN as an error in nvme_tcp_qpair_process_completions().
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I5473488b5b408cdc739921046f1a0cc2c98f98de
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16643
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This never happens, as requests in this state are always immediately
transitioned to other states.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I0408ed9d8003d364bc38c86a9a50312721ab1284
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16642
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
It is possible for requests waiting for R2T ACK to receive an H2C PDU
before receiving the ACK. Therefore, the following sequence:
1. Host sends a write request to the target.
2. Target sends R2T PDU to the host and sets the request's state to
   AWAITING_R2T_ACK.
3. Host sends H2C PDU to the target, but it doesn't reach the target
   yet.
4. Host sends an abort command to abort that request. The request's state
   is changed to READY_TO_COMPLETE.
5. Target receives the H2C PDU, sees that the request's state is
   READY_TO_COMPLETE, which is unexpected, and terminates the
   connection.
will cause the target to terminate the connection, which is obviously
incorrect.
So, to avoid that, we can treat the AWAITING_R2T_ACK state in the same way
as TRANSFERRING_HOST_TO_CONTROLLER and register a poller waiting for the
state to be changed.
Fixes #2789.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Idddc627050000b74663dba397dc14d10aa0e284f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16641
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
We don't need to allocate a 2MiB-aligned memory address for
vrings; doing so wastes memory and may sometimes invoke dynamic
memory allocation in DPDK.
Fixes issue #2846.
Change-Id: I6410d417f92623b44c375359d5e2b5ec8ed815c0
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16651
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Starting in SPDK 23.01, calling spdk_bdev_register() and
spdk_bdev_examine() from a thread other than the app thread was
deprecated. This commit removes the deprecation and as such calling
these functions from a thread other than the app thread is an error.
As a side effect of this commit, all bdev module examine_config() and
examine_disk() callbacks will be called on the app thread.
Change-Id: Idaae06608101e2a513d9312ac5544ffe94effe4a
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15826
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
If multiple claims exist on a bdev, examine_disk() is called for each of
them.
Change-Id: I0a6dc3e4bd1da20bbcbddf97a16e04c62c82354c
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15290
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
This commit has no functional change. It refactors an if statement into
a case statement in preparation for supporting claims v2.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com>
Change-Id: I1862428c91a7066ad9079878d4c1b690a5ef631c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15289
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>