Add a `psk` field to `spdk_nvme_ctrlr_opts`.
Add a `psk` parameter to the `bdev_nvme_attach_controller` RPC.
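A minimal usage sketch (transport details are illustrative and the
--psk flag spelling is an assumption based on the new parameter):
$ ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp \
    -a 192.168.0.10 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 --psk <psk>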
Change-Id: Ie6f0d8b04ce472e6153934e985c026acded6cdfc
Signed-off-by: Boris Glimcher <Boris.Glimcher@emc.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14046
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Trim is now also available as a management operation via RPC.
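A hypothetical invocation (the RPC name bdev_ftl_unmap and its flags
are assumptions, as this note does not name the RPC):
$ ./scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num-blocks 1024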
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Change-Id: I05b778a611e9809a14bfed50b01986bb4649a35c
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13379
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The current timeout parameter of the bdev_iscsi_set_opts interface
duplicates the timeout parameter of JSONRPCClient, so the iSCSI
timeout and the JSONRPCClient timeout may overwrite each other.
Signed-off-by: gongwei <gongwei833x@gmail.com>
Change-Id: I96604a7e1a495ac2e99518812297230680df42fd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14306
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Added a new RPC, vmd_rescan, that forces the VMD driver to do a rescan
of all devices behind the VMD. A device that was previously removed via
spdk_vmd_remove_device() will be found again during vmd_rescan.
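Example invocation (takes no arguments):
$ ./scripts/rpc.py vmd_rescan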
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ide87eb44c1d6d524234820dc07c78ba5b8bcd3ad
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13958
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tom Nabarro <tom.nabarro@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Added a new RPC, vmd_remove_device, that allows users to remove a PCI
device managed by the VMD library, simulating a hot-remove.
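Example invocation (the device address is illustrative):
$ ./scripts/rpc.py vmd_remove_device 5d0505:01:00.0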
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ifb84818ce8d147d1d586b52590527e85fe9c10de
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13957
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tom Nabarro <tom.nabarro@intel.com>
The new name is consistent with the naming scheme of
<subsystem>_<action> that all of our other RPCs use.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I2cae7af5715add8eba26501cd192a6ac4884ec69
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13952
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Tom Nabarro <tom.nabarro@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Adds an API for fast shutdown - the ability for FTL to skip most of
the metadata persistence done during a clean shutdown and rely on its
representation in shared memory instead. This allows for a faster
update of SPDK (or just FTL, assuming no metadata changes), reducing
downtime from 2-5 seconds to 500-1000 ms (for a 14TiB base drive plus
an 800GiB cache drive).
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Kozlowski Mateusz <mateusz.kozlowski@intel.com>
Change-Id: I5999d31698a81512db8d5893eabee7b505c80d06
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13348
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This commit introduces a new bdev type backed by DAOS DFS.
Design-wise, this bdev is a file, named the same as the bdev itself,
in a DAOS POSIX container, using one DAOS event queue per I/O channel.
Having an event queue per I/O channel shows the best I/O throughput.
The implementation uses independent pool and container connections per
device channel for the best I/O throughput.
The usage semantics are the same as for any other bdev type.
To build SPDK with DAOS support, the daos-devel package has to be
installed. The currently supported DAOS version is v2.X; please see
the installation and setup guide here: https://docs.daos.io/v2.0/
$ ./configure --with-daos
To run it, the target machine should have daos_agent up and running,
as well as a pool and POSIX container ready to use; please see the
detailed requirements here: https://docs.daos.io/v2.0/admin/hardware/.
To export the bdev over TCP:
$ ./nvmf_tgt &
$ ./scripts/rpc.py nvmf_create_transport -t TCP -u 2097152 -i 2097152
$ ./scripts/rpc.py bdev_daos_create daosdev0 <pool-label> <cont-label> \
    1048576 4096
$ ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk1:cnode1 -a \
    -s SPDK00000000000001 -d SPDK_Virtual_Controller_1
$ ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk1:cnode1 \
    daosdev0
$ ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk1:cnode1 \
    -t tcp -a <IP> -s 4420
On the initiator side, make sure the `nvme-tcp` module is loaded, then
connect the drives, for instance:
$ nvme connect-all -t tcp -a 172.31.91.61 -s 4420
$ nvme list
Signed-off-by: Denis Barakhtanov <denis.barahtanov@croit.io>
Change-Id: I51945465122e0fb96de4326db742169419966806
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12260
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Note, this change only sets defaults for the ID/KEY; more specific
use cases like NVMe/TCP may set the ID and KEY on a per-connection
basis. Also simplify the PSK identity string so that it isn't
NVMe-focused. NVMe libraries using this will need to construct more
complicated identity strings and pass them to the sock layer.
Example:
rpc.py sock_impl_set_options -i ssl --psk-key 4321DEADBEEF1234
rpc.py sock_impl_set_options -i ssl --psk-identity psk.spdk.io
./build/examples/perf --psk-key 4321DEADBEEF1234 --psk-identity psk.spdk.io
./build/examples/hello_sock --psk-key 4321DEADBEEF1234 --psk-identity psk.spdk.io
Change-Id: I1cb5b0b706bdeafbccbc71f8320bc8e2961cbb55
Signed-off-by: Boris Glimcher <Boris.Glimcher@emc.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13759
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Docs explaining how to use the RPC are in the next patch in the
series.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I7dab8fdbeb90cdfde8b3e916ed6d19930ad36e66
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12848
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The RPC provides a list of initialized engine names along with
each engine's supported operations.
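A sketch of querying it (the RPC name accel_get_engine_info is an
assumption based on the description above):
$ ./scripts/rpc.py accel_get_engine_info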
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I59f9e5cb7aa51a6193f0bd2ec31e543a56c12f17
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13745
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
In prep for an upcoming patch that will provide an RPC to override
the automatic assignment of an op code to an engine.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I17d4b962fb376a77f97ce051a513679d0fba698e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12829
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This RPC prints only the cntlid; to identify a controller, one needs
to issue bdev_get_bdevs and find more info using the cntlid. Printing
the controller's trid helps identify the controller faster.
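Example invocation (the optional controller name is illustrative):
$ ./scripts/rpc.py bdev_nvme_get_controllers -n Nvme0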
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com>
Change-Id: I97325b822528ef9e71afbe2ff1c30b3bce2ae203
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13655
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
This implementation of the xNVMe bdev module supports the char-device
/ ioctl-over-uring path, along with the "regular" io_uring, libaio,
POSIX aio, emulated aio (via threadpools), etc.
Code changes done:
a. Addition of the xNVMe submodule to SPDK
b. Modification of RPC scripts to create / delete xNVMe bdevs (see
   the sketch below)
c. Implementation of the xNVMe bdev module
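A minimal creation sketch (the device path and I/O mechanism are
illustrative; the RPC name bdev_xnvme_create is an assumption based
on item b):
$ ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme0 io_uring_cmd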
Signed-off-by: Krishna Kanth Reddy <krish.reddy@samsung.com>
Change-Id: If814ca1c784124df429d283015a6570068b44f87
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11161
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The bdev_lvol_grow_lvstore RPC will grow the lvstore size if the
underlying bdev size has increased. It invokes spdk_bs_grow
internally. spdk_bs_grow will extend the used_clusters bitmap. If
there is not enough space reserved for the used_clusters bitmap, the
API will fail. The reserved space was calculated according to
num_md_pages at blobstore creation time.
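Example invocation (the lvstore name is illustrative):
$ ./scripts/rpc.py bdev_lvol_grow_lvstore -l lvs0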
Signed-off-by: Peng Yu <yupeng0921@gmail.com>
Change-Id: If6e8c0794dbe4eaa7042acf5031de58138ce7bca
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9730
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Add zerocopy_threshold-related information to jsonrpc.md.
Change-Id: I9fbea29f39c532af41b33da81420c0c6d3200031
Signed-off-by: Richael Zhuang <richael.zhuang@arm.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12585
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This patch adds a virtio_blk abstraction for custom transports, with
'vhost_user_blk' being the first one used. Added
spdk_virtio_blk_transport_ops describing the necessary callbacks to
be implemented by each transport.
Please use SPDK_VIRTIO_BLK_TRANSPORT_REGISTER to register the transport.
Transports can use virtio_blk_process_request() to process the
incoming I/O from their queues.
The virtio_blk_create_transport RPC was added to create one of the
registered transports, possibly with custom JSON arguments. Added a
'transport' argument to the vhost_create_blk_controller RPC to
specify which transport should create the controller. By default the
vhost_user_blk transport is used.
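A minimal sketch of the flow (controller and bdev names are
illustrative; the --transport flag placement is an assumption):
$ ./scripts/rpc.py virtio_blk_create_transport vhost_user_blk
$ ./scripts/rpc.py vhost_create_blk_controller --transport vhost_user_blk \
    vhost.0 Malloc0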
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Ic9d93a6e0f483796eb56b7174a678e41a6ea4808
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9540
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
And associated RPC to enable.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I06785bcd8b8957293ad41d13bab556fe62f29fd5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12765
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
IDXD has always been used everywhere, but technically it stands for
the driver, not the HW (Intel Data Streaming Accelerator Driver),
where the X somehow comes from "Streaming Accelerator". Anyway, the
underlying hardware is just DSA. It doesn't matter much now, but
upcoming patches will add support for a new HW accelerator called
the Intel In-Memory Analytics Accelerator, which we'll call IAA, and
it will use (mostly) the same device driver (IDXD) as DSA. So,
calling the HW what it is will lessen confusion when adding IAA
support. This patch just does the renaming for the accel_fw module
and associated files (RPC, etc.).
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: Ib3b1f982cc60359ecfea5dbcbeeb33e4d69aee6a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11984
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
It's now possible to specify a time to wait until a connection to the
discovery controller and the NVM controllers it exposes is made. If
that time is exceeded, a callback is immediately executed. However,
depending on the stage of the discovery process, we might need to
wait a while before actually stopping it (e.g. because a controller
attach is in progress). That means that a discovery service might be
visible for a while after it has timed out.
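A hypothetical invocation (the --attach-timeout-ms flag name and all
values are assumptions):
$ ./scripts/rpc.py bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.1 -s 8009 -f ipv4 --attach-timeout-ms 5000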
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I2d01837b581e0fa24c8e777730d88d990c94b1d8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12684
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
By default, failback to the preferred I/O path is done automatically
if it is restored. Some users may want to keep using the backup I/O
path even if the preferred I/O path is restored. In this case,
bdev_nvme_set_preferred_path can be used to do a manual failback.
We might be able to clear/fill the I/O path cache more strictly, but
that would be complicated and bug-prone. This patch makes the minimal
change and just skips an apparent case.
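Example invocation (the bdev name and cntlid are illustrative):
$ ./scripts/rpc.py bdev_nvme_set_preferred_path -b Nvme0n1 -c 1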
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I78fe5faee6ff04e88ae3d7c6be6da1c20637c912
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12431
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The RPC returns a list of active discovery service connections. Each
discovery service is described by a name, its trid, and a list of
discovery service trids it refers to.
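Example invocation (takes no arguments):
$ ./scripts/rpc.py bdev_nvme_get_discovery_info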
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ifa4b9501dd353e7b4948ad830575a6c94dafd86b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12380
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The NVMe bdev module supported the active-passive policy for
multipath mode first. With this patch, the NVMe bdev module supports
the active-active policy for multipath mode next. Following the Linux
kernel native NVMe multipath, the NVMe bdev module supports the round
robin algorithm for the active-active policy.
The multipath policy, active-passive or active-active, is managed per
nvme_bdev. The multipath policy is copied to all corresponding
nvme_bdev_channels.
Unlike active-passive, active-active caches even a non_optimized
path to provide load balancing across multiple paths.
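Example invocation (the bdev name is illustrative; the policy value
follows the description above):
$ ./scripts/rpc.py bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active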
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ie18b24db60d3da1ce2f83725b6cd3079f628f95b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12001
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If we specify a preferred path manually for each NVMe bdev, we will
be able to realize simple static load balancing and make failover
more controllable in multipath mode.
The idea is to move the I/O path to the specified NVMe-oF controller
to the head of the list and then clear the I/O path cache for each
NVMe bdev channel. We could set the I/O path in the I/O path cache
directly, but that would have to be conditional and would make the
code very complex. Hence, let find_io_path() do that.
However, an NVMe bdev channel may be acquired after setting the
preferred path. To cover such a case, sort the nvme_ns list of the
NVMe bdev too.
This feature supports only multipath mode. The NVMe bdev module
supports failover mode too. However, to support the latter, the new
RPC would need to take a trid as parameters, and the code and the
usage would become very complex. Add a note for this limitation.
To verify each case exactly, add unit tests.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ia51c74f530d6d7dc1f73d5b65f854967363e76b0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12262
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: <tanl12@chinatelecom.cn>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Add a new RPC bdev_nvme_get_io_paths to query all active I/O paths.
One io_path belongs to one nvme_bdev_channel.
Each nvme_bdev_channel is associated with one nvme_bdev.
If the RPC bdev_nvme_get_io_paths had a bdev name as a parameter,
it could simply use spdk_for_each_channel() for the corresponding
nvme_bdev. However, users will want to know the I/O paths of all
nvme_bdevs, like the RPC bdev_get_bdevs.
One io_path has one nvme_qpair. One nvme_qpair belongs to one
nvme_poll_group. By relying on these relationships, the RPC
bdev_nvme_get_io_paths traverses all nvme_poll_groups by applying
spdk_for_each_channel() to g_bdev_nvme_ctrlrs.
The RPC bdev_nvme_get_io_paths has two modes: display all active I/O
paths, or only those of the specified NVMe bdev. The specified bdev
name is used just for comparison, and an empty array is returned if
no matching io_path is found.
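Example invocations (the bdev name is illustrative):
$ ./scripts/rpc.py bdev_nvme_get_io_paths
$ ./scripts/rpc.py bdev_nvme_get_io_paths -n Nvme0n1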
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I4a0dbf3ef7aaa9a7b7345fc03dc493cc6d37bc99
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12146
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
An NVMe bdev name already includes the name of the NVMe bdev
controller and the NSID. The CNTLID will be a good ID to identify a
namespace of an NVMe bdev when multipath is configured. However, the
query RPCs, bdev_get_bdevs and bdev_nvme_get_controllers, had not
returned such information.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I2f2e355ff13f69ced616be803a3152c838cdc980
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12276
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
As per the NVMe specification, a host can identify two areas of guest
memory: one of which is used for the host-written doorbells, and one
of which contains event indexes. The host writes to the shadow
doorbell area, but also writes to the controller's BAR0 doorbell area
if the corresponding event index is crossed by the update. This
avoids many mmio exits in interrupt mode, where BAR0 doorbells are
not directly mapped into the guest VM, greatly improving performance.
This isn't a useful feature when BAR0 doorbells are mapped into the
VM, so we explicitly disable support in that case.
NB: the Windows NVMe driver doesn't yet support this feature.
Although the specification says that the admin queues should also
engage in this behaviour, in practice no VM does, so we have to
include some hacks to account for this.
Co-authored-by: John Levon <john.levon@nutanix.com>
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I0646b234d31fbbf9a6b85572042c6cdaf8366659
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11492
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
The value of ack_timeout is calculated according to
the formula 2^(transport_ack_timeout) msec.
Signed-off-by: zhangduan <zhangd28@chinatelecom.cn>
Change-Id: I5a938635d70693ddd405fa5907555bb745b4df0f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12215
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Up until now, importing an SPDK RPC python module was just a matter
of `import rpc`. That's fine until there's another module called
`rpc` installed on the system, in which case it's impossible to
import both of them. Therefore, to avoid this problem, all of the
modules were moved to a separate directory under the "spdk"
namespace.
The decision to move to a location under a separate directory was
motivated by the fact that a directory called scripts/spdk would look
pretty confusing. Moreover, it should also make it easier to package
these scripts as a python package.
Other than moving the packages, all of the imports were updated to
reflect these changes. Files under python/ now use relative imports,
while those under scripts/ use the "spdk" namespace and have their
PYTHONPATH extended with the python/ directory.
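A quick sketch of the new import style (paths assumed from the
description above):
$ PYTHONPATH=$PWD/python python3 -c 'from spdk import rpc'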
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ib43dee73921d590a551dd83885e22870e72451cf
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9692
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Setting this optional parameter to true makes the RPC completion wait
until the attach for all discovered NVM subsystems has completed.
This is especially useful for fio or bdevperf, to ensure that all of
the namespaces are actually available before testing.
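A hypothetical invocation (the --wait-for-attach flag name is an
assumption based on the description above):
$ ./scripts/rpc.py bdev_nvme_start_discovery -b nvme -t tcp \
    -a 10.0.0.1 -s 8009 -f ipv4 --wait-for-attach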
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Icf04a122052f72e263a26b3c7582c81eac32a487
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12044
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This option allows the bdev_get_bdevs RPC to block until a bdev with
the specified name appears. It can be useful when a bdev is created
asynchronously and the exact moment at which it appears is not known.
For instance, with a discovery service, a bdev is created when a
namespace on a remote NVMe-oF target is added, but it's not possible
to specify when exactly that happens.
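Example invocation (the bdev name and the timeout in milliseconds are
illustrative):
$ ./scripts/rpc.py bdev_get_bdevs -b Nvme0n1 -t 30000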
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I6c1f974fba445376ca9d45aac2639202547410cc
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11960
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
These parameters will be used for any controller created
by the discovery service.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I221b791f38b9c5797ba084c647a98b82c102a121
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11942
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
In the vfio-user transport, whenever one I/O is completed, it
triggers an interrupt to the guest machine. This costs quite some
overhead. This patch adds an adaptive IRQ feature to reduce interrupt
overhead and boost performance.
Signed-off-by: Rui Chang <rui.chang@arm.com>
Change-Id: I585be072231a934fa2e4fdf2439405de95151381
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11840
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
- Added hexlify() and unhexlify() for key and key2. This is required
  for keys that contain zero and non-ASCII characters. Since binary
  keys may contain the zero character, strlen(key) cannot be used,
  and key_size and key2_size are used instead. Non-ASCII chars are
  not allowed in JSON, and using hexlified keys fixes this issue as
  well (see the sketch below).
- Updated documentation to clearly state that hexlified keys must
  be used.
- Updated test scripts.
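A sketch of hexlifying a key (assuming these are the bdev_crypto
keys; names, PMD, and the key value are illustrative):
$ KEY=$(echo -n '0123456789abcdef' | xxd -p)
$ ./scripts/rpc.py bdev_crypto_create Nvme0n1 crypto0 crypto_aesni_mb $KEY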
Signed-off-by: Yuriy Umanets <yumanets@nvidia.com>
Change-Id: I3fce7839f7eaa67d0307071eba80b4cea472d731
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11891
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
SPDK has settled on what the optimal DSA configuration is, so let's
always use it.
Change-Id: I24b9b717709d553789285198b1aa391f4d7f0445
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11532
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Updated scheduler documentation with information
about thread and core workloads, the scheduling process,
and dynamic scheduler parameters.
Added descriptions of new dynamic scheduler parameters
in jsonrpc.md.
Change-Id: I567efe21c425c13c16235d7976aad6ae961381bb
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11886
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Add three options for I/O error resiliency to spdk_nvme_bdev_opts.
The RPC bdev_nvme_set_options can then configure these.
They can be overridden if given to the RPC bdev_nvme_attach_controller.
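A hypothetical invocation (the three option names are assumptions
consistent with bdev_nvme error-recovery settings; values are
illustrative):
$ ./scripts/rpc.py bdev_nvme_set_options --ctrlr-loss-timeout-sec 60 \
    --reconnect-delay-sec 5 --fast-io-fail-timeout-sec 30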
Change-Id: If3ee23aeef8b7585fe0fb5ec4695df5866fc1e74
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11830
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This was a parameter on the nvmf_create_transport
RPC, and was replaced with max_io_qpairs_per_ctrlr to
reduce confusion on whether this number included the
admin queue or not.
The nvmf_vhost test was using this deprecated parameter.
Change it to use -m (max_io_qpairs_per_ctrlr)
instead. '-p 4' would have been evaluated as 1 admin
queue + 3 I/O queues, but the intent was likely
4 I/O queues. This is a perfect example of
why this parameter was deprecated.
For reference, this was deprecated in June 2020,
commit 1551197db.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I4364fc0a76c9993b376932b6eea243d7cefca9cd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11543
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Provides RPCs for the qpair error injection APIs in bdev_nvme.
These RPCs are useful for testing NVMe-oF/NVMe behavior under various
error scenarios encountered in production.
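A hypothetical invocation (the RPC and flag names are assumptions;
the values are illustrative and would inject an error on I/O reads):
$ ./scripts/rpc.py bdev_nvme_add_error_injection -n Nvme0 \
    --cmd-type io --opc 2 --sct 0x2 --sc 0x82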
Signed-off-by: Michael Haeuptle <michael.haeuptle@hpe.com>
Change-Id: I0db7995d7a712d4f8a60e643d564faa6908c3a55
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10992
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
It may take a long time to detect a network transport error when
e.g. a port is removed on the remote target. This timeout depends on
2 parameters - retry_count and ack_timeout. bdev_nvme_set_options
supports configuration of retry_count, but transport_ack_timeout is
missing. Note: this parameter is used by the RDMA transport only.
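A hypothetical invocation (the flag spelling is an assumption derived
from the parameter name; the value is illustrative):
$ ./scripts/rpc.py bdev_nvme_set_options --transport-ack-timeout 14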
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I7c3090dc8e4078f64d444e2392a9e0a6ecdc31c0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11175
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: <tanl12@chinatelecom.cn>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>