These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: Ia09368e426a83274d9c7fc90ed8b0391f4d0b67c
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12774
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This patch adds a virtio_blk abstraction for custom transports,
with 'vhost_user_blk' being the first transport to use it.
Added spdk_virtio_blk_transport_ops describing the necessary
callbacks to be implemented by each transport.
Please use SPDK_VIRTIO_BLK_TRANSPORT_REGISTER to register the transport.
Transports can use virtio_blk_process_request() to process the
incoming I/O from their queues.
virtio_blk_create_transport RPC was added to create one of the
registered transports, possibly with custom JSON arguments.
Added 'transport' argument to vhost_create_blk_controller RPC,
to specify which transport should create the controller.
By default the vhost_user_blk transport is used.
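A rough usage sketch of the new RPCs over SPDK's JSON-RPC Unix socket
(default /var/tmp/spdk.sock) follows; the parameter name for
virtio_blk_create_transport and the controller arguments are
assumptions, not taken verbatim from this patch.

    import json
    import socket

    def send_rpc(method, params=None, sock_path="/var/tmp/spdk.sock"):
        """Send one JSON-RPC 2.0 request to SPDK and read the reply."""
        request = {"jsonrpc": "2.0", "id": 1, "method": method}
        if params is not None:
            request["params"] = params
        with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
            sock.connect(sock_path)
            sock.sendall(json.dumps(request).encode())
            buf = b""
            while True:
                chunk = sock.recv(4096)
                if not chunk:
                    raise ConnectionError("socket closed before full reply")
                buf += chunk
                try:
                    return json.loads(buf.decode())
                except ValueError:
                    continue  # reply not complete yet, keep reading

    # Create the built-in vhost_user_blk transport, then ask it to create
    # a controller by passing the new 'transport' argument.
    print(send_rpc("virtio_blk_create_transport", {"name": "vhost_user_blk"}))
    print(send_rpc("vhost_create_blk_controller",
                   {"ctrlr": "vhost.0", "dev_name": "Malloc0",
                    "transport": "vhost_user_blk"}))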
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: Ic9d93a6e0f483796eb56b7174a678e41a6ea4808
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9540
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: I56dbaef56ff793e48441219e07dc6b02dda0b470
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12777
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: I5a715e9b9e991c6febec5e505384728281eee8b7
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12773
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: I33a497fb134320f13606b66ad55fc7b068d011d9
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12716
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: I477da05a42ca607fbad4d178aa541726197d7c83
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12775
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: I9e203a52877802127df8144e68090d7975f9d200
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12772
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
And associated RPC to enable.
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I06785bcd8b8957293ad41d13bab556fe62f29fd5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12765
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: I25aea510648a55d751db3740b36fb9924d1f52ed
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12747
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
IDXD has been used everywhere, but technically it stands for
the driver, not the HW (Intel Data Streaming Accelerator Driver,
where the X comes from "Streaming Accelerator" somehow). Anyway, the
underlying hardware is just DSA. It doesn't matter much now, but
upcoming patches will add support for a new HW accelerator called
the Intel In-Memory Analytics Accelerator, which we'll call IAA and
which will use (mostly) the same device driver (IDXD) as DSA. So,
calling the HW what it is will lessen confusion when adding IAA support.
This patch just does renaming for the accel_fw module and associated
files (RPC, etc).
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: Ib3b1f982cc60359ecfea5dbcbeeb33e4d69aee6a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11984
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: Ic80ce74344b24814dad792cfff6a4791d0430527
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12741
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
It's now possible to specify a time to wait until a connection to the
discovery controller and the NVM controllers it exposes is made.
Whenever that time is exceeded, a callback is immediately executed.
However, depending on the stage of the discovery process, we might need
to wait a while before actually stopping it (e.g. because a controller
attach is in progress). That means that a discovery service might be
visible for a while after it timed out.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I2d01837b581e0fa24c8e777730d88d990c94b1d8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12684
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
By default, failback to the preferred I/O path is done automatically
if it is restored. Some users may want to keep using the backup I/O
path even if the preferred I/O path is restored. In this case,
bdev_nvme_set_preferred_path can be used to do manual failback.
We might be able to clear/fill the I/O path cache more strictly, but
that would be complicated and error-prone. This patch makes the
minimal change and just skips an obvious case.
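A minimal sketch of the manual-failback call described above, shown as
the raw JSON-RPC payload; the exact parameters (a bdev name plus a
controller identifier selecting the preferred path) are assumptions.

    import json

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_nvme_set_preferred_path",
        # bdev name + the controller whose path should become preferred
        "params": {"name": "Nvme0n1", "cntlid": 1},
    }
    print(json.dumps(request, indent=2))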
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I78fe5faee6ff04e88ae3d7c6be6da1c20637c912
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12431
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: I3b75eea83bd7d700d20a6189e8fb6d1f066dc9b4
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12603
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: I32dd9960bc397244d8e3d0a384fc8b67e907bf68
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12601
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Change-Id: I6931e80c836b568dec8989dad2a7be4e112c42b4
Signed-off-by: wanghailiangx <hailiangx.e.wang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12577
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Fix the ocf test script that was still using the
deprecated get_bdevs RPC name; change it to
bdev_get_bdevs.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I7f8caedc250b80503671a0236694181613f63860
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12553
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
These were deprecated in 2019; it's time to remove
support for them now.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I2c9918ed0296f644b0728c5106c47d93e3c7ec30
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12552
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
The RPC returns a list of active discovery service connections. Each
discovery service is described by a name, its trid, and a list of
discovery service trids it refers to.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ifa4b9501dd353e7b4948ad830575a6c94dafd86b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12380
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The NVMe bdev module first supported the active-passive policy for
multipath mode. With this patch, the NVMe bdev module also supports
the active-active policy for multipath mode. Following the Linux
kernel's native NVMe multipath, the NVMe bdev module supports a round
robin algorithm for the active-active policy.
The multipath policy, active-passive or active-active, is managed per
nvme_bdev. The multipath policy is copied to all corresponding
nvme_bdev_channels.
Unlike active-passive, active-active caches even a non_optimized
path to provide load balancing across multiple paths.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ie18b24db60d3da1ce2f83725b6cd3079f628f95b
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12001
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
If we specify a preferred path manually for each NVMe bdev, we will
be able to realize a simple static load balancing and make the failover
more controllable in the multipath mode.
The idea is to move the I/O path to the preferred NVMe-oF controller
to the head of the list and then clear the I/O path cache for each
NVMe bdev channel.
We could set the I/O path in the I/O path cache directly, but that
would have to be conditional and would make the code very complex.
Hence, let find_io_path() do that.
However, an NVMe bdev channel may be acquired after the preferred
path is set. To cover such a case, sort the nvme_ns list of the NVMe
bdev too.
This feature supports only multipath mode. The NVMe bdev module
supports failover mode too, but supporting the latter would require
the new RPC to take a trid as a parameter, and the code and usage
would become very complex. Add a note about this limitation.
To verify each case exactly, add unit tests.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: Ia51c74f530d6d7dc1f73d5b65f854967363e76b0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12262
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: <tanl12@chinatelecom.cn>
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Add a new RPC bdev_nvme_get_io_paths to query all active I/O paths.
One io_path belongs to one nvme_bdev_channel.
Each nvme_bdev_channel is associated with one nvme_bdev.
If the RPC bdev_nvme_get_io_paths is given a bdev name as a parameter,
it can simply use spdk_for_each_channel() for the corresponding
nvme_bdev.
However, users will want to know the I/O paths of all nvme_bdevs, as
with the RPC bdev_get_bdevs.
One io_path has one nvme_qpair, and one nvme_qpair belongs to one
nvme_poll_group. Relying on these relationships, the RPC
bdev_nvme_get_io_paths traverses all nvme_poll_groups by using
spdk_for_each_channel() on g_bdev_nvme_ctrlrs.
The RPC bdev_nvme_get_io_paths has two modes: display all active I/O
paths, or only those of the specified NVMe bdev.
The specified bdev name is used just for comparison, and an empty
array is returned if no matching io_path is found.
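A sketch of the two query modes described above, shown as JSON-RPC
payloads; the optional bdev selector is assumed to be called "name".

    import json

    # Mode 1: display the active I/O paths of every nvme_bdev.
    all_paths = {"jsonrpc": "2.0", "id": 1, "method": "bdev_nvme_get_io_paths"}
    # Mode 2: only the named bdev; an empty array comes back if nothing matches.
    one_bdev = {"jsonrpc": "2.0", "id": 2, "method": "bdev_nvme_get_io_paths",
                "params": {"name": "Nvme0n1"}}
    print(json.dumps(all_paths))
    print(json.dumps(one_bdev))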
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Change-Id: I4a0dbf3ef7aaa9a7b7345fc03dc493cc6d37bc99
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12146
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
As per the NVMe specification, a host can identify two areas of guest
memory: one of which is used for the host-written doorbells, and one of
which contains event indexes. The host writes to the shadow doorbell
area, but also writes to the controller's BAR0 doorbell area if the
corresponding event index is crossed by the update. This avoids many
mmio exits in interrupt mode, where BAR0 doorbells are not directly
mapped into the guest VM, greatly improving performance.
This isn't a useful feature when BAR0 doorbells are mapped into the
VM, so we explicitly disable support in that case.
NB: the Windows NVMe driver doesn't yet support this feature.
Although the specification says that the admin queues should also
engage in this behaviour, in practice no VM does, so we have to
include some hacks to account for this.
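A minimal sketch of the event-index "crossing" test described above,
using the wrap-safe 16-bit comparison commonly used for shadow
doorbells (it mirrors the Linux NVMe driver's nvme_dbbuf_need_event();
it is not SPDK's exact code).

    def need_bar0_write(event_idx: int, new: int, old: int) -> bool:
        """True if the doorbell update from old to new crosses event_idx."""
        mask = 0xFFFF  # doorbell values are 16-bit and wrap around
        return ((new - event_idx - 1) & mask) < ((new - old) & mask)

    # Host moved a tail doorbell from 10 to 12; controller asked for 11.
    assert need_bar0_write(event_idx=11, new=12, old=10)
    # Event index 20 not reached yet, so no BAR0 write is needed.
    assert not need_bar0_write(event_idx=20, new=12, old=10)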
Co-authored-by: John Levon <john.levon@nutanix.com>
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I0646b234d31fbbf9a6b85572042c6cdaf8366659
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11492
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
The value of ack_timeout is calculated according to
the formula 2^(transport_ack_timeout) msec.
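A worked example of that formula (illustrative values, not defaults):

    # ack_timeout = 2 ** transport_ack_timeout milliseconds
    for transport_ack_timeout in (5, 10, 14):
        print(transport_ack_timeout, "->", 2 ** transport_ack_timeout, "ms")
    # e.g. transport_ack_timeout=10 gives 1024 ms, roughly one second.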
Signed-off-by: zhangduan <zhangd28@chinatelecom.cn>
Change-Id: I5a938635d70693ddd405fa5907555bb745b4df0f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12215
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Up until now, importing an SPDK RPC python module was just a matter of
`import rpc`. It's fine until there's another module called `rpc`
installed on the system, in which case it's impossible to import both of
them. Therefore, to avoid this problem, all of the modules were moved
to a separate directory under the "spdk" namespace.
The decision to move to a location under a separate directory was
motivated by the fact that a directory called scripts/spdk would look
pretty confusing. Moreover, it should make it also easier to package
these scripts as a python package.
Other than moving the packages, all of the imports were updated to
reflect these changes. Files under python now use relative imports,
while those under scripts/ use the "spdk" namespace and have their
PYTHONPATH extended with python directory.
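A before/after sketch of the import change, assuming the "spdk"
package is reachable (installed or via the extended PYTHONPATH):

    try:
        from spdk import rpc        # new: namespaced import
        # import rpc                # old: bare module, prone to name clashes
        print("spdk.rpc imported from", rpc.__file__)
    except ImportError:
        print("spdk python package not found on PYTHONPATH")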
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ib43dee73921d590a551dd83885e22870e72451cf
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9692
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The concat module can combine multiple underlying bdevs into a single
bdev. It is a special raid level. You can add a new bdev to the end of
the concat bdev; the concat bdev's size is then increased, and the
layout of the existing data doesn't change. This is the major
difference between concat and raid0: if you add a new underlying
device to raid0, the whole data layout is changed. So the concat bdev
is extendable.
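A hypothetical creation sketch, assuming concat is exposed through the
raid creation RPC (bdev_raid_create) with raid_level set to "concat";
the RPC and parameter names are assumptions, since this message only
states that concat is a special raid level.

    import json

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_raid_create",
        "params": {
            "name": "Concat0",
            "raid_level": "concat",
            "base_bdevs": ["Malloc0", "Malloc1"],  # more can be appended later
            "strip_size_kb": 64,
        },
    }
    print(json.dumps(request, indent=2))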
Change-Id: Ibbeeaf0606ff79b595320c597a5605ab9e4e13c4
Signed-off-by: Peng Yu <yupeng0921@gmail.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11070
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Setting this optional parameter to true makes the RPC wait to
complete until the attach for all discovered NVM subsystems has
completed.
This is especially useful for fio or bdevperf, to
ensure that all of the namespaces are actually
available before testing.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: Icf04a122052f72e263a26b3c7582c81eac32a487
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12044
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
This option allows the bdev_get_bdevs RPC to block until a bdev with
the specified name appears. It can be useful when a bdev is created
asynchronously and the exact moment at which it appears is not known.
For instance, with a discovery service, a bdev is created when a
namespace on a remote NVMeoF target is added, but it's not possible to
specify when that happens exactly.
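A minimal sketch of the blocking query, shown as a JSON-RPC payload;
the wait parameter's name and unit (assumed here to be "timeout" in
milliseconds) are assumptions.

    import json

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_get_bdevs",
        # block up to ~30 s waiting for the named bdev to appear
        "params": {"name": "Nvme0n1", "timeout": 30000},
    }
    print(json.dumps(request, indent=2))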
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: I6c1f974fba445376ca9d45aac2639202547410cc
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11960
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
These parameters will be used for any controller created
by the discovery service.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I221b791f38b9c5797ba084c647a98b82c102a121
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11942
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
In the vfio-user transport, whenever one I/O is completed, it triggers
an interrupt to the guest machine. This costs quite some overhead.
This patch adds an adaptive irq feature to reduce interrupt overhead
and boost performance.
Signed-off-by: Rui Chang <rui.chang@arm.com>
Change-Id: I585be072231a934fa2e4fdf2439405de95151381
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11840
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
SPDK has settled on what the optimal DSA configuration is, so let's
always use it.
Change-Id: I24b9b717709d553789285198b1aa391f4d7f0445
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11532
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Make use of code implemented in previous patches in the series
to get and set dynamic scheduler values.
Modify app.py and rpc.py to accommodate the new changes and allow the
user to specify scheduler parameters in the RPC calls.
Change-Id: I6173aefbf1d774b91b80ee5bce67eea80a2ab23d
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11449
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Add three options for I/O error resiliency to spdk_nvme_bdev_opts.
The RPC bdev_nvme_set_options can then configure these options, and
they can be overridden if given to the RPC bdev_nvme_attach_controller.
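A sketch of configuring those defaults via JSON-RPC; judging from the
related patches in this series, the three options are presumably
ctrlr_loss_timeout_sec, reconnect_delay_sec and
fast_io_fail_timeout_sec (an assumption, since they are not named
here).

    import json

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_nvme_set_options",
        "params": {
            "ctrlr_loss_timeout_sec": 60,    # give up on the ctrlr after 60 s
            "reconnect_delay_sec": 5,        # wait 5 s between reconnect attempts
            "fast_io_fail_timeout_sec": 30,  # stop using the path for I/O after 30 s
        },
    }
    print(json.dumps(request, indent=2))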
Change-Id: If3ee23aeef8b7585fe0fb5ec4695df5866fc1e74
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11830
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Now that users need to explicitly add a listener for the discovery
subsystem, make that a bit easier when using rpc.py. Instead of
having to type out nqn.2014-08.org.nvmexpress.discovery, allow the
user to just specify 'discovery' as the NQN; rpc.py will convert it
to the discovery NQN before sending the RPC.
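A minimal sketch of that convenience: expand the literal string
'discovery' to the well-known discovery NQN before sending the RPC.

    DISCOVERY_NQN = "nqn.2014-08.org.nvmexpress.discovery"

    def expand_nqn(nqn: str) -> str:
        """Translate the 'discovery' shorthand; pass other NQNs through."""
        return DISCOVERY_NQN if nqn == "discovery" else nqn

    assert expand_nqn("discovery") == DISCOVERY_NQN
    assert expand_nqn("nqn.2016-06.io.spdk:cnode1") == "nqn.2016-06.io.spdk:cnode1"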
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I4854d4f072f1758fdd6b37a4c3685e2a2d015caa
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11540
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
This parameter was ignored, and was a parameter to the
nvmf_set_config RPC.
For reference, this was deprecated in June 2020, commit
c37cf9fb.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I013f4d7cf874e7e26a8a1d299fdf9d8fa05da580
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11544
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
This was a parameter on the nvmf_create_transport
RPC, and was replaced with max_io_qpairs_per_ctrlr to
reduce confusion on whether this number included the
admin queue or not.
nvmf_vhost test was using this deprecated parameter.
Change it to use -m (max_io_qpairs_per_ctrlr)
instead. '-p 4' would have been evaluated as 1 admin
queue + 3 I/O queues, but it's likely the intent
was for 4 I/O queues. This is a perfect example of
why this parameter was deprecated.
For reference, this was deprecated in June 2020,
commit 1551197db.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I4364fc0a76c9993b376932b6eea243d7cefca9cd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11543
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
rpc.py users can pipe RPC calls through stdin, which
reduces overhead compared to calling rpc.py
separately for each RPC.
It is common to put these RPC calls in a file and then pipe that file
to rpc.py. To make debugging easier, have rpc.py ignore any lines that
begin with '#', so users can comment out RPC calls (or add comments
explaining them) with a '#' character.
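A minimal sketch of the stdin handling described above: blank lines
and lines starting with '#' are skipped, everything else is treated as
an rpc.py command line (the dispatch itself is omitted here).

    import sys

    for line in sys.stdin:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # comment or blank line: ignore
        print("would execute:", line)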
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I8d9c6ac95dd5864c16e4d69ba80f81799068e808
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11506
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Dong Yi <dongx.yi@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Provides RPCs for the qpair error injection APIs to bdev_nvme.
These RPCs are useful in testing NVMeoF/NVMe behavior for various
error scenarios in production.
Signed-off-by: Michael Haeuptle <michael.haeuptle@hpe.com>
Change-Id: I0db7995d7a712d4f8a60e643d564faa6908c3a55
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10992
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
It may take a long time to detect a network transport error when,
e.g., a port is removed on the remote target. This timeout depends on
two parameters: retry_count and ack_timeout. bdev_nvme_set_options
supports configuration of retry_count, but transport_ack_timeout was
missing. Note: this parameter is used by the RDMA transport only.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: I7c3090dc8e4078f64d444e2392a9e0a6ecdc31c0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11175
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-by: <tanl12@chinatelecom.cn>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
If ctrlr_loss_timeout_sec is set to -1, reconnect is retried
indefinitely and I/Os continue to be queued.
This patch adds another option, fast_io_fail_timeout_sec, and a flag,
fast_io_fail_timedout, to nvme_ctrlr.
If fast_io_fail_timeout_sec passes after a reset starts,
fast_io_fail_timedout is set to true so that the path is no longer
used for I/O submission.
fast_io_fail_timeout_sec is initialized to zero, the same as
ctrlr_loss_timeout_sec and reconnect_delay_sec.
The name of the parameter follows DM-multipath and its fast_io_fail_tmo.
Change-Id: Ib870cf8e2fd29300c47f1df69617776f4e67bd8c
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10301
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Previously reconnect retry was not controlled and was repeated indefinitely.
This patch adds two options, ctrlr_loss_timeout_sec and
reconnect_delay_sec, to nvme_ctrlr, along with reset_start_tsc,
reconnect_is_delayed, and reconnect_delay_timer, to control reconnect
retry.
Both ctrlr_loss_timeout_sec and reconnect_delay_sec are initialized to
zero. This means reconnect is not throttled, matching the behavior
before this patch.
A few more changes are added.
Change nvme_io_path_is_failed() to return false if reset is throttled,
even if nvme_ctrlr is resetting or is to be reconnected.
spdk_nvme_ctrlr_reconnect_poll_async() may continue returning -EAGAIN
indefinitely. To catch such an exceptional case, use
ctrlr_loss_timeout_sec.
Not only ctrlr reset but also non-multipath ctrlr failover is controlled.
So we need to include path failover into ctrlr reconnect.
When the active path is removed and switched to one of the alternative paths,
if ctrlr reconnect is scheduled, connecting to the alternative path is left
to the scheduled reconnect.
If ctrlr reset or reconnect fails and a retry is scheduled, switch the
active path to one of the alternative paths.
Restore unit test cases removed in the previous patches.
Change-Id: Idec636c4eced39eb47ff4ef6fde72d6fd9fe4f85
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10128
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Monica Kenguva <monica.kenguva@intel.com>
In project practice, config_file and key_file are often used to
connect to a rados cluster: config_file includes "mon_host" and other
rados configurations like "rbd_cache", and key_file includes the
secret key and the access permissions to each pool for the current
user. This patch adds a key_file option; users can specify config_file
and key_file, or only config_param, to connect to the rados cluster.
This makes configuration much more flexible and convenient for users.
Signed-off-by: Tan Long <tanl12@chinatelecom.cn>
Change-Id: I6b49aad70b578bdeb3ac8ea9ca0fcbd931582025
Signed-off-by: Tan Long <tanl12@chinatelecom.cn>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10485
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Reviewed-by: GangCao <gang.cao@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Add support to enable individual traces through rpc commands
and modify jsonrpc.md to describe the changes.
Change-Id: I3664fc28f1c25a76eade4cff0a0ab1870172f8de
Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10518
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This RPC will stop the specified discovery service,
including detaching from any controllers that were
attached as part of that discovery service.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I9222876457fc45e1acde680a7bd1925917c22308
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10832
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
This patch adds the framework for a discovery
service in the bdev/nvme module.
Users can specify an IP/port of a discovery service.
The bdev/nvme module will connect to a discovery
controller, get the discovery log page, and then
register for AERs. It will connect to each
subsystem specified in the initial log page.
AER completions will trigger fetching the log
page again, at which point new subsystems will
be connected to, or removed subsystems will be
detached.
This patch does the following:
* Adds the new start_discovery RPC
* Connects to the discovery controller
* Gets the discovery log page
* Registers for AERs
* Detaches from discovery controllers at shutdown
Subsequent patches in this series will:
* Connect to subsystems listed in discovery log page
* Detach from subsystems that were listed in earlier
discovery log pages but subsequently removed
* Add a stop_discovery RPC
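A sketch of starting the discovery service via JSON-RPC. The method is
assumed to be registered as bdev_nvme_start_discovery, and the
transport parameters are assumed to mirror other bdev_nvme RPCs;
neither is taken verbatim from this message.

    import json

    request = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "bdev_nvme_start_discovery",
        "params": {
            "name": "nvme_disc",       # base name for bdevs created via discovery
            "trtype": "tcp",
            "adrfam": "ipv4",
            "traddr": "192.168.0.10",  # discovery service IP
            "trsvcid": "8009",         # standard discovery port
        },
    }
    print(json.dumps(request, indent=2))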
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I54bfa896a48c5619676f156b5ea9f2d1f886c72f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10694
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>