Note: many of the TCP and FC trace registrations were
specifying '1', which means the arg type is a pointer,
but in reality they always pass 0 for arg1. So
this patch just changes them to SPDK_TRACE_ARG_TYPE_INT.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I18d3cedd21e516f16cb2cd0a7f8c16670b1895d7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7763
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
It can be useful for testing or development purposes to force clients to write
to doorbells using vfio-user messages instead of directly into shared memory;
add a transport-specific option to disable the shared mapping.
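As a rough illustration (the option and field names here are assumptions, not the actual SPDK vfio-user code), the transport option simply selects between the two doorbell paths:

#include <stdbool.h>

/* Illustrative sketch only: with the shared doorbell mapping disabled,
 * every doorbell update arrives as a vfio-user region-write message
 * instead of a direct write into shared memory. */
struct vfio_user_opts_demo {
	bool disable_mappable_doorbells;	/* hypothetical name for the new option */
};

static bool
doorbells_use_shared_mapping(const struct vfio_user_opts_demo *opts)
{
	/* testing/development: setting the option forces the message path */
	return !opts->disable_mappable_doorbells;
}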
Signed-off-by: John Levon <john.levon@nutanix.com>
Change-Id: I7ed062fbe211ba27c85d00b12d81a0f84a8322ed
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7554
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
(Note: this patch was previously applied as b32cfc46 and then reverted
as 63642bef.)
Today the in-guest nvme device shows physical_block_size=512 even though
the backend iSCSI bdev supports physical_block_size=4K.
iSCSI targets expose the physical block size using
logical_block_per_physical_block_exponent in READ_CAPACITY_16.
NPWG is one of the ways to let the Linux nvme driver set the
physical_block_size of the nvme block device.
This patch adds spdk_bdev.phys_blocklen, which is updated if the iSCSI
backend exposes a physical_block_size.
phys_blocklen is then used in nvmf to set NPWG and NAWUPF, which are
reported back during NS identify.
The Linux driver uses min(nawupf, npwg) to set physical_block_size.
Similarly, in scsi_bdev, fill lbppbe in the READ_CAP16 response
based on spdk_bdev.phys_blocklen.
Fixes #1884
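For reference, a minimal sketch of the arithmetic described above (not the actual SPDK code; the 512B logical / 4K physical sizes are just an example):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t blocklen = 512;			/* logical block size */
	uint32_t phys_blocklen = 4096;			/* physical block size from the backend */
	uint32_t lbs_per_pbs = phys_blocklen / blocklen;	/* 8 logical blocks per physical block */

	/* NPWG/NAWUPF are reported as 0-based counts of logical blocks */
	uint16_t npwg = lbs_per_pbs - 1;		/* 7 */
	uint16_t nawupf = lbs_per_pbs - 1;		/* 7 */

	/* lbppbe in READ_CAPACITY_16 is a power-of-two exponent */
	uint8_t lbppbe = 0;
	while ((1u << lbppbe) < lbs_per_pbs) {
		lbppbe++;				/* ends at 3, i.e. 2^3 = 8 */
	}

	printf("npwg=%u nawupf=%u lbppbe=%u\n", npwg, nawupf, lbppbe);
	/* Linux then sets physical_block_size = (min(nawupf, npwg) + 1) * 512 = 4096 */
	return 0;
}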
Signed-off-by: Swapnil Ingle <swapnil.ingle@nutanix.com>
Change-Id: I0b6c81f1937e346d448f49c927eda8c79d2d75c0
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7739
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
1) use the spdk_bdev_get_name() accessor
2) use the __SPDK_BDEV_MODULE_ONLY #define
The latter allows nvmf to pull in only the spdk_bdev_module
definitions and APIs that it needs for claiming bdevs,
which it does to avoid the same namespace being used in
different subsystems.
This also ensures that future changes to structures
like spdk_bdev and spdk_bdev_io will not cause
lib/nvmf .so version changes.
Note: we include bdev_module.h explicitly in the
nvmf/subsystem unit tests now, before including
subsystem.c, because the unit tests do depend on
knowing the internal structure of spdk_bdev.
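A rough sketch of the intended usage pattern (hedged: the exact gating inside bdev_module.h is not shown here and may differ):

/* in a library like lib/nvmf that only needs to claim/release bdevs */
#define __SPDK_BDEV_MODULE_ONLY
#include "spdk/bdev_module.h"

/* the library can now use the claim APIs together with accessors such as
 * spdk_bdev_get_name(), without depending on the internal layout of
 * struct spdk_bdev or struct spdk_bdev_io */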
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I2f499a741d19f4749eadb402641f28137245fd23
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7738
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
libvfio-user DMA APIs report all regions notified by the client, including those
that don't have a corresponding shared mapping. There are several of these for
a typical VM, so just ignore this case.
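A minimal sketch of the check (the struct and field names are illustrative; libvfio-user passes its own DMA info type to the register callback):

#include <stddef.h>

struct dma_region_info_demo {
	void	*vaddr;		/* NULL when the client provided no shared mapping */
	size_t	len;
};

static void
memory_region_add_demo(const struct dma_region_info_demo *info)
{
	if (info->vaddr == NULL) {
		/* Region was announced but is not mmap()-able; a typical VM
		 * has several of these, so ignore them instead of failing. */
		return;
	}
	/* ... register the region for address translation ... */
}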
Signed-off-by: John Levon <john.levon@nutanix.com>
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Change-Id: I37b06f4bc6d1818a03c8742616ed142f575d3f0e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7532
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
It is already set by nvmf_tcp_req_pdu_init
when we get the PDU, so we do not need to set it again.
Change-Id: I034bbc46e600afd802457c0b152e303f16bafba3
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7714
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This reverts commit b32cfc467b.
This commit fails the ABI checks and only got through because the checks
were disabled until 21.04 hit.
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: Id26b8f8ba551193d99b1ccbd31b35378b4095a20
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7731
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Today the in-guest nvme device shows physical_block_size=512 even though
the backend iSCSI bdev supports physical_block_size=4K.
iSCSI targets expose the physical block size using
logical_block_per_physical_block_exponent in READ_CAPACITY_16.
NPWG is one of the ways to let the Linux nvme driver set the
physical_block_size of the nvme block device.
This patch adds spdk_bdev.phys_blocklen, which is updated if the iSCSI
backend exposes a physical_block_size.
phys_blocklen is then used in nvmf to set NPWG and NAWUPF, which are
reported back during NS identify.
The Linux driver uses min(nawupf, npwg) to set physical_block_size.
Similarly, in scsi_bdev, fill lbppbe in the READ_CAP16 response
based on spdk_bdev.phys_blocklen.
Fixes #1884
Signed-off-by: Swapnil Ingle <swapnil.ingle@nutanix.com>
Change-Id: I0b6c81f1937e346d448f49c927eda8c79d2d75cf
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7310
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
The TCP transport doesn't send a response capsule when
c2h_success is set, even if cdw0 or cdw1 are non-zero.
Signed-off-by: Ed rodriguez <edwinr@netapp.com>
Signed-off-by: John Meneghini <johnm@netapp.com>
Change-Id: Ieba81fcc50342a2009f7931526e6f8392e26b6a5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6808
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Also use a debug log when the memory region isn't 2MiB aligned.
QEMU may use only one page for a memory region, and we are sure
these memory regions will not be used as NVMe data buffers.
Previously libvfio-user would round these memory
regions up to 2MiB alignment, but it no longer does; this
isn't an error case, so change it to a debug log.
Change-Id: I6c397f50407d4f2a14f78d9f99fffc2e4054ff51
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7545
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Add a private tag to experimental or internal RPCs, which might
be removed in the future or left undocumented.
Signed-off-by: Monica Kenguva <monica.kenguva@intel.com>
Change-Id: I3e967252412f2491860eea5fa69750a7562b994a
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7510
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
The API `spdk_nvme_map_prps` is used in nvmf/vfio-user to
remap a VM's NVMe command data buffer to a local virtual address.
For a command using PRPs there may be multiple pages, and when
parsing the PRP list into local IOVs we need a parameter to ensure
the number of vectors can't exceed the available IOVs. The existing API
can't meet this requirement, so add a new API `spdk_nvme_map_cmd`
with a new parameter `max_iovcnt` to fix this case; it can
also cover commands using SGLs in the coming patches.
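A hedged sketch of the idea (this is not the exact SPDK prototype; the essential point is the new max_iovcnt bound that stops the PRP walk before it overruns the caller's iovec array). The sketch only handles the PRP1 + PRP2 case; transfers that need a PRP list would apply the same bound before filling each entry:

#include <stdint.h>
#include <sys/uio.h>

typedef void *(*gpa_to_vva_fn)(void *ctx, uint64_t gpa, uint64_t len);

static int
map_prps_sketch(void *ctx, uint64_t prp1, uint64_t prp2, uint32_t len,
		size_t mps, struct iovec *iovs, uint32_t max_iovcnt,
		gpa_to_vva_fn gpa_to_vva)
{
	uint32_t iovcnt = 0;
	uint64_t first = mps - (prp1 & (mps - 1));	/* bytes left in the first page */

	if (first > len) {
		first = len;
	}
	if (iovcnt == max_iovcnt) {
		return -1;				/* respect the caller's bound */
	}
	iovs[iovcnt].iov_base = gpa_to_vva(ctx, prp1, first);
	iovs[iovcnt].iov_len = first;
	iovcnt++;

	if (len > first) {				/* data spills into a second page */
		if (iovcnt == max_iovcnt) {
			return -1;			/* PRP2 needs its own entry */
		}
		iovs[iovcnt].iov_base = gpa_to_vva(ctx, prp2, len - first);
		iovs[iovcnt].iov_len = len - first;
		iovcnt++;
	}
	return (int)iovcnt;
}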
Change-Id: I71063524bed16ee3434103867a556d3741e55326
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7278
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
This is required by libvfio-user APIs.
Change-Id: I675a3be0a9650d146c8d37e42debf1191656903b
Signed-off-by: John Levon <john.levon@nutanix.com>
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7472
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I068e74f5b7d078ad37572eff47e772ad6967b827
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7436
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Unlike the TCP/RDMA transports, the vfio-user transport doesn't need to
wait for data buffers, so we only add two request states for
now.
The request states will help with the coming request abort API.
Change-Id: Ibbb193fbbd358333f81aa29341493c19ab7bd108
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7435
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Previously we reset them when getting a new request, but it's more
reasonable to do so in the completion path.
Change-Id: I3dab35ce471d2a5bbd37576540d30a09dcf93410
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7434
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Also rename the transport request and controller variables
with a "vu_" prefix.
The consolidated function will be used in a coming patch.
Change-Id: I5219c13d7089dfdaea4a54e0b15cc5e6ecf2eb16
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7433
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom CI
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Change-Id: I2e47b148209ce4c232dbdc5f20c90548be995e1a
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7334
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This will ensure that we can't exceed iovcnt when parsing an
NVMe PRP list into req->iov.
Also add a comment that iovcnt in the vfio-user transport is used to track
each gpa_to_vva mapping; for an NVMe PRP list command, PRP2 itself also
uses one entry, so we need to add one more entry for this case.
Fixes issue #1864.
Change-Id: I06c7137e2c4637c9501f82a9eb1c8e4395d819cd
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7264
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
For an NVMe backend device, we should use vtophys to calculate the
physical address when doing DMA between the VM and the drives.
Fixes #1822.
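A minimal sketch of the translation (assuming the standard spdk_vtophys() helper; the surrounding function is illustrative, not the actual vfio-user backend code):

#include <stdint.h>
#include "spdk/env.h"

static uint64_t
guest_buf_to_phys_demo(void *vva, uint64_t *len)
{
	/* vva is the guest buffer already remapped into our address space;
	 * the NVMe drive needs a physical address for DMA, not this vaddr. */
	uint64_t phys = spdk_vtophys(vva, len);

	if (phys == SPDK_VTOPHYS_ERROR) {
		/* buffer is not backed by pinned, DMA-able memory */
		return SPDK_VTOPHYS_ERROR;
	}
	return phys;
}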
Change-Id: Ib8fbc371e19e77a20202d408340e7d65644b1eeb
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7261
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Previously we polled the MMIO callbacks in the context of the ADMIN queue's
poll group. Here we improve this by starting a dedicated poller for MMIO,
so the poll group only processes NVMe commands while the
MMIO poller processes MMIO accesses.
This is useful for live migration: the migration region
defined by VFIO is a BAR region, so we should stop polling queue pairs
but still acknowledge MMIO accesses during the live migration.
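A hedged sketch of the split (names are illustrative; only SPDK_POLLER_REGISTER and the poller return value are assumed SPDK primitives):

#include "spdk/thread.h"

struct vfio_user_ctrlr_demo {
	struct spdk_poller	*mmio_poller;
};

static int
mmio_poll_demo(void *arg)
{
	/* service pending MMIO/vfio-user region accesses here, independent of
	 * the poll group that handles NVMe commands */
	return SPDK_POLLER_BUSY;
}

static void
start_mmio_poller_demo(struct vfio_user_ctrlr_demo *ctrlr)
{
	/* keep answering MMIO (e.g. the VFIO migration region) even while
	 * queue pair polling is paused during live migration */
	ctrlr->mmio_poller = SPDK_POLLER_REGISTER(mmio_poll_demo, ctrlr, 1000);
}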
Change-Id: I63bac44889cbe0c31d47599810aab8335dfd4ff5
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7251
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Just code movement for the coming patch.
Change-Id: I7e844bc27a037e086796f9659351f20cdbb517fb
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7333
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
nvmf_poll_group_remove_subsystem_msg() disconnects all
qpairs associated with controllers in the specified
subsystem. If it finds any controllers that need to
be disconnected, it sends a message to the running
thread to execute the same function again later.
But when it runs again later, the qpair may no longer
be in the poll group, but there could still be
outstanding messages being sent between threads. For
example, _nvmf_qpair_destroy() needs to send a message
to the ctrlr->thread to clear the qpair mask bit.
All of this could result in the nvmf target starting
to destroy poll groups prematurely. Destroying poll
groups results in the nvmf spdk_threads exiting. If
there are still messages being processed from
the STOP_SUBSYSTEMS target state, we can get
use-after-free errors since processing of those
messages could access freed memory associated with
the exited thread.
Fixes issue #1850.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I1e63b9addb2956495a69b5108a41e029f6f9a85d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7275
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
A custom transport may not provide the `qpair_abort_request`
callback function, so the transport API will only
call it when it is non-NULL. We will add the callback
support to vfio-user in another patch.
Fixes #1883.
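A minimal sketch of the guard (the ops struct here is illustrative, not SPDK's actual transport ops definition):

#include <stddef.h>

struct transport_ops_demo {
	void	(*qpair_abort_request)(void *qpair, void *req);	/* optional */
};

static void
transport_qpair_abort_request_demo(const struct transport_ops_demo *ops,
				   void *qpair, void *req)
{
	if (ops->qpair_abort_request == NULL) {
		/* custom transport doesn't implement abort yet */
		return;
	}
	ops->qpair_abort_request(qpair, req);
}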
Change-Id: Icd82a26bde4ed90068bc85ee04cce9642cb6135d
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7291
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
We used the controller's ready field to indicate the ADMIN queue connection,
but the accept poller and the ADMIN poll group may run in different
threads, which may lead to vfu_attach_ctx() being called several times, so
change 'ready' to true when a new socket connection is created.
Fixes issue #1854.
Change-Id: Iab6ffd6dffb3fff5cf893e79774bc28fe0b2830c
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7073
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This will make the code easier to understand.
Change-Id: I7112d3fd5f0d6dce9b66d44375b68ce7d1e8951d
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7072
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
unmap_q is only called from unmap_qp, so remove this function to make the
code easier to read.
Change-Id: I627c7a1efdcb85476cb618fced8b0bfc2d8f1f62
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6886
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
When QEMU is killed or the remote client terminates normally,
we can release the current controller's data structures;
users may restart QEMU and connect to the same socket file
again, and for the new connection vfio-user will create
a new controller data structure.
Here we add a lock in the endpoint data structure to protect
the number of connected queue pairs variable: the controller
data structure is like a session, while the endpoint is tied
to the socket file, so the lock is safe here. Moreover, we can
use this lock to protect live-migration-related data
structures in the future.
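A hedged sketch of the locking described above (types and names are illustrative):

#include <pthread.h>
#include <stdint.h>

struct endpoint_demo {
	pthread_mutex_t	lock;			/* endpoint outlives any controller "session" */
	uint32_t	num_connected_qps;
};

static void
endpoint_qp_connected_demo(struct endpoint_demo *ep)
{
	pthread_mutex_lock(&ep->lock);
	ep->num_connected_qps++;
	pthread_mutex_unlock(&ep->lock);
}

static void
endpoint_qp_disconnected_demo(struct endpoint_demo *ep)
{
	pthread_mutex_lock(&ep->lock);
	if (ep->num_connected_qps > 0) {
		ep->num_connected_qps--;
	}
	pthread_mutex_unlock(&ep->lock);
}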
Change-Id: Ie7060041a253604e7a2242813ec284eae46fe4e8
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6862
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Using nvmf_tcp_destroy() would destroy ttransport->lock, which hasn't
been initialized at that point.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Change-Id: Ie9ced97ef520236dddaa70453b6807e8382ce534
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7235
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The max_io_size transport option should be a power of 2 and >= 8KB.
The max data transfer size is defined in the NVMe-oF spec as 2^(MDTS cmd field) * 4KB.
The MDTS cmd field is calculated as spdk_u32log2(transport->opts.max_io_size / 4096),
so max_io_size < 8KB results in mdts=0, which means no size limit (according to the spec).
The user can set max_io_size = 0 explicitly to allow no size limit.
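A small worked example of that calculation (u32log2 is re-implemented locally so the snippet stands alone; it mirrors the formula above rather than the actual SPDK code):

#include <stdint.h>
#include <stdio.h>

static uint32_t
u32log2_demo(uint32_t x)
{
	uint32_t r = 0;

	while (x > 1) {
		x >>= 1;
		r++;
	}
	return r;
}

int main(void)
{
	uint32_t max_io_size = 131072;				/* transport option, 128KB */
	uint32_t mdts = u32log2_demo(max_io_size / 4096);	/* 5 -> 2^5 * 4KB = 128KB */

	printf("mdts=%u -> max transfer %u KB\n", mdts, (1u << mdts) * 4);
	/* max_io_size = 4096 would give mdts=0, i.e. "no limit" per the spec,
	 * which is why values below 8KB are now rejected. */
	return 0;
}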
Signed-off-by: Maciej Szulik <maciej.szulik@intel.com>
Change-Id: Id88a77efce5f217e1fc7750f61c0bd330aaa3791
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6384
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jacek Kalwas <jacek.kalwas@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
We could send a message to repeat the subsystem pause
and free a context that will be used later.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Ia5e8b0ff43f5e38bd8e659a8a64d42926e1d3c6e
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6661
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: <dongx.yi@intel.com>
With this change, each polling group will use one
accel_engine channel. This makes better use
of the underlying acceleration device.
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: Ibab183a1f65baff7e58529ee05e96b1b04731285
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/7055
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Actually we already do this when removing a memory region, but
the check for it is too strict; we should unmap queue pairs when
the queue pair lies within the memory region.
Change-Id: Ia646a0255e32ecdd0a70537a8011ce622eb59195
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6861
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The error response should be processed at the beginning of this
function.
Change-Id: Id583951c82981cf58984ab68b23ad6f7ea80cd3f
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6859
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
When starting a VM, there are error logs such as:
vfio_user.c: 510:acq_map: *ERROR*: Map ACQ failed, ACQ 3ffde000, errno -1
vfio_user.c:1043:map_admin_queue: *ERROR*: /var/run/muser/domain/muser1/1: failed to map CQ0: -1
vfio_user.c:1103:memory_region_add_cb: *NOTICE*: Failed to map SQID 1 0x3ffd8000-0x3ffdc000, will try again in next poll
These aren't really errors: when guest memory is hot added or removed from QEMU, the vfio-user
target will stop and unmap all queue pairs and then remap them, so let's use a more friendly
log level instead.
Also use a notice log when adding a listener.
Change-Id: Iaa4dc29e02523b5e85ec716d200ec355f8a575ed
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6650
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The nvmf_subsystem_remove_listener RPC handler may fail to remove
the listener (e.g. it doesn't exist), but in the error case we still call
spdk_nvmf_transport_stop_listen_async and send an error
response. In the completion callback passed to
spdk_nvmf_transport_stop_listen_async we try to send a
response again, but the response handler has already been
released and we dereference a NULL pointer.
The fix is to skip spdk_nvmf_transport_stop_listen_async
in the error case and continue with resuming the subsystem.
Fixes github issue #1821
Change-Id: I8d96b943cca25d9f95d19e8ea600242f019e6b21
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6699
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
When we iterate the qpairs that belong to a subsystem
and try to disconnect them, there is a chance that
some qpair may be disconnected at the transport level,
e.g. the initiator may receive a disconnect for
the first qpair and disconnect the others. That may lead
to an endless loop when we call spdk_nvmf_qpair_disconnect
with a callback: the callback is called immediately
and tries to disconnect the qpair again.
To solve this problem, move part of nvmf_poll_group_remove_subsystem
into another function, nvmf_poll_group_remove_subsystem_msg,
which disconnects all qpairs at once without any callback
and calls itself via spdk_thread_send_msg until all qpairs are
disconnected.
Fixes github issue #1780
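A hedged sketch of the resulting pattern (the context type and helpers are illustrative; only spdk_thread_send_msg/spdk_get_thread are assumed SPDK APIs):

#include "spdk/thread.h"

struct remove_subsystem_ctx_demo {
	void	*group;
	int	(*remaining_qpairs)(void *group);
	void	(*disconnect_all_once)(void *group);	/* no per-qpair callback */
	void	(*done)(void *group);
};

static void
remove_subsystem_msg_demo(void *arg)
{
	struct remove_subsystem_ctx_demo *ctx = arg;

	if (ctx->remaining_qpairs(ctx->group) > 0) {
		ctx->disconnect_all_once(ctx->group);
		/* re-check on a later message instead of recursing from a
		 * disconnect callback, which could loop forever */
		spdk_thread_send_msg(spdk_get_thread(), remove_subsystem_msg_demo, ctx);
		return;
	}
	ctx->done(ctx->group);
}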
Change-Id: I1000cda73e6164917fc13f7f374366af90571b99
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6597
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This change refactors the way the nvmf_get_stats RPC works.
The RPC layer passes the JSON write context to a custom dump function defined within the transport ops.
The RPC layer no longer needs to know the structure of the transport poll group statistics.
Functions and structures used in the previous flow have been deprecated and will be removed.
The JSON returned for the RDMA transport should be the same as before this change.
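A hedged sketch of what a transport-provided dump callback looks like under this scheme (the stats struct is illustrative; spdk_json_write_named_uint64 is the assumed JSON helper):

#include <stdint.h>
#include "spdk/json.h"

struct rdma_poll_group_stat_demo {
	uint64_t	polls;
	uint64_t	pending_data_buffer;
};

static void
poll_group_dump_stat_demo(const struct rdma_poll_group_stat_demo *stat,
			  struct spdk_json_write_ctx *w)
{
	/* the RPC layer only hands over the JSON write context; it never
	 * sees the transport's stats structure */
	spdk_json_write_named_uint64(w, "polls", stat->polls);
	spdk_json_write_named_uint64(w, "pending_data_buffer", stat->pending_data_buffer);
}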
Signed-off-by: Maciej Szulik <maciej.szulik@intel.com>
Change-Id: I03308c45be120793d316bf79814a1295afd9fb95
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6681
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
spdk_nvmf_tgt_listen() is deprecated, so move
the remaining instance to spdk_nvmf_tgt_listen_ext().
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I32b54e99f83fa10f1074f80aad82bb0608c9ae11
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6630
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
This statistic is incremented when we don't reap
anything from the CQ. Together with the total number
of polls it can be useful to estimate idle percentage.
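A minimal sketch of the accounting (names are illustrative):

#include <stdint.h>

struct cq_poll_stat_demo {
	uint64_t	polls;
	uint64_t	idle_polls;
};

static void
account_cq_poll_demo(struct cq_poll_stat_demo *stat, int reaped)
{
	stat->polls++;
	if (reaped == 0) {
		/* nothing reaped from the CQ: idle_polls / polls approximates
		 * the idle percentage */
		stat->idle_polls++;
	}
}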
Change-Id: I61b51d049b0bc506fb8a896e225187e46e75a564
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6295
Community-CI: Broadcom CI
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Previously we only processed Read/Write/Flush I/O commands; we should
not block the DSM command in the vfio-user layer if the backend block
device supports it.
Change-Id: Ia6b90397adcc36015f331f011a5bdf3e3d6562d8
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6525
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Add an nvmf_create_transport RPC parameter to configure the CQ size; this helps
if the user knows the CQ size needed, since iWARP doesn't support CQ resize.
Fixes issue #1747
Signed-off-by: Monica Kenguva <monica.kenguva@intel.com>
Change-Id: Ia9ba2b5f612993be27ebfa3455fb4fefd80ae738
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/6495
Community-CI: Broadcom CI
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>