Commit Graph

5906 Commits

Author SHA1 Message Date
Darek Stojaczyk
880ddb7436 vhost: prepare to add a separate cpl cb to foreach_session()
Currently vhost_dev_foreach_session() accepts a single
callback function both for iterating through all active
sessions and for signaling the end of iteration (called
one last time with the vsession param == NULL). Now that the
final signal has completely different semantics and is
called on a specific thread, it makes sense to put it in
a separate function.

In this patch we prepare separate functions for the final
call, but still call them in the original callback. In
a separate patch we'll start passing both functions
directly to foreach_session().
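
Roughly, the end state looks like the sketch below; the typedef and
function names are illustrative stand-ins, not the exact SPDK
declarations:

    struct spdk_vhost_dev;
    struct spdk_vhost_session;

    /* Called for each active session. */
    typedef int (*foreach_session_fn)(struct spdk_vhost_dev *vdev,
                                      struct spdk_vhost_session *vsession,
                                      void *arg);

    /* Called exactly once when the iteration finishes, replacing the old
     * "called last time with vsession == NULL" convention. */
    typedef void (*foreach_session_cpl_fn)(struct spdk_vhost_dev *vdev,
                                           void *arg);

    /* After the follow-up patch, both callbacks are passed directly. */
    void vhost_dev_foreach_session(struct spdk_vhost_dev *vdev,
                                   foreach_session_fn fn,
                                   foreach_session_cpl_fn cpl_fn,
                                   void *arg);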

Change-Id: I9f4338d9696f7bd15ca2d6655c6a3851569aff75
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466731
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-09-09 01:39:41 +00:00
Darek Stojaczyk
5d6361b5dd vhost/scsi: remove return code from remove_scsi_tgt()
The function could never fail, so make it return void
rather than int. This serves as cleanup.

Change-Id: I16a857ecee8d162f546fd097acaa2e66d51ebffa
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466730
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
2019-09-09 01:39:41 +00:00
Darek Stojaczyk
d4f7bf9cdd vhost: remove redundant vdev == NULL checks in foreach_session()
Historically the callbacks from vhost_dev_foreach_session()
could be called with the vdev argument == NULL, which meant
that the device was removed after enqueuing the event
and before consuming it. Now we keep track of pending
asynchronous operations on each vhost device and don't
allow removing it if there are any unconsumed events,
so the vdev == NULL checks are redundant. Remove them.

Change-Id: I7aa3785080d20ed06e008c081d3f37a949228f5a
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466729
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
2019-09-09 01:39:41 +00:00
Darek Stojaczyk
0cf5d5160b vhost: remove spdk_ prefix from private functions
Remove them all at once. The spdk_ prefix should only be
applied to publicly exported functions.

Change-Id: Ib6d2bd0954ec5cb7c8cf253d79b9d3cd8aa0eeef
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466728
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
2019-09-09 01:39:41 +00:00
Shuhei Matsumoto
9796768132 nvmf: Move pending_data_buf_queue to common struct spdk_nvmf_transport_poll_group
This unifies buffer management among transports further and is a
preparation to make buffer allocation asynchronous.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I8c588eeac4081f50fe32605feb7352f72c628d95
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466847
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Shuhei Matsumoto
cb5c661274 nvmf/fc: Move pending_data_buf_queue from fc_conn to fc_poll_group
The I/O buffer cache is per transport_poll_group now. Hence moving
pending_data_buf_queue from struct spdk_nvmf_fc_conn to struct
spdk_nvmf_fc_poll_group is reasonable, and this patch does that.

This change is based on RDMA and TCP transport.

Further unification among transports will be done in subsequent
patches.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ic857046be8da238cb3ff9e89b83cdac5f6349bcf
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466844
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Shuhei Matsumoto
2ed1b6c253 nvmf/fc: Use transport pointer stored in transport_poll_group
The pointer to the transport is set in struct nvmf_transport_poll_group
by nvmf_transport_poll_group_create() after
nvmf_fc_poll_group_create() returns. Hence use it and remove the
ftransport pointer from struct nvmf_fc_poll_group.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I9f2b2ade77afa18d0e97949fc0c2403eb000cdad
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467060
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Shuhei Matsumoto
b913e01644 nvmf/fc: Rename pointer to nvmf_fc_transport from fc_transport to ftransport
The RDMA transport uses rtransport and the TCP transport uses
ttransport, respectively. So the FC transport changes to use
ftransport instead of fc_transport.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I7d98eb2f6efbae7e2b4784f31b9de5e1a81bc2ac
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467059
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Shuhei Matsumoto
b9dc11f98d nvmf/fc: Rename transport_poll_group instance in nvmf_fc_poll_group to group
Both the RDMA and TCP transports use group for this case. Hence
the FC transport changes to use group instead of tp_poll_group.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ic4b401179da506bb204c3ec48650db87f91fe72a
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466843
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Shuhei Matsumoto
01df17d007 nvmf/fc: Use pointer stored in transport_poll_group and remove it from fc_poll_group
The pointer to nvmf_poll_group is set in nvmf_transport_poll_group_create()
after nvmf_fc_poll_group_create() returns. Hence holding it in
struct spdk_nvmf_fc_poll_group is redundant and it can be removed.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I7087c5cdb94b0b0c5f51b0b63b631c08266c90d0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466842
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Shuhei Matsumoto
99ea1d3612 nvmf/fc: Rename nvmf_fc_poll_group pointer held in struct to fgroup
The RDMA transport uses rgroup and the TCP transport uses tgroup
for this case. Hence the FC transport changes to use fgroup instead of
fc_poll_group.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I91b7ad6a1c6e45caf92801b0635b18d48b3c9810
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466841
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-09 00:42:22 +00:00
Konrad Sztyber
bd78196c09 lib/ftl: delay processing ANM events until initialization is completed
Start processing ANM events only after the device is fully initialized.
Otherwise some of the structures are partially filled and can be
interpreted incorrectly.

Change-Id: Ia741730cf15d44d76ce8afa7955e6a5bf42ca42b
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466935
Reviewed-by: Mateusz Kozlowski <mateusz.kozlowski@intel.com>
Reviewed-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-04 18:42:57 +00:00
Konrad Sztyber
a2714d414f lib/ftl: track number of pending write buffer entries
Track the number of acquired but not yet submitted write buffer entries
to be able to correctly calculate the required number of entries to be
padded.

Change-Id: Ie201681937ad1d03ec125aa5912311c54a7e35c9
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466934
Reviewed-by: Mateusz Kozlowski <mateusz.kozlowski@intel.com>
Reviewed-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-04 18:42:57 +00:00
Konrad Sztyber
cf3d42961b lib/ftl: flush the write buffer during nv_cache recovery
When recovering the data from the non-volatile cache, the data inside
the volatile cache needs to be flushed before flushing active bands.
Otherwise, if the number of blocks in a band is smaller than the number
of blocks inside the volatile cache, part of the data may not get
flushed.

Change-Id: I4e99709c8c2a526a928578870d7fbd5fef37db02
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466883
Reviewed-by: Mateusz Kozlowski <mateusz.kozlowski@intel.com>
Reviewed-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-04 18:42:57 +00:00
Seth Howell
20b35d769d nvmf: don't keep a global discovery log page.
Keeping a global discovery log page was meant to be a time-saving
mechanism, but in the current implementation it doesn't work properly
and can cause undesirable behavior and potential crashes. There are two
main problems with keeping a global log page.

1. Admin qpairs can be assigned to any SPDK thread. This means that when
multiple initiators connect to the target and request the discovery log,
they can both be running through the spdk_nvmf_ctrlr_get_log_page
function at the same time. If the discovery generation
counter is incremented while these accesses are occurring, it can cause
one or both of the threads to update the log at the same time. This
results in both threads trying to free the old log page (double free) and
set their own log as the new one (possible memory leak).

2. The second problem is that each host is supposed to get a unique
discovery log based on the subsystems to which it has access.
Currently the code relies on whether the discovery log page offset in
the request is equal to 0 to determine if it should load a new discovery
log page or use the cached one. This is inherently faulty because it
relies on an initiator-provided value to determine what information to
provide from the log page. An initiator could easily send a discovery
request with an offset greater than 0 on purpose to procure most of a
log page provided to another host.

Overall, I think it's safest to not cache the log page at all anymore
and rely on a fresh, thread-local log page each time.
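
A minimal sketch of the per-request approach described above (the host
context and build_discovery_log() helper are hypothetical, not the
actual SPDK code):

    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical stand-in that builds a discovery log containing only
     * the subsystems this particular host may see. */
    void *build_discovery_log(void *host_ctx, size_t *log_len);

    /* Build a fresh log for every Get Log Page command and copy only the
     * requested window, so no cached page is shared or freed across
     * threads and the offset cannot expose another host's data. */
    static void
    copy_discovery_log(void *host_ctx, void *buf, uint64_t offset,
                       uint32_t length)
    {
        size_t log_len, copy_len;
        void *log = build_discovery_log(host_ctx, &log_len);

        memset(buf, 0, length);
        if (offset < log_len) {
            copy_len = length < log_len - offset ? length : log_len - offset;
            memcpy(buf, (char *)log + offset, copy_len);
        }
        free(log);
    }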

Reported-by: Curt Bruns <curt.e.bruns@intel.com>

Change-Id: Ib048e26f139927d888fed7019e0deec346359582
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466839
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-09-03 00:30:59 +00:00
Darek Stojaczyk
be04cfc342 env_dpdk/memory: aggregate adjacent vfio mappings
In the past, memory in SPDK could be unregistered in
different chunks than it was registered in, so to account
for that the vtophys code used to register each hugepage
(2MB chunk of memory) separately with the VFIO driver. This
kept the code simple.

Now that memory in SPDK can only be unregistered in the same
chunks it was registered in, we no longer have to register
each hugepage with VFIO separately. We can register the
entire memory region with just a single VFIO ioctl instead,
so that's what we'll do now.

This serves as an optimization, as we obviously send fewer
ioctls now, but most importantly it prevents SPDK from
reaching the VFIO registration limit that was introduced
in Linux 5.1. [1]

The default limit is 65535, which results in SPDK being able to
make only the first 128GB of memory DMA-able. This is most
problematic for vhost, where we need to register the memory
of all the VMs.

Fixes #915

[1] 492855939bdb59c6f947b0b5b44af9ad82b7e38c
("vfio/type1: Limit DMA mappings per container")

Change-Id: Ida40306b2684e20daa2fd8d12e0df2eef5a4bff1
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/432442
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
2019-08-30 19:41:49 +00:00
Darek Stojaczyk
43f4e3932a env_dpdk/memory: implement contiguity check for vtophys map
We will now be able to check contiguity for regions
larger than 2MB.
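
The check itself is straightforward; the sketch below shows the idea,
with a hypothetical translate_2mb() standing in for the vtophys map
lookup:

    #include <stdbool.h>
    #include <stdint.h>

    #define VALUE_2MB (2ULL * 1024 * 1024)

    /* Hypothetical lookup of the physical address backing one 2MB map
     * entry. */
    uint64_t translate_2mb(uint64_t vaddr);

    /* A region larger than 2MB is contiguous if every following 2MB
     * entry translates to exactly 2MB past the previous one. */
    static bool
    is_physically_contiguous(uint64_t vaddr, uint64_t len)
    {
        uint64_t paddr = translate_2mb(vaddr);
        uint64_t off;

        for (off = VALUE_2MB; off < len; off += VALUE_2MB) {
            if (translate_2mb(vaddr + off) != paddr + off) {
                return false;
            }
        }
        return true;
    }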

Change-Id: I738ff451d534075c944972918d08e5e0cadea4f5
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466073
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-30 19:41:49 +00:00
Shuhei Matsumoto
0b068f8530 nvmf/rdma: Pass nvmf_request to nvmf_rdma_fill_buffers
Most variables related to the I/O buffers are in struct
spdk_nvmf_request now, so we can pass nvmf_request instead of
nvmf_rdma_request to nvmf_rdma_request_fill_buffers; this patch
does that.

Additionally, this patch uses the cached pointer to nvmf_request in
spdk_nvmf_rdma_request_fill_iovs, which is the caller of
nvmf_rdma_request_fill_buffers.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ia7664e9688bd9fa157504b4f5075f79759d0e489
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466212
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-30 16:56:46 +00:00
Shuhei Matsumoto
b4778363b4 nvmf/tcp: Pass nvmf_request to nvmf_tcp_req_fill_buffers
Most variables related to the I/O buffers are in struct
spdk_nvmf_request now, so we can pass nvmf_request instead of
nvmf_tcp_req to nvmf_tcp_req_fill_buffers; this patch does that.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I00eff578a98891e99fcb9a3aafa3d99126d6f1c1
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466089
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-30 16:56:46 +00:00
Shuhei Matsumoto
90a2be2006 nvmf/fc: Pass nvmf_request to nvmf_fc_request_fill_buffers
Most variables related to the I/O buffers are in struct
spdk_nvmf_request now, so we can pass nvmf_request instead of
nvmf_fc_request to nvmf_fc_request_fill_buffers; this patch does that.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ibe87e7641e5c364b20a6d877ce7928c612b0b83a
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466088
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-30 16:56:46 +00:00
Shuhei Matsumoto
9412a8370d nvmf/fc: Use STAILQ for pending_data_buf_queue
This is a small performance optimization and an effort to unify I/O
buffer management further among transports.

It is ensured that the request is at the head of the STAILQ when
nvmf_fc_request_execute() completes successfully.

Hence change TAILQ_REMOVE to STAILQ_REMOVE_HEAD for this case.
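
As an illustration of why the head removal is cheaper (the struct names
below are placeholders, not the actual FC request types):

    #include <stddef.h>
    #include <sys/queue.h>

    struct request {
        STAILQ_ENTRY(request) link;
    };

    STAILQ_HEAD(pending_queue, request);

    /* The request being completed is guaranteed to sit at the head, so an
     * O(1) STAILQ_REMOVE_HEAD() replaces the old TAILQ_REMOVE(); removing
     * an arbitrary element from a singly linked STAILQ would have to walk
     * the list from the head. */
    static struct request *
    dequeue_head(struct pending_queue *q)
    {
        struct request *req = STAILQ_FIRST(q);

        if (req != NULL) {
            STAILQ_REMOVE_HEAD(q, link);
        }
        return req;
    }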

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: If982842bf53ba00426a854a18eaadf8a1b8d642d
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466676
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-30 16:56:46 +00:00
Shuhei Matsumoto
6c8b297262 nvmf/fc: Rename pending_queue to pending_data_buf_queue
This is an effort to unify I/O buffer management further among
transports. The RDMA and TCP transports have named their pending queue
pending_data_buf_queue, so the FC transport follows them.

The next patch will change pending_data_buf_queue to use STAILQ
instead of TAILQ.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I57c3c678a1e92ec262eb8940418529a62b6768c3
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466675
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-30 16:56:46 +00:00
Shuhei Matsumoto
2bc819dd52 nvmf/tcp: Use STAILQ for queued_c2h_data_tcp_req and pending_data_buf_queue
This is a small performance optimization and an effort to unify
I/O buffer management further among transports.

It is ensured that the request is at the head of the STAILQ when
spdk_nvmf_tcp_send_c2h_data() is called or when the
TCP_REQUEST_STATE_NEED_BUFFER case is executed in spdk_nvmf_tcp_req_process().

Hence change TAILQ_REMOVE to STAILQ_REMOVE_HEAD for these two cases.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I0b195874ac22a8d5ecfb283a9865d2615b7d5912
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466637
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-08-30 16:56:46 +00:00
Darek Stojaczyk
4368937b4e bdev/part: remove thread safety from part_construct()
spdk_bdev_part_construct() must now be called on the
same thread that called spdk_bdev_part_base_construct().

This was always the case so far and I don't see any other
case where thread safety could be useful, so just remove
it. The doxygen comments don't say anything about it either.

Even in the GPT case, we create a base directly as part of
examine and then create part bdevs in the spdk_bdev_read()
completion callback, but that callback will always be
executed on the same thread that issued the read.

Change-Id: I752f2a7f08c9faf4231ed53a46b700b33fa13697
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466024
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2019-08-30 15:48:02 +00:00
Ziye Yang
5e7b8d18f3 nvmf/tcp: Remove the potential pdu hdr memory copy.
In this patch, we point hdr_p directly at the memory
owned by the pdu_recv_buf to avoid a memory copy.

Change-Id: Iee0dd98058928f429bf7ad22103cd4826226400f
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465158
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-30 02:25:22 +00:00
Seth Howell
7463b0dea3 mk: standardize DIRS-x assignments.
Most of the assignments followed the DIRS-($(CONFIG_X)) pattern, but
there were a couple of assignments using a different pattern.

Change-Id: I7c80fec2813c32cb7676912d72805565f77b2e3d
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466469
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-29 20:16:18 +00:00
Shuhei Matsumoto
8a80461ac6 nvmf/tcp: execute buffer allocation only if request is the first of pendings
The RDMA transport executes spdk_nvmf_rdma_request_parse_sgl() only if
the request is the first of the pending requests, in the
RDMA_REQUEST_STATE_NEED_BUFFER case of the state machine
spdk_nvmf_rdma_requests_process().

This made it possible for the RDMA transport to use a STAILQ for pending
requests, because STAILQ_REMOVE walks from the head and is slow when the
target is in the middle of the STAILQ.

On the other hand, the TCP transport executes spdk_nvmf_tcp_req_parse_sgl()
even if the request is in the middle of the pending requests, in the
TCP_REQUEST_STATE_NEED_BUFFER case of the state machine
spdk_nvmf_tcp_req_process(), if the request has in-capsule data.

Hence the TCP transport has used a TAILQ for pending requests.

This patch removes the in-capsule data condition from the
TCP_REQUEST_STATE_NEED_BUFFER case.

The purpose of this patch is to unify I/O buffer management further.

No performance degradation was observed after this patch.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Idc97fe20f7013ca66fd58587773edb81ef7cbbfc
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466636
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-29 18:17:38 +00:00
Shuhei Matsumoto
0f73c253b5 nvmf/fc: Replace FC specific get/free_buffers by common APIs
Use spdk_nvmf_request_get_buffers() and spdk_nvmf_request_free_buffers(),
and then remove nvmf_fc_request_free_buffers() and nvmf_fc_request_get_buffers().

Set fc_req->data_from_pool to false after spdk_nvmf_request_free_buffers().

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I046a642156411da3935bc2fa2c2816fc2e025147
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465877
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-29 18:17:38 +00:00
Shuhei Matsumoto
9968035884 nvmf/tcp: Replace TCP specific get/free_buffers by common APIs
Use spdk_nvmf_request_get_buffers() and spdk_nvmf_request_free_buffers(),
and then remove spdk_nvmf_tcp_request_free_buffers() and
spdk_nvmf_tcp_request_get_buffers().

Set tcp_req->data_from_pool to false after spdk_nvmf_request_free_buffers().

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I286b48149530c93784a4865b7215b5a33a4dd3c3
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465876
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-29 18:17:38 +00:00
Shuhei Matsumoto
85b9e716e9 nvmf/rdma: Replace RDMA specific get/free_buffers by common APIs
Use spdk_nvmf_request_get_buffers() and spdk_nvmf_request_free_buffers(),
and then remove spdk_nvmf_rdma_request_free_buffers() and
nvmf_rdma_request_get_buffers().

Set rdma_req->data_from_pool to false after
spdk_nvmf_request_free_buffers().

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ie1fc4c261c3197c8299761655bf3138eebcea3bc
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465875
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-29 18:17:38 +00:00
Shuhei Matsumoto
cc4d1f82cc nvmf: Add spdk_nvmf_request_get/free_buffers() usable among transports
This patch adds new APIs, spdk_nvmf_request_get_buffers() and
spdk_nvmf_request_free_buffers(), to be used among transports.
Subsequent patches will replace the transport-specific APIs with them.
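
The shared pattern these helpers capture is sketched below with
illustrative field and parameter names (the real SPDK structures
differ): satisfy a request from the poll group's per-thread buffer
cache first and fall back to the transport-wide mempool:

    #include <stdint.h>
    #include <sys/queue.h>
    #include "spdk/env.h"

    struct buf_cache_entry {
        STAILQ_ENTRY(buf_cache_entry) link;
    };

    STAILQ_HEAD(buf_cache, buf_cache_entry);

    static void *
    get_one_buffer(struct buf_cache *cache, uint32_t *cache_count,
                   struct spdk_mempool *pool)
    {
        struct buf_cache_entry *entry = STAILQ_FIRST(cache);

        if (entry != NULL) {
            /* Reuse a buffer cached on this poll group's thread. */
            STAILQ_REMOVE_HEAD(cache, link);
            (*cache_count)--;
            return entry;
        }
        /* Cache empty: fall back to the shared data buffer pool. */
        return spdk_mempool_get(pool);
    }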

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ib153e2c5806b7276915a0aa91179fe9dbcb2a1f0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465874
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-29 18:17:38 +00:00
Shuhei Matsumoto
005b053a02 nvmf: Move data_from_pool flag to common struct spdk_nvmf_request
This is a preparation to unify buffer management among transports.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I6b1c208207ae3679619239db4e6e9a77b33291d0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466002
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-29 18:17:38 +00:00
Shuhei Matsumoto
04ae83ec93 nvmf: Move allocated buffer pointers to common struct spdk_nvmf_request
This is a preparation to unify buffer management among transports.
struct spdk_nvmf_request already has SPDK_NVMF_MAX_SGL_ENTRIES (16) * 2
iovecs, so doubling the number of buffers to match is not a problem.
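
After this patch and the previous one, the buffer-related part of the
request looks roughly like the sketch below (member names abbreviated,
all other fields omitted):

    #include <stdbool.h>
    #include <stdint.h>
    #include <sys/uio.h>

    #define SPDK_NVMF_MAX_SGL_ENTRIES 16

    /* Buffer-related members now shared by all transports; the buffers[]
     * array is sized to match the existing iovec array. */
    struct nvmf_request_buffers_sketch {
        uint32_t     iovcnt;
        struct iovec iov[SPDK_NVMF_MAX_SGL_ENTRIES * 2];
        void        *buffers[SPDK_NVMF_MAX_SGL_ENTRIES * 2];
        bool         data_from_pool;
    };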

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Idb525abbf35dc9f4b8547b785b5dfa77d106d8c9
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465873
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-29 18:17:38 +00:00
Mateusz Kozlowski
a3b7ae8ab6 lib/ftl: IO channel handling for nv cache
Moved scrubbing of the nv cache to the core thread. Added an IO channel
which is used in user context during shutdown of the nv cache.

Signed-off-by: Mateusz Kozlowski <mateusz.kozlowski@intel.com>
Change-Id: I88e680324e361bf7e0c0a9a9d29323f179c56e3b
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465932
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-08-29 18:10:19 +00:00
Evgeniy Kochetov
01887d3c96 nvmf/rdma: Fix data WR release
One of the stop conditions in the data WR release function was wrong.
This could cause the release of uncompleted data WRs. Releasing WRs
that are not yet completed leads to various side effects, up to data
corruption.

The issue was introduced with the send WR batching feature in commit
9d63933b7f.

This patch fixes the stop condition and contains some refactoring to
simplify the WR release function.

Signed-off-by: Evgeniy Kochetov <evgeniik@mellanox.com>
Signed-off-by: Sasha Kotchubievsky <sashakot@mellanox.com>
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com>
Change-Id: Ie79f64da345e38038f16a0210bef240f63af325b
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/466029
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Seth Howell <seth.howell@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-29 18:09:14 +00:00
Ziye Yang
d50736776c nvmf/tcp: Use a big buffer for PDU receiving.
Purpose: Reduce the number of recv/readv system calls.
Method: Use a big recv buffer to conduct the read.
Though this introduces an additional buffer copy,
we hope that the overhead introduced by the copy will
be smaller than the overhead of frequent recv/readv system calls.
The design is a trade-off between the two.
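
Conceptually the change amounts to the sketch below; the buffer size and
names are illustrative, and plain read() stands in for the socket layer:

    #include <stdint.h>
    #include <unistd.h>

    struct recv_buf {
        uint8_t  data[8192];
        uint32_t start;  /* first byte not yet consumed by PDU parsing */
        uint32_t end;    /* one past the last byte received so far */
    };

    /* One large read can pull in several PDUs at once, replacing several
     * small recv/readv calls at the cost of copying each PDU out of the
     * staging buffer afterwards. */
    static ssize_t
    fill_recv_buf(int sock, struct recv_buf *buf)
    {
        ssize_t rc = read(sock, buf->data + buf->end,
                          sizeof(buf->data) - buf->end);

        if (rc > 0) {
            buf->end += (uint32_t)rc;
        }
        return rc;
    }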

Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Change-Id: I9286fd9cec0b512cea8e3f2c335c5bf862b98573
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/464842
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-28 15:38:02 +00:00
Ziye Yang
ea5ad0b286 nvme/tcp: Change hdr in nvme_tcp_pdu to pointer
Purpose: Prepare for further optimization on the
target side when receiving PDU headers, where we expect
to use zero copy.
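
Simplified, the change is the difference between the two layouts below
(reduced here to a raw byte array; the real field is a union of the PDU
header types, and the size is illustrative):

    #include <stdint.h>

    /* Before: header bytes are copied out of the receive buffer into the
     * PDU structure. */
    struct pdu_embedded_hdr {
        uint8_t hdr[128];
    };

    /* After: the PDU keeps only a pointer, which the target can aim
     * directly at the bytes already sitting in the shared receive
     * buffer. */
    struct pdu_hdr_pointer {
        uint8_t *hdr;
    };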

Change-Id: Iae7f9106844736d7160d39d0af1f5941084422ec
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465380
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
2019-08-28 15:38:02 +00:00
Tomasz Zawadzki
1428692e1a lib/event: remove app.c dependency from loading json_config
Originally, loading a json_config using spdk_app_json_config_load_subsystem()
implied issuing the start_subsystem_init RPC. This required a workaround
in the RPC callback spdk_rpc_start_subsystem_init_cpl(), in order
to skip starting the app in the json_config load path.

This made it difficult to load a json_config without implicitly using the
rest of the event framework. That will be useful, for example, in the
fio_plugin, which does not use the app.c API.

With the change in this patch, the json_config load path directly calls
spdk_subsystem_init().
Meanwhile, the start_subsystem_init RPC no longer needs a workaround
for the json_config load path.

Change-Id: I535e079339cedaf0950767a8204002ab5885d8a5
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/463978
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-08-28 15:26:12 +00:00
Tomasz Zawadzki
b85881ec7c lib/event: remove app.c dependency from subsystem initialization
This change adds a return code to spdk_subsystem_init(),
making its caller responsible for handling application
state, such as calling spdk_app_stop().

This change implies that the start_subsystem_init RPC does not
stop the application on failure; it only reports back the error.

Renamed the g_app_start/stop variables to the now more relevant
g_subsystem_start/stop.

Change-Id: I66a7da6ecfb234a569c65279cc4b210ddac53d2a
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/464412
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-08-28 15:26:12 +00:00
Pawel Kaminski
139e4c0783 lib/rpc: Add include_aliases flag to rpc_get_methods implementation.
When getting the list of available RPCs from a tool like rpc.py,
aliases should often be hidden.
However, when getting the list of RPCs available
for loading from a JSON config file, aliases should be included.

Change-Id: Ie22d8b0ec2515d37dbfadf01b5cb709c160beb3e
Signed-off-by: Pawel Kaminski <pawelx.kaminski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465656
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2019-08-28 15:19:17 +00:00
Konrad Sztyber
1f133d7279 lib/ftl: track defragged bands in ftl_reloc
Track the band under defrag inside the reloc module.  This allows
multiple bands to be defragged at the same time (e.g. an extra one due
to a write fault) and makes it easier to handle cases where a band
being relocated has no valid blocks.

Change-Id: Ia54916571040f5f4dfdb8f7cdb47f28435a466d8
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465937
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Mateusz Kozlowski <mateusz.kozlowski@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-08-27 18:55:40 +00:00
Seth Howell
407e88fd2a lib/mk: update OCF build.
The OCF build was broken by some of the recent changes
to the Makefiles. This change aims to fix that by separating out the ocf
environment from the ocf bdev.

Change-Id: Id445340033898e9ae70a4bcfc799951110762d55
Signed-off-by: Seth Howell <seth.howell@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465808
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2019-08-27 18:49:56 +00:00
Maciej Szwed
79ed1ba18d bdev: Add spdk_bdev_open_ext function
This patch adds a new interface for opening a bdev and
implements a new-style remove event. With this change,
the user can be notified about different types of events
that occur with regard to the bdev. The spdk_bdev_open_ext
function takes a bdev name as an argument instead of a bdev
structure, which removes the race condition where the user gets
the bdev structure and the bdev is removed after getting
that structure but before the open function is called.

spdk_bdev_open is now deprecated.
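
A minimal usage sketch of the new open path (callback body and error
handling elided):

    #include "spdk/bdev.h"

    /* Invoked for bdev events such as hot removal; a real callback would
     * close the descriptor stored in event_ctx. */
    static void
    my_bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
                     void *event_ctx)
    {
        if (type == SPDK_BDEV_EVENT_REMOVE) {
            /* Tear down I/O channels and close the descriptor here. */
        }
    }

    static int
    open_by_name(const char *name, struct spdk_bdev_desc **desc)
    {
        /* The name is resolved and the bdev opened in one step, so there
         * is no window in which the bdev can disappear between lookup
         * and open. */
        return spdk_bdev_open_ext(name, true /* write */, my_bdev_event_cb,
                                  NULL, desc);
    }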

Signed-off-by: Maciej Szwed <maciej.szwed@intel.com>
Change-Id: I44ebeb988bc6a2f441fc6a0c38a30668aad999ad
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/455647
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
2019-08-26 19:07:56 +00:00
Shuhei Matsumoto
eab7360bcb nvmf/tcp: Factor out getting and filling buffers from nvmf_tcp_req_fill_iovs
This follows the practice of the RDMA transport and is a preparation to
unify buffer allocation among transports.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ib85625f2a0eca01ef4028685dd838d6c41faad7b
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465872
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
72c10f7094 nvmf/tcp: Use spdk_mempool_get_bulk in nvmf_tcp_req_fill_iovs
This follows the practice of the RDMA transport and is a preparation to
unify buffer management among transports.
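
For reference, the bulk call is all-or-nothing, so a failed attempt
leaves the pool untouched; a rough sketch of the pattern (error handling
illustrative):

    #include <errno.h>
    #include <stddef.h>
    #include "spdk/env.h"

    /* Grab all of a request's data buffers in one call; on failure
     * nothing is taken, so the request can simply stay on the pending
     * queue and be retried later. */
    static int
    get_io_buffers(struct spdk_mempool *pool, void *bufs[], size_t num)
    {
        if (spdk_mempool_get_bulk(pool, bufs, num) != 0) {
            return -ENOMEM;
        }
        return 0;
    }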

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I4e9b81b2bec813935064a6d49109b6a0365cb950
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465871
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
8aac212005 nvmf/tcp: Pass number of allocated buffers as param to nvmf_tcp_request_free_buffers
This is a preparation for the next patch, which uses spdk_mempool_get_bulk.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I28a5ad941004f139c9032d85c2ef92680081f1ce
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465870
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
5437470cdc nvmf/fc: Factor out getting and filling buffers from nvmf_fc_request_alloc_buffers
This follows the practice of the RDMA transport and is a preparation to
unify buffer allocation among transports.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I3cd4377ae31e47bbde697837be2d9bc1b1b582f1
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465869
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
71ae39594f nvmf/fc: Use buffer cache in nvmf_fc_request_alloc/free_buffers
The FC transport can now use the buffer cache in the same way as the
RDMA and TCP transports. The next patch will factor out getting buffers
and filling them into iovs in nvmf_fc_request_alloc_buffers().

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I0d7b4552f6ba053ba8fb5b3ca8fe7657b86f9984
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465868
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
fbb0f0faf9 nvmf/fc: Pass transport and num_buffers as params to nvmf_fc_request_free_buffers
This is a preparation for the next patch, which uses the buffer cache in
the FC transport.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: I116b064ea0b0a437f9a3293a6f3d46a0e5fc8ecf
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465867
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-26 19:04:24 +00:00
Shuhei Matsumoto
e3b8c31d03 nvmf/fc: Use spdk_mempool_get_bulk in nvmf_fc_request_alloc_buffers
This follows the practice of the RDMA transport and is a preparation to
unify buffer management among transport types.

Signed-off-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Change-Id: Ic7dc8e6b826baf7f471d192630e8a048a35056ac
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/465866
Reviewed-by: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Anil Veerabhadrappa <anil.veerabhadrappa@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2019-08-26 19:04:24 +00:00