bdev_virtio: added doc page

Change-Id: Ia88ae52117068ac395dad9ad3d7ac818e41077fb
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/380956
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Dariusz Stojaczyk, 2017-10-02 19:31:06 +02:00, committed by Daniel Verkamp
commit 71ea826507, parent 4a3ef93344
5 changed files with 77 additions and 1 deletion


@@ -15,6 +15,7 @@ The development kit currently includes:
* [NVMe over Fabrics target](http://www.spdk.io/doc/nvmf.html)
* [iSCSI target](http://www.spdk.io/doc/iscsi.html)
* [vhost target](http://www.spdk.io/doc/vhost.html)
* [Virtio-SCSI driver](http://www.spdk.io/doc/virtio.html)
# In this readme:


@@ -799,7 +799,8 @@ INPUT = ../include/spdk \
nvme-cli.md \
nvmf.md \
vagrant.md \
vhost.md \
virtio.md
# This tag can be used to specify the character encoding of the source files
# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses


@@ -142,6 +142,38 @@ Configuration file syntax:
This exports 1 rbd block device, named Ceph0.
## Virtio SCSI {#bdev_config_virtio_scsi}
The SPDK Virtio SCSI driver allows creating SPDK block devices from Virtio SCSI LUNs.
Use the following configuration file snippet to bind all available Virtio-SCSI PCI
devices on a virtual machine. The driver will perform a target scan on each device
and automatically create a block device for each LUN.
~~~
[VirtioPci]
# If enabled, the driver will automatically use all available Virtio-SCSI PCI
# devices. Disabled by default.
Enable Yes
~~~
The driver also supports connecting to vhost-user devices exposed on the same host.
In the following example, the host application has created a vhost-scsi controller which is
accessible through the /tmp/vhost.0 Unix domain socket.
~~~
[VirtioUser0]
# Path to the Unix domain socket using vhost-user protocol.
Path /tmp/vhost.0
# Maximum number of request queues to use. Default value is 1.
Queues 1
#[VirtioUser1]
#Path /tmp/vhost.1
~~~
Each Virtio-SCSI device may export up to 64 block devices, named VirtioScsi0t0 through VirtioScsi0t63.
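As an illustration, the two modes might be combined in a single configuration file. This is a
sketch only: it reuses the section and key names documented above, while the coexistence of
[VirtioPci] and [VirtioUser*] sections in one file, and the numbering of bdevs from additional
controllers, are assumptions rather than guarantees of this page.
~~~
# Hedged example: bind every Virtio-SCSI PCI device and also connect to
# one local vhost-user controller. Only keys shown in the snippets above
# are used; the socket path is just an example.
[VirtioPci]
Enable Yes

[VirtioUser0]
Path /tmp/vhost.0
Queues 1
~~~
Each discovered controller is then expected to expose its own VirtioScsi<N>t<M> bdevs, with the
controller index <N> presumably assigned by the driver in discovery order.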
## GPT (GUID Partition Table) {#bdev_config_gpt}
The GPT virtual bdev driver examines all bdevs as they are added and exposes partitions


@@ -31,6 +31,7 @@
- @ref blob
- @ref blobfs
- @ref vhost
- @ref virtio
# Tools {#tools}

doc/virtio.md (new file, 41 lines)

@@ -0,0 +1,41 @@
# Virtio SCSI driver {#virtio}
# Introduction {#virtio_intro}
The Virtio SCSI driver is an initiator for the SPDK @ref vhost application. It
allows any SPDK app to connect to another SPDK instance exposing
a vhost-scsi device. The driver will enumerate targets on the device (which acts
as a SCSI controller) and create *virtual* bdevs usable by any SPDK application.
Sending an I/O request to a Virtio SCSI bdev puts the request data into
a Virtio queue that is processed by the host SPDK app exposing the
controller. After completing the I/O on the underlying drive, the host puts the
response back into the Virtio queue, where it is picked up by the Virtio SCSI
driver.
The driver, just like SPDK @ref vhost, uses pollers instead of standard
interrupts to check for I/O responses. This bypasses the kernel interrupt and
context-switching overhead of QEMU and the guest kernel, significantly boosting
overall I/O performance.
The Virtio SCSI driver supports two different usage models, illustrated by the
configuration sketch after this list:
* PCI - This is the standard mode of operation when used in a guest virtual
machine, where QEMU has presented the virtio-scsi controller as a virtual
PCI device.
* User vhost - Can be used to connect to a vhost-scsi socket directly on the
same host.
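For reference, minimal configuration snippets for the two modes are documented in
@ref bdev_config_virtio_scsi; a condensed sketch is repeated here (the socket path
is only an example):
~~~
# PCI mode: use every Virtio-SCSI PCI device presented to the guest.
[VirtioPci]
Enable Yes

# User vhost mode: connect to a vhost-scsi controller exposed over a
# Unix domain socket on the same host.
[VirtioUser0]
Path /tmp/vhost.0
~~~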
# Multiqueue {#virtio_multiqueue}
The Virtio SCSI controller automatically manages virtqueue distribution.
Currently, each thread doing I/O on a single bdev gets an exclusive queue.
Multi-threaded I/O on bdevs from a single Virtio-SCSI controller is not supported.
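A hedged sketch of how additional queues might be requested via the `Queues` key
from @ref bdev_config_virtio_scsi follows; the value 4 is arbitrary, and it is an
assumption here that the host-side vhost-scsi controller actually provides that
many request queues:
~~~
[VirtioUser0]
Path /tmp/vhost.0
# Ask for up to 4 request queues. Per the note above, each thread doing
# I/O gets an exclusive queue, so a single-queue device limits I/O to a
# single thread (see also the Limitations section).
Queues 4
~~~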
# Limitations {#virtio_limitations}
The Virtio SCSI driver is still experimental. The current implementation has a
number of limitations:
* supports only up to 8 hugepages (which means only 1 GB hugepages are practical)
* single LUN per target
* only SPDK vhost-scsi controllers supported
* no RPC
* no multi-threaded I/O for single-queue virtio devices