doc/vhost: add vhost-nvme target description

Change-Id: I633e2296ae80c213e0e27fd669e184eadb9978ed
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-on: https://review.gerrithub.io/396090
Reviewed-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
commit b4ce4e378c (parent ef4a5bc922)

 doc/vhost.md | 42 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 42 insertions(+)
@@ -79,6 +79,15 @@ the following command to confirm your QEMU supports userspace vhost-blk.
qemu-system-x86_64 -device vhost-user-blk-pci,help
~~~

The userspace vhost-nvme target was added as an experimental feature in the SPDK 18.04
release; patches for QEMU are available in SPDK's QEMU repository only.

Run the following command to confirm your QEMU supports userspace vhost-nvme.

~~~{.sh}
qemu-system-x86_64 -device vhost-user-nvme,help
~~~

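If the `vhost-user-nvme` device is not recognized, QEMU has to be built from SPDK's QEMU
fork first. The following is only a rough sketch (it assumes the fork at
https://github.com/spdk/qemu; the branch carrying the vhost-user-nvme patches depends on
your SPDK release, so the checkout target below is a placeholder):

~~~{.sh}
# Sketch: build QEMU from SPDK's fork, which carries the vhost-user-nvme patches.
# <spdk-branch> is a placeholder - pick the branch that matches your SPDK release.
git clone https://github.com/spdk/qemu
cd qemu
git checkout <spdk-branch>
./configure --target-list=x86_64-softmmu
make -j$(nproc)
~~~
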
# Starting SPDK vhost target {#vhost_start}

First, run the SPDK setup.sh script to set up some hugepages for the SPDK vhost target
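
As a point of reference, a minimal hugepage setup might look like the sketch below; the
`HUGEMEM` value of 4096 MiB is only an example and should be sized for your workload:

~~~{.sh}
# Sketch: allocate hugepages (HUGEMEM is given in MiB) and bind devices for SPDK.
sudo HUGEMEM=4096 scripts/setup.sh
~~~
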
@@ -176,6 +185,26 @@ extra `-r` or `--readonly` parameter.
scripts/rpc.py construct_vhost_blk_controller --cpumask 0x1 -r vhost.1 Malloc0
~~~

### Vhost-NVMe (experimental)

The following RPC will attach the Malloc0 bdev to the vhost.2 vhost-nvme
controller. Malloc0 will appear as Namespace 1 of the vhost.2 controller. Users
can use the `--cpumask` parameter to specify which cores should be used for
this controller. Users must specify the maximum number of I/O queues supported
by the controller, and at least one namespace is required for each controller.

~~~{.sh}
scripts/rpc.py construct_vhost_nvme_controller --cpumask 0x1 vhost.2 16
scripts/rpc.py add_vhost_nvme_ns vhost.2 Malloc0
~~~

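Note that Malloc0 must already exist before `add_vhost_nvme_ns` is called. The sketch
below shows one way to create such a bdev and then verify the resulting configuration;
the 64 MiB size and 512-byte block size are arbitrary example values:

~~~{.sh}
# Sketch: create a 64 MiB malloc bdev with 512-byte blocks (run before add_vhost_nvme_ns).
scripts/rpc.py construct_malloc_bdev -b Malloc0 64 512
# List all configured vhost controllers to confirm the namespace was attached.
scripts/rpc.py get_vhost_controllers
~~~
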
Users can use the following command to remove the controller. All the block
devices attached to the controller's namespaces will be removed automatically.

~~~{.sh}
scripts/rpc.py remove_vhost_controller vhost.2
~~~

## QEMU {#vhost_qemu_config}

Now the virtual machine can be started with QEMU. The following command-line
@@ -214,6 +243,13 @@ Finally, specify the SPDK vhost devices:
-device vhost-user-blk-pci,id=blk0,chardev=char1
~~~

### Vhost-NVMe (experimental)

~~~{.sh}
-chardev socket,id=char2,path=/var/tmp/vhost.2
-device vhost-user-nvme,id=nvme0,chardev=char2,num_io_queues=4
~~~

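Putting it together, the relevant part of a QEMU invocation might look like the sketch
below. The guest memory must be backed by shared hugepages for any vhost-user device;
the object/chardev IDs, the 1G size, and the guest image name are placeholders:

~~~{.sh}
# Sketch only: IDs, sizes and the guest image path are placeholders.
qemu-system-x86_64 -m 1G -cpu host -enable-kvm \
  -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem0 \
  -drive file=guest_os_image.qcow2,if=none,id=disk0 \
  -device ide-hd,drive=disk0,bootindex=0 \
  -chardev socket,id=char2,path=/var/tmp/vhost.2 \
  -device vhost-user-nvme,id=nvme0,chardev=char2,num_io_queues=4
~~~
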
## Example output {#vhost_example}

This example uses an NVMe bdev alongside Malloc bdevs. The SPDK vhost application is started
@@ -376,6 +412,12 @@ scripts/rpc.py delete_bdev Malloc0

# Known bugs and limitations {#vhost_bugs}

## Vhost-NVMe (experimental) is only supported with recent Linux guest kernels

The vhost-NVMe target relies on a new feature of the NVMe 1.3 specification, the Doorbell
Buffer Config Admin command, which is used for emulated NVMe controllers only. Linux 4.12
added support for this command, so a guest kernel of version 4.12 or newer is required to
test this feature.

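Inside the guest, the kernel version and the presence of the emulated controller can be
checked quickly; this is just a sketch and the exact `lspci` output will vary:

~~~{.sh}
# Run inside the guest: the kernel must be 4.12 or newer.
uname -r
# The vhost-nvme controller shows up as a regular NVMe PCI device.
lspci | grep -i "non-volatile memory"
lsblk
~~~
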
## Windows virtio-blk driver before version 0.1.130-1 only works with 512-byte sectors

The Windows `viostor` driver before version 0.1.130-1 is buggy and does not