doc/vhost: minor wording tweaks and cleanup
Tweak some wording to be clearer, and add newlines after each section
header for consistency.

Change-Id: I186c7d81b511798838c940c00a18571a08d7fe61
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/368209
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
parent 12d75f6323
commit c35586fe6c

doc/vhost.md (110 lines changed)
@@ -2,62 +2,55 @@
# vhost Getting Started Guide {#vhost_getting_started}

The Storage Performance Development Kit vhost application is named `vhost`.
This application extends SPDK to present virtio storage controllers to QEMU-based
VMs and process I/O submitted to devices attached to those controllers.

# Prerequisites {#vhost_prereqs}

The base SPDK build instructions are located in README.md in the SPDK root directory.
This guide assumes familiarity with building SPDK using the default options.

## Supported Guest Operating Systems

The guest OS must contain virtio drivers. The SPDK vhost target has been tested
with Ubuntu 16.04, Fedora 25, and Windows Server 2012 R2.

# Building

## SPDK

The vhost target is built by default.
Once built, the binary will be at `app/vhost/vhost`.

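As a quick check, a default build from the SPDK root directory should produce the binary at that path (a minimal sketch, assuming the default `CONFIG` options):

~~~
make                  # the vhost target is built by default
ls app/vhost/vhost    # the resulting vhost binary
~~~
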
## QEMU

Vhost functionality is dependent on QEMU patches to enable virtio-scsi and
virtio-blk in userspace - those patches are currently working their way
through the QEMU mailing list, but temporary patches to enable this
functionality are available in the spdk branch at https://github.com/spdk/qemu.

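For example, the patched tree can be fetched directly from that branch (a sketch; the build itself follows the usual QEMU procedure):

~~~
git clone -b spdk https://github.com/spdk/qemu
~~~
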
# Configuration {#vhost_config}

## SPDK

A vhost-specific configuration file is used to configure the SPDK vhost
target. A fully documented example configuration file is located at
`etc/spdk/vhost.conf.in`. This file defines the following:

### Storage Backends

Storage backends are devices which will be exposed to the guest OS.
Vhost-blk backends are exposed as block devices in the guest OS, and vhost-scsi backends are
exposed as SCSI LUNs on devices attached to the vhost-scsi controller in the guest OS.
SPDK supports several different types of storage backends, including NVMe,
Linux AIO, malloc ramdisk, and Ceph RBD. Refer to @ref bdev_getting_started for
additional information on specifying storage backends in the configuration file.

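As an illustration, a malloc ramdisk backend might be declared like this (a sketch; the section and parameter names are modeled on the example configuration files and are assumptions here, not taken from the excerpt above):

~~~
[Malloc]
  NumberOfLuns 1      # create one malloc ramdisk bdev (named Malloc0)
  LunSizeInMB 128     # size of each ramdisk in MiB
~~~
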
### Mappings Between Block Controllers and Storage Backends

The vhost target exposes block devices to the virtual machines.
The device in the vhost controller is associated with an SPDK block device, and the
configuration file defines those associations. The block device to Dev mapping
is specified in the configuration file as:
@@ -67,11 +60,11 @@ is specified in the configuration file as:
Dev BackendX    # "BackendX" is block device name from previous
                # sections in config file
#Cpumask 0x1    # Optional parameter defining which core controller uses
~~~

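Put together, a complete vhost-blk controller section might look like the following (a hypothetical sketch; the `VhostBlk0` section name, the `Name` value, and the `Malloc0` backend are illustrative assumptions):

~~~
[VhostBlk0]
  Name vhost.0        # UNIX domain socket name for this controller (assumed)
  Dev Malloc0         # SPDK block device exposed to the guest (assumed)
  #Cpumask 0x1        # optional core restriction
~~~
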
### Mappings Between SCSI Controllers and Storage Backends

The vhost target exposes SCSI controllers to the virtual machines.
Each device in the vhost controller is associated with an SPDK block device, and the
configuration file defines those associations. The block device to Dev mappings
are specified in the configuration file as:
@@ -85,10 +78,10 @@ are specified in the configuration file as:
...
Dev n BackendN
#Cpumask 0x1    # Optional parameter defining which core controller uses
~~~

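A complete vhost-scsi controller section might therefore look like this (a hypothetical sketch; the section name, socket name, and backend names are illustrative assumptions following the `Dev n BackendN` pattern above):

~~~
[VhostScsi0]
  Name vhost.1        # UNIX domain socket name for this controller (assumed)
  Dev 0 Malloc0       # SCSI device 0 backed by the Malloc0 bdev (assumed)
  Dev 1 Nvme0n1       # SCSI device 1 backed by an NVMe bdev (assumed)
  #Cpumask 0x1        # optional core restriction
~~~
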
### Vhost Sockets

Userspace vhost uses UNIX domain sockets for communication between QEMU
and the vhost target. Each vhost controller is associated with a UNIX domain
socket file whose filename is equal to the Name argument in the configuration file.
@@ -96,41 +89,43 @@
Sockets are created in the current working directory when starting the SPDK vhost
target.

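For example, with `Name vhost.0` in the configuration file, a target started from inside the `spdk` directory creates `./vhost.0`, which is the `./spdk/vhost.0` path used in the QEMU example below (a sketch; the controller name is an assumption):

~~~
cd spdk
app/vhost/vhost -c /path/to/vhost.conf
ls vhost.0    # socket file named after the controller's Name argument
~~~
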
### Core Affinity Configuration

The vhost target can be restricted to run on certain cores by specifying a `ReactorMask`.
By default, the vhost target runs on core 0. For NUMA systems, it is essential
to run vhost with cores on each socket to achieve optimal performance.

Each controller may be assigned a set of cores using the optional
`Cpumask` parameter in the configuration file. For NUMA systems, the `Cpumask` should
specify cores on the same CPU socket as its associated VM. The `vhost` application will
pick one core from `ReactorMask` masked by `Cpumask`. `Cpumask` must be a subset of
`ReactorMask`.

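For illustration, a possible split on a four-core system (a sketch; the `[Global]` placement of `ReactorMask` and all mask values are assumptions):

~~~
[Global]
  ReactorMask 0xF     # the vhost target may run on cores 0-3 (assumed placement)

[VhostScsi0]
  Name vhost.0
  Dev 0 Malloc0
  Cpumask 0xC         # this controller picks one core from 2-3, a subset of 0xF
~~~
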
## QEMU

Userspace vhost-scsi adds the following command line option for QEMU:
~~~
-device vhost-user-scsi-pci,id=scsi0,chardev=char0
~~~

Userspace vhost-blk adds the following command line option for QEMU:
~~~
-device vhost-user-blk-pci,logical_block_size=4096,size=512M,chardev=char0
~~~

In order to start QEMU with vhost, you need to specify the following options:

- Socket, which QEMU will use for vhost communication with SPDK:
~~~
-chardev socket,id=char0,path=/path/to/vhost/socket
~~~

- Hugepages to share memory between the VM and the vhost target:
~~~
-object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
~~~

# Running Vhost Target

To get started, the following example is usually sufficient:
~~~
app/vhost/vhost -c /path/to/vhost.conf
~~~
@@ -143,40 +138,43 @@ app/vhost/vhost -h

## Example

Assume that qemu and spdk are located in the `qemu` and `spdk` directories, respectively.
~~~
./qemu/build/x86_64-softmmu/qemu-system-x86_64 \
  -m 1024 \
  -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem \
  -drive file=$PROJECTS/os.qcow2,if=none,id=disk \
  -device ide-hd,drive=disk,bootindex=0 \
  -chardev socket,id=char0,path=./spdk/vhost.0 \
  -device vhost-user-scsi-pci,id=scsi0,chardev=char0 \
  -chardev socket,id=char1,path=./spdk/vhost.1 \
  -device vhost-user-blk-pci,logical_block_size=4096,size=512M,chardev=char1 \
  --enable-kvm
~~~

# Experimental Features {#vhost_experimental}

## Multi-Queue Block Layer (blk-mq)

It is possible to use the Linux kernel block multi-queue feature with vhost.
To enable it on Linux, you must modify kernel options inside the
virtual machine.

The instructions below are for Ubuntu (a verification sketch follows the list):
1. `vi /etc/default/grub`
2. Make sure mq is enabled:
   `GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=1"`
3. `sudo update-grub`
4. Reboot the virtual machine.

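After the reboot, the setting can be verified from inside the guest (a sketch; `Y` indicates blk-mq is enabled for the SCSI layer):

~~~
cat /sys/module/scsi_mod/parameters/use_blk_mq
~~~
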
To achieve better performance, make sure to increase the number of cores
assigned to the VM.

# Known bugs and limitations {#vhost_bugs}

## Hot plug is not supported

Hot plug is not supported in vhost yet; the event queue path does not yet
handle that case.