doc: Fix Markdown MD032 linter warnings

"MD032 Lists should be surrounded by blank lines"
Fix these markdown linter warnings by inserting blank lines around
lists or aligning continuation text with list items using spaces.

Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Change-Id: I09e1f021b8e95e0c6c58c393d7ecc11ce61c3132
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/434
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Maciej Wawryk <maciejx.wawryk@intel.com>
Authored by Karol Latecki on 2020-02-04 16:41:17 +01:00, committed by Tomasz Zawadzki
parent acac4b3813, commit 3d8a0b19b0
23 changed files with 175 additions and 123 deletions


@@ -227,11 +227,13 @@ Added `spdk_bdev_get_write_unit_size()` function for retrieving required number
of logical blocks for write operation.
New zone-related fields were added to the result of the `get_bdevs` RPC call:
- `zoned`: indicates whether the device is zoned or a regular
block device
- `zone_size`: number of blocks in a single zone
- `max_open_zones`: maximum number of open zones
- `optimal_open_zones`: optimal number of open zones
The `zoned` field is a boolean and is always present, while the rest are only available for zoned
bdevs.
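
For instance, a hypothetical RPC query (the bdev name and all values below are made up for illustration) might report these fields:

~~~{.sh}
# Query bdev properties over RPC; "zoned0" is a hypothetical bdev name.
./scripts/rpc.py get_bdevs -b zoned0
# Illustrative, abridged output for a zoned bdev:
# [{ "name": "zoned0", "zoned": true, "zone_size": 4096,
#    "max_open_zones": 8, "optimal_open_zones": 8 }]
~~~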
@@ -949,6 +951,7 @@ parameter. The function will now update that parameter with the largest possible
for which the memory is contiguous in the physical memory address space.
The following functions were removed:
- spdk_pci_nvme_device_attach()
- spdk_pci_nvme_enumerate()
- spdk_pci_ioat_device_attach()
@@ -958,6 +961,7 @@ The following functions were removed:
They were replaced with generic spdk_pci_device_attach() and spdk_pci_enumerate() which
require a new spdk_pci_driver object to be provided. It can be one of the following:
- spdk_pci_nvme_get_driver()
- spdk_pci_ioat_get_driver()
- spdk_pci_virtio_get_driver()
@@ -1138,6 +1142,7 @@ Dropped support for DPDK 16.07 and earlier, which SPDK won't even compile with r
### RPC
The following RPC commands deprecated in the previous release are now removed:
- construct_virtio_user_scsi_bdev
- construct_virtio_pci_scsi_bdev
- construct_virtio_user_blk_bdev
@@ -1326,6 +1331,7 @@ respectively.
### Virtio
The following RPC commands have been deprecated:
- construct_virtio_user_scsi_bdev
- construct_virtio_pci_scsi_bdev
- construct_virtio_user_blk_bdev
@@ -1346,6 +1352,7 @@ spdk_file_get_id() returning unique ID for the file was added.
Added jsonrpc-client C library intended for issuing RPC commands from applications.
Added API enabling iteration over a JSON object:
- spdk_json_find()
- spdk_json_find_string()
- spdk_json_find_array()
@@ -1785,6 +1792,7 @@ write commands.
New API functions that accept I/O parameters in units of blocks instead of bytes
have been added:
- spdk_bdev_read_blocks(), spdk_bdev_readv_blocks()
- spdk_bdev_write_blocks(), spdk_bdev_writev_blocks()
- spdk_bdev_write_zeroes_blocks()
@@ -1965,6 +1973,7 @@ current set of functions.
Support for SPDK performance analysis has been added to Intel® VTune™ Amplifier 2018.
This analysis provides:
- I/O performance monitoring (calculating standard I/O metrics like IOPS, throughput, etc.)
- Tuning insights on the interplay of I/O and compute devices by estimating how many cores
would be reasonable to provide for SPDK to keep up with a current storage workload.
@@ -2115,6 +2124,7 @@ NVMe devices over a network using the iSCSI protocol. The application is located
in app/iscsi_tgt and a documented configuration file can be found at etc/spdk/spdk.conf.in.
This release also significantly improves the existing NVMe over Fabrics target.
- The configuration file format was changed, which will require updates to
any existing nvmf.conf files (see `etc/spdk/nvmf.conf.in`):
- `SubsystemGroup` was renamed to `Subsystem`.
@@ -2135,6 +2145,7 @@ This release also significantly improves the existing NVMe over Fabrics target.
This release also adds one new feature and provides some better examples and tools
for the NVMe driver.
- The Weighted Round Robin arbitration method is now supported. This allows
the user to specify different priorities on a per-I/O-queue basis. To
enable WRR, set the `arb_mechanism` field during `spdk_nvme_probe()`.
@@ -2215,6 +2226,7 @@ This release adds a user-space driver with support for the Intel I/O Acceleratio
This is the initial open source release of the Storage Performance Development Kit (SPDK).
Features:
- NVMe user-space driver
- NVMe example programs
- `examples/nvme/perf` tests performance (IOPS) using the NVMe user-space driver


@@ -10,6 +10,7 @@ interrupts, which avoids kernel context switches and eliminates interrupt
handling overhead.
The development kit currently includes:
* [NVMe driver](http://www.spdk.io/doc/nvme.html)
* [I/OAT (DMA engine) driver](http://www.spdk.io/doc/ioat.html)
* [NVMe over Fabrics target](http://www.spdk.io/doc/nvmf.html)
@@ -172,6 +173,7 @@ of the SPDK static ones.
In order to start a SPDK app linked with SPDK shared libraries, make sure
to do the following steps:
- run ldconfig specifying the directory containing SPDK shared libraries
- provide proper `LD_LIBRARY_PATH`
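
A minimal sketch of those two steps (the install path and application name below are assumptions):

~~~{.sh}
# Make the dynamic linker aware of the SPDK shared libraries,
# assuming they were installed under /usr/local/lib.
sudo ldconfig /usr/local/lib
# Then start the app with LD_LIBRARY_PATH pointing at that directory;
# the app path is a placeholder.
LD_LIBRARY_PATH=/usr/local/lib ./app/spdk_tgt/spdk_tgt
~~~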


@@ -210,6 +210,7 @@ requires finer granularity it will have to accommodate that itself.
As mentioned earlier, Blobstore can share a single thread with an application or the application
can define any number of threads, within resource constraints, that makes sense. The basic considerations that must be
followed are:
* Metadata operations (API with MD in the name) should be isolated from each other as there is no internal locking on the
memory structures affected by these APIs.
* Metadata operations should be isolated from conflicting IO operations (an example of a conflicting IO would be one that is
@@ -326,6 +327,7 @@ to the unallocated cluster - new extent is chosen. This information is stored in
There are two on-disk extent representations, depending on the `use_extent_table` option (default: true) used
when creating a blob.
* **use_extent_table=true**: EXTENT_PAGE descriptor is not part of the linked list of pages. It contains extents
that are not run-length encoded. Each extent page is referenced by an EXTENT_TABLE descriptor, which is serialized
as part of the linked list of pages. The extent table run-length encodes all unallocated extent pages.
@@ -393,5 +395,6 @@ example,
~~~
And for the most part the following conventions are followed throughout:
* functions beginning with an underscore are called internally only
* functions or variables with the letters `cpl` are related to set or callback completions


@@ -46,6 +46,7 @@ from the oldest band to the youngest.
The address map and valid map are, along with several other things (e.g. UUID of the device it's
part of, number of surfaced LBAs, band's sequence number, etc.), parts of the band's metadata. The
metadata is split in two parts:
* the head part, containing information already known when opening the band (device's UUID, band's
sequence number, etc.), located at the beginning blocks of the band,
* the tail part, containing the address map and the valid map, located at the end of the band.
@@ -146,6 +147,7 @@ bdev or OCSSD `nvme` bdev.
Similar to other bdevs, the FTL bdevs can be created either based on JSON config files or via RPC.
Both interfaces require the same arguments which are described by the `--help` option of the
`bdev_ftl_create` RPC call, which are:
- bdev's name
- base bdev's name (base bdev must implement bdev_zone API)
- UUID of the FTL device (if the FTL is to be restored from the SSD)
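
A hypothetical invocation could look like the following; the flag letters and every value are assumptions, so consult `bdev_ftl_create --help` for the authoritative argument list:

~~~{.sh}
# Create an FTL bdev named "ftl0" on top of a zoned base bdev;
# the names and the UUID are placeholders.
./scripts/rpc.py bdev_ftl_create -b ftl0 -d zoned_bdev0 -u <uuid>
~~~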
@@ -161,6 +163,7 @@ on [spdk-3.0.0](https://github.com/spdk/qemu/tree/spdk-3.0.0) branch.
To emulate an Open Channel device, QEMU expects parameters describing the characteristics and
geometry of the SSD:
- `serial` - serial number,
- `lver` - version of the OCSSD standard (0 - disabled, 1 - "1.2", 2 - "2.0"), libftl only supports
2.0,
@@ -240,6 +243,7 @@ Logical blks per chunk: 24576
```
In order to create FTL on top of an Open Channel SSD, the following steps are required:
1) Attach OCSSD NVMe controller
2) Create an OCSSD bdev on the controller attached in step 1 (the user can specify a parallel unit range
and create multiple OCSSD bdevs on a single OCSSD NVMe controller)
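
A sketch of these two steps over RPC (the controller name, bdev names, flag letters, and PCI address are all assumptions):

~~~{.sh}
# Step 1: attach the OCSSD NVMe controller.
./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:09:00.0
# Step 2: create an OCSSD bdev on the controller attached in step 1.
./scripts/rpc.py bdev_ocssd_create -c nvme0 -b nvme0n1
~~~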


@@ -4950,6 +4950,7 @@ Either UUID or name is used to access logical volume store in RPCs.
A logical volume has a UUID and a name for its identification.
The UUID of the logical volume is generated on creation and can be used as a unique identifier.
The alias of the logical volume takes the format _lvs_name/lvol_name_ where:
* _lvs_name_ is the name of the logical volume store.
* _lvol_name_ is specified on creation and can be renamed.
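For example, a logical volume named _lvol0_ created in a logical volume store named _lvs0_ is addressed by the alias _lvs0/lvol0_.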


@@ -6,6 +6,7 @@ Now nvme-cli can support both kernel driver and SPDK user mode driver for most o
Intel specific commands.
1. Clone the nvme-cli repository from the SPDK GitHub fork. Make sure you check out the spdk-1.6 branch.
~~~{.sh}
git clone -b spdk-1.6 https://github.com/spdk/nvme-cli.git
~~~
@@ -19,12 +20,12 @@ git clone -b spdk-1.6 https://github.com/spdk/nvme-cli.git
5. Execute "<spdk_folder>/scripts/setup.sh" with the "root" account.
6. Update the "spdk.conf" file under the nvme-cli folder to properly configure SPDK. Notes as follows:
~~~{.sh}
spdk=1
Indicates whether or not to use spdk. Can be 0 (off) or 1 (on).
Defaults to 1 which assumes that you have run "<spdk_folder>/scripts/setup.sh", unbinding your drives from the kernel.
core_mask=0x1
A bitmask representing which core(s) to use for nvme-cli operations.
Defaults to core 0.
@@ -42,11 +43,13 @@ Defaults to 0.
7. Run the "./nvme list" command to get the domain:bus:device.function for each found NVMe SSD.
8. Run the other nvme commands with domain:bus:device.function instead of "/dev/nvmeX" for the specified device.
~~~{.sh}
Example: ./nvme smart-log 0000:01:00.0
~~~
9. Run the "./nvme intel" commands for Intel specific commands against Intel NVMe SSD.
~~~{.sh}
Example: ./nvme intel internal-log 0000:08:00.0
~~~
@@ -57,9 +60,11 @@ use the kernel driver if wanted.
## Use scenarios
### Run as the only SPDK application on the system
1. Set spdk to 1 in spdk.conf. If the system has fewer cores or less memory, update spdk.conf accordingly.
### Run together with other running SPDK applications on shared NVMe SSDs
1. For the other running SPDK application, start it with a parameter like "-i 1" to use the same "shm_id".
2. Use the default spdk.conf setting where "shm_id=1" to start the nvme-cli.
@@ -67,19 +72,23 @@ use the kernel driver if wanted.
3. If other SPDK applications run with different shm_id parameter, update the "spdk.conf" accordingly.
### Run with other running SPDK applications on non-shared NVMe SSDs
1. Properly configure the other running SPDK applications.
~~~{.sh}
a. Only access the NVMe SSDs it wants.
b. Allocate a fixed amount of memory instead of all available memory.
~~~
2. Properly configure the spdk.conf setting for nvme-cli.
~~~{.sh}
a. Do not access the NVMe SSDs used by other SPDK applications.
b. Change the mem_size to a proper size.
~~~
## Note
1. To run the newly built nvme-cli, either run it explicitly as "./nvme" or add it to the $PATH to avoid
invoking another already installed version.


@@ -249,6 +249,7 @@ DPDK EAL allows different types of processes to be spawned, each with different
on the hugepage memory used by the applications.
There are two types of processes:
1. a primary process which initializes the shared memory and has full privileges and
2. a secondary process which can attach to the primary process by mapping its shared memory
regions and perform NVMe operations including creating queue pairs.
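
As a sketch, the two process types are tied together by a shared memory id; the binaries and flags below are assumptions (the `-i` shm_id parameter is described in the nvme-cli notes elsewhere in these docs):

~~~{.sh}
# Primary process: initializes the hugepage shared memory (shm_id 1).
./app/spdk_tgt/spdk_tgt -i 1
# Secondary process: maps the primary's shared memory via the same shm_id
# and can then perform NVMe operations, including creating queue pairs.
./examples/nvme/perf/perf -i 1 -q 32 -o 4096 -w randread -t 10
~~~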


@@ -201,6 +201,7 @@ NVMe Domain NQN = "nqn.", year, '-', month, '.', reverse domain, ':', utf-8 stri
~~~
Please note that the following types from the definition above are defined elsewhere:
1. utf-8 string: Defined in [rfc 3629](https://tools.ietf.org/html/rfc3629).
2. reverse domain: Equivalent to domain name as defined in [rfc 1034](https://tools.ietf.org/html/rfc1034).
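
For example, `nqn.2016-06.io.spdk:cnode1` follows this grammar: year `2016`, month `06`, reverse domain `io.spdk`, and the utf-8 string `cnode1`.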


@@ -11,6 +11,7 @@ for the next SPDK release.
All dependencies should be handled by the scripts/pkgdep.sh script.
Package dependencies at the moment include:
- configshell
### Run SPDK application instance


@@ -31,6 +31,7 @@ copy the vagrant configuration file (a.k.a. `Vagrantfile`) to it,
and run `vagrant up` with some settings defined by the script arguments.
By default, the VM created is configured with:
- 2 vCPUs
- 4G of RAM
- 2 NICs (1 x NAT - host access, 1 x private network)


@@ -347,9 +347,9 @@ To enable it on Linux, it is required to modify kernel options inside the
virtual machine.
Instructions below for Ubuntu OS:
1. `vi /etc/default/grub`
2. Make sure mq is enabled: `GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=1"`
3. `sudo update-grub`
4. Reboot virtual machine


@@ -89,6 +89,7 @@ device (SPDK) can access it directly. The memory can be fragmented into multiple
physically-discontiguous regions and Vhost-user specification puts a limit on
their number - currently 8. The driver sends a single message for each region with
the following data:
* file descriptor - for mmap
* user address - for memory translations in Vhost-user messages (e.g.
translating vring addresses)
@@ -106,6 +107,7 @@ as they use common SCSI I/O to inquiry the underlying disk(s).
Afterwards, the driver requests the number of maximum supported queues and
starts sending virtqueue data, which consists of:
* unique virtqueue id
* index of the last processed vring descriptor
* vring addresses (from user address space)


@@ -6,6 +6,7 @@ SPDK Virtio driver is a C library that allows communicating with Virtio devices.
It allows any SPDK application to become an initiator for (SPDK) vhost targets.
The driver supports two different usage models:
* PCI - This is the standard mode of operation when used in a guest virtual
machine, where QEMU has presented the virtio controller as a virtual PCI device.
* vhost-user - Can be used to connect to a vhost socket directly on the same host.


@@ -72,21 +72,26 @@ VPP can be configured using a VPP startup file and the `vppctl` command; By defa
Some key values from the iSCSI point of view include:
CPU section (`cpu`):
- `main-core <lcore>` -- logical CPU core used for main thread.
- `corelist-workers <lcore list>` -- logical CPU cores where worker threads are running.
DPDK section (`dpdk`):
- `num-rx-queues <num>` -- number of receive queues.
- `num-tx-queues <num>` -- number of transmit queues.
- `dev <PCI address>` -- whitelisted device.
Session section (`session`):
- `evt_qs_memfd_seg` -- uses a memfd segment for event queues. This is required for SPDK.
Socket server session (`socksvr`):
- `socket-name <path>` -- configure API socket filename (currently SPDK uses the default path `/run/vpp-api.sock`).
Plugins section (`plugins`):
- `plugin <plugin name> { [enable|disable] }` -- enable or disable VPP plugin.
### Example:


@@ -60,6 +60,7 @@ other than -t, -s, -n and -a.
## fio
Fio job parameters.
- bs: block size
- qd: io depth
- rw: workload mode
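
For reference, a standalone fio run with equivalent options might look like this (the test scripts set these through their own bs/qd/rw parameters; the target filename and runtime below are placeholders):

~~~{.sh}
# Illustrative fio invocation mapping bs/qd/rw to fio's native options.
fio --name=job1 --filename=/dev/nvme0n1 --time_based --runtime=10 \
    --bs=4k --iodepth=32 --rw=randread
~~~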


@@ -21,6 +21,7 @@ Quick start instructions for OSX:
* Note: The extension pack has different licensing than main VirtualBox, please
review them carefully as the evaluation license is for personal use only.
```
/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
brew doctor


@@ -7,6 +7,7 @@ Multiple controllers and namespaces can be exposed to the fuzzer at a time. In o
handle multiple namespaces, the fuzzer will round-robin assign a thread to each namespace and
submit commands to that thread at a set queue depth (currently 128 for I/O, 16 for Admin). The
application will terminate under three conditions:
1. The user specified run time expires (see the -t flag).
2. One of the target controllers stops completing I/O operations back to the fuzzer, i.e. a controller timeout.
3. The user specified a json file containing operations to run and the fuzzer has received valid completions for all of them.
@@ -14,8 +15,10 @@ application will terminate under three conditions:
# Output
By default, the fuzzer will print commands that:
1. Complete successfully back from the target, or
2. Are outstanding at the time of a controller timeout.
Commands are dumped as named objects in json format which can then be supplied back to the
script for targeted debugging on a subsequent run. See `Debugging` below.
By default, no output is generated when a command completes with a failed status.


@@ -14,6 +14,7 @@ Like the NVMe fuzzer, there is an example json file showing the types of request
that the application accepts. Since the vhost application accepts both vhost block
and vhost scsi commands, there are three distinct object types that can be passed in
to the application.
1. vhost_blk_cmd
2. vhost_scsi_cmd
3. vhost_scsi_mgmt_cmd


@@ -10,6 +10,7 @@ to emulate an RDMA enabled NIC. NVMe controllers can also be virtualized in emul
## VM Environment Requirements (Host):
- 8 GiB of RAM (for DPDK)
- Enable intel_kvm on the host machine from the BIOS.
- Enable nesting for VMs in kernel command line (for vhost tests).
@@ -28,6 +29,7 @@ configuration file. For a full list of the variable declarations available for a
`test/common/autotest_common.sh` starting at line 13.
## Steps for Configuring the VM
1. Download a fresh Fedora 26 image.
2. Perform the installation of Fedora 26 server.
3. Create an admin user sys_sgsw (enabling passwordless sudo for this account will make life easier during the tests).
@@ -60,6 +62,7 @@ created above and guest or VM refer to the Ubuntu VM created in this section.
- move .qcow2 file and ssh keys to default locations used by vhost test scripts
Alternatively it is possible to create the VM image manually using following steps:
1. Create an image file for the VM. It does not have to be large, about 3.5G should suffice.
2. Create an ssh keypair for host-guest communications (performed on the host):
- Generate an ssh keypair with the name spdk_vhost_id_rsa and save it in `/root/.ssh`.
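
One way to generate such a keypair (the exact options are an assumption; any standard RSA keypair should work):

~~~{.sh}
# Create an RSA keypair named spdk_vhost_id_rsa under /root/.ssh
# with an empty passphrase for unattended test runs.
ssh-keygen -t rsa -f /root/.ssh/spdk_vhost_id_rsa -N ""
~~~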