Fix Markdown MD022 linter warnings - headers blank lines

MD022 Headers should be surrounded by blank lines

Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Change-Id: I768324b00fc684c254aff6a85b93d9aed7a0cee5
Signed-off-by: Karol Latecki <karol.latecki@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/656
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Karol Latecki 2020-02-07 12:48:26 +01:00 committed by Tomasz Zawadzki
parent 8ba413a7a4
commit 93be26a51d
10 changed files with 80 additions and 0 deletions

View File

@ -3,6 +3,7 @@
## v20.04: (Upcoming Release)
### vmd
A new function, `spdk_vmd_fini`, has been added. It releases all resources acquired by the VMD
library through the `spdk_vmd_init` call.
@ -1404,6 +1405,7 @@ spdk_bdev_get_qd(), spdk_bdev_get_qd_sampling_period(), and
spdk_bdev_set_qd_sampling_period().
### RAID module
A new bdev module called "raid" has been added as an experimental module which
aggregates underlying NVMe bdevs and exposes a single raid bdev. Please note
that vhost will not work with this module because it does not yet have support

View File

@ -5,15 +5,19 @@ See [The SPDK Community Page](http://www.spdk.io/community/) for other SPDK comm
<!--- Provide a general summary of the issue in the Title above -->
## Expected Behavior
<!--- Tell us what should happen -->
## Current Behavior
<!--- Tell us what happens instead of the expected behavior -->
## Possible Solution
<!--- Not obligatory, but suggest a fix/reason for the bug, -->
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug. Include code to reproduce, if relevant -->
1.
@ -22,4 +26,5 @@ See [The SPDK Community Page](http://www.spdk.io/community/) for other SPDK comm
4.
## Context (Environment including OS version, SPDK version, etc.)
<!--- Providing context helps us come up with a solution that is most useful in the real world -->

View File

@ -1,28 +1,37 @@
# Storage Performance Development Kit {#index}
# Introduction
@copydoc intro
# Concepts
@copydoc concepts
# User Guides
@copydoc user_guides
# Programmer Guides
@copydoc prog_guides
# General Information
@copydoc general
# Miscellaneous
@copydoc misc
# Driver Modules
@copydoc driver_modules
# Tools
@copydoc tools
# Performance Reports
@copydoc performance_reports

View File

@ -1,4 +1,5 @@
# Notify library {#notify}
The notify library implements an event bus, allowing users to register, generate,
and listen for events. For example, the bdev library may register a new event type
for bdev creation. Any time a bdev is created, it "sends" the event. Consumers of

View File

@ -131,10 +131,12 @@ To build nvmf_tgt with the FC transport, there is an additional FC LLD (Low Leve
Please contact your FC vendor for instructions to obtain FC driver module.
### Broadcom FC LLD code
The FC LLD driver for Broadcom FC NVMe capable adapters can be obtained from
https://github.com/ecdufcdrvr/bcmufctdrvr.
### Fetch FC LLD module and then build SPDK with FC enabled:
After cloning the SPDK repo and initializing its submodules, build the FC LLD library, which can then be linked with
the FC transport.
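A rough sketch of those steps is shown below; the library build output path and the `--with-fc` configure option are assumptions here, so check the LLD's README and `./configure --help` for the exact names:

~~~{.sh}
# Clone and build the Broadcom FC LLD from the repository named above.
git clone https://github.com/ecdufcdrvr/bcmufctdrvr fc
cd fc && make && cd ..

# Point SPDK's configure at the built library, then build SPDK with FC enabled.
# The --with-fc option and the library path are assumptions; verify locally.
cd spdk
./configure --with-fc=../fc/build
make
~~~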

View File

@ -411,5 +411,6 @@ See the [bug report](https://bugzilla.redhat.com/show_bug.cgi?id=1411092) for
more information.
## QEMU vhost-user-blk
QEMU [vhost-user-blk](https://git.qemu.org/?p=qemu.git;a=commit;h=00343e4b54ba) is
supported from version 2.12.

View File

@ -3,12 +3,14 @@ however the three operating modes are covered in more detail here:
Command Mode
------------
This is the default and will just execute one command at a time. It's simple
but the downside is that if you are going to interact quite a bit with the
blobstore, the startup time for the application can be cumbersome.
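For instance, a single command-mode invocation (using the -h command mentioned later in this document) looks like this; each run pays the full application startup cost:

~~~{.sh}
# Start the app, execute one command, and exit.
./blobcli -h
~~~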
Shell Mode
----------
You start shell mode by using the -S option. At that point you will get
a "blob>" prompt where you can enter any of the commands, including -h,
to execute them. You can still enter just one at a time but the initial
@ -17,6 +19,7 @@ anymore so it is much more usable.
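A shell-mode session, sketched from the description above, would look something like this:

~~~{.sh}
# Start shell mode once; the startup cost is paid a single time.
./blobcli -S
# Commands (e.g. -h) are then entered at the "blob>" prompt.
~~~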
Script (aka test) Mode
----------------------
In script mode you just supply one command with a filename when you start
the cli, for example `blobcli -T test.bs` will feed the tool the file
called test.bs which contains a series of commands that will all run
@ -37,6 +40,7 @@ script lines will simply be skipped, otherwise the tool will exit if
it runs into an invalid line (i.e. `./blobcli -T test.bs ignore`).
Sample test/bs file:
~~~{.sh}
# this is a comment
-i

View File

@ -5,60 +5,93 @@ In order to reproduce test cases described in [SPDK NVMe-OF Performance Test Cas
Currently RDMA NIC IP address assignment must be done manually before running the tests.
# Prepare the configuration file
Configure the target, initiators, and FIO workload in the json configuration file.
## General
Options which apply to both the target and all initiator servers, such as the "password" and "username" fields.
All servers are required to have the same user credentials for running the test.
Test results can be found in the /tmp/results directory.
### transport
Transport layer to use between Target and Initiator servers - rdma or tcp.
## Target
Configure the target server information.
### nic_ips
List of IP addresses that will be used in this test.
NVMe namespaces will be split between the provided IP addresses.
So, for example, providing 2 IPs with 16 NVMe drives present will result in each IP managing
8 NVMe subsystems.
### mode
"spdk" or "kernel" values allowed.
### use_null_block
Use a null block device instead of the present NVMe drives. Used for latency measurements as described
in Test Case 3 of the performance report.
### num_cores
List of CPU cores to assign for running the SPDK NVMe-oF Target process. Exact core numbers or ranges can be specified, e.g.:
[0, 1, 10-15].
### nvmet_bin
Path to nvmetcli application executable. If not provided then system-wide package will be used
by default. Not used if "mode" is set to "spdk".
### num_shared_buffers
Number of shared buffers to use when creating transport layer.
## Initiator
Describes initiator arguments. There can be more than one initiator section in the configuration file.
To make parsing results from multiple initiators easier, please use only digits and letters
in the initiator section name.
### ip
Management IP address used for SSH communication with initiator server.
### nic_ips
List of target IP addresses to which the initiator should try to connect.
### mode
"spdk" or "kernel" values allowed.
### num_cores
Applies only to the SPDK initiator. Number of CPU cores to use for running the FIO job.
If not specified, then by default each connected subsystem gets its own CPU core.
### nvmecli_dir
Path to directory with nvme-cli application. If not provided then system-wide package will be used
by default. Not used if "mode" is set to "spdk".
### fio_bin
Path to the fio binary that will be used to compile SPDK and run the test.
If not specified, then the script will use /usr/src/fio/fio as the default.
### extra_params
Space separated string with additional settings for "nvme connect" command
other than -t, -s, -n and -a.
## fio
Fio job parameters.
- bs: block size
@ -70,6 +103,7 @@ Fio job parameters.
- run_num: how many times to run given workload in loop
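Putting the sections above together, a stripped-down configuration file might look roughly like the sketch below. The key names and nesting follow the section and field descriptions in this document but are assumptions that have not been verified against the test script, so treat this as a starting point rather than a reference:

~~~{.sh}
# Hypothetical config.json layout, written here via a shell heredoc.
cat > config.json << 'EOF'
{
  "general": {
    "username": "user",
    "password": "password",
    "transport": "rdma"
  },
  "target": {
    "nic_ips": ["192.0.2.1", "192.0.2.2"],
    "mode": "spdk",
    "num_cores": [0, 1, 2, 3],
    "use_null_block": false,
    "num_shared_buffers": 4096
  },
  "initiator1": {
    "ip": "198.51.100.1",
    "nic_ips": ["192.0.2.1"],
    "mode": "spdk",
    "fio_bin": "/usr/src/fio/fio"
  },
  "fio": {
    "bs": "4k",
    "run_num": 3
  }
}
EOF
~~~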
# Running Test
Before running the test script use the setup.sh script to bind the devices you want to
use in the test to the VFIO/UIO driver.
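For example, binding the devices with the setup script typically looks like the following (the scripts/ path assumes you are in the SPDK repository root):

~~~{.sh}
# Bind NVMe devices to vfio-pci (or uio_pci_generic) so SPDK can claim them.
sudo scripts/setup.sh
# Optionally verify which driver each device is bound to.
sudo scripts/setup.sh status
~~~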
Run the script on the NVMe-oF target system:
@ -84,6 +118,7 @@ The script uses another spdk script (scripts/rpc.py) so we pass the path to rpc.
as a runtime environment parameter.
# Test Results
When the test completes, you will find a csv file (nvmf_results.csv) containing the results in the target node
directory /tmp/results.

View File

@ -16,11 +16,13 @@ to emulate an RDMA enabled NIC. NVMe controllers can also be virtualized in emul
- In `/etc/default/grub` append the following to the GRUB_CMDLINE_LINUX line: intel_iommu=on kvm-intel.nested=1.
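As a hedged illustration, the resulting entry in `/etc/default/grub` and a typical way to regenerate the GRUB configuration on Fedora are shown below; the grub.cfg path varies between BIOS and EFI installs:

~~~{.sh}
# In /etc/default/grub, keep the existing parameters and append the two new ones:
#   GRUB_CMDLINE_LINUX="... intel_iommu=on kvm-intel.nested=1"
# Then regenerate the GRUB configuration and reboot.
sudo grub2-mkconfig -o /boot/grub2/grub.cfg
~~~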
## VM Specs
When creating the user during the fedora installation, it is best to use the name sys_sgsw. Efforts are being made
to remove all references to this user, or files specific to this user from the codebase, but there are still some
trailing references to it.
## Autorun-spdk.conf
Every machine that runs the autotest scripts should include a file titled autorun-spdk.conf in the home directory
of the user that will run them. This file consists of several lines of the form 'variable_name=0/1'. autorun.sh sources
this file each time it is run, and determines which tests to attempt based on which variables are defined in the
@ -38,6 +40,7 @@ configuration file. For a full list of the variable declarations available for a
7. Run autorun.sh for SPDK. Any output files will be placed in `~/spdk_repo/output/`.
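To illustrate the 'variable_name=0/1' format described above, an autorun-spdk.conf could contain entries like the following. The variable names shown are assumptions based on common SPDK test flags, so consult the full list of variable declarations referenced above for the authoritative names:

~~~{.sh}
# Example ~/autorun-spdk.conf entries (illustrative): 1 enables a test group, 0 disables it.
SPDK_RUN_FUNCTIONAL_TEST=1
SPDK_TEST_UNITTEST=1
SPDK_TEST_NVME=1
SPDK_TEST_VHOST=0
~~~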
## Additional Steps for Preparing the Vhost Tests
The Vhost tests also require the creation of a second virtual machine nested inside of the test VM.
Please follow the directions below to complete that installation. Note that host refers to the Fedora VM
created above and guest or VM refer to the Ubuntu VM created in this section.

View File

@ -10,39 +10,48 @@ Link time optimization can be enabled in SPDK by doing the following:
~
## Configuration
The test is configured using command-line options.
### Available options
#### -h, --help
Prints available commands and help.
#### --run-time
Tell fio to terminate processing after the specified period of time. Value in seconds.
#### --ramp-time
Fio will run the specified workload for this amount of time before logging any performance numbers.
Value in seconds.
#### --fio-bin
Path to fio binary.
#### --driver
Select between the SPDK driver and the kernel driver. The Linux Kernel driver has three configurations:
Default mode, Hybrid Polling and Classic Polling. The SPDK driver supports 2 fio_plugin modes: bdev and NVMe PMD. Before running tests with SPDK, you will need to bind NVMe devices to the Linux uio_pci_generic or vfio-pci driver. When running tests with the Kernel driver, NVMe devices use the Kernel driver. The 5 valid values for this option are:
'bdev', 'nvme', 'kernel-libaio', 'kernel-classic-polling' and 'kernel-hybrid-polling'.
#### --max-disk
This option will run multiple fio jobs with a varying number of NVMe devices. First it will start with
the max-disk number of devices, then decrease the number of disks by two until there are no more devices.
If set to 'all' then the max-disk number will be set to all available devices.
Only one of the max-disk or disk-no options can be used.
#### --disk-no
This option will run the fio job on the specified number of NVMe devices. If set to 'all' then the max-disk number
will be set to all available devices. Only one of the max-disk or disk-no options can be used.
#### --cpu-allowed
Specifies the CPU cores that will be used by fio to execute the performance test cases. When the spdk driver is chosen, the script attempts to assign NVMe devices to CPU cores on the same NUMA node. The script will try to align each core with devices matching the
core's NUMA node first, but if there are no devices left within the CPU core's NUMA node then it will use devices from the other
NUMA node. It is important to choose cores that will ensure the best NUMA node alignment. For example:
@ -54,31 +63,40 @@ aligned with 2 devices on numa0 per core and cores 28-33 will be aligned with 1
If the kernel driver is chosen, then for each job with an NVMe device, all CPU cores from the corresponding NUMA node are picked.
#### --rw
Type of I/O pattern. Accepted values are: randrw, rw
#### --rwmixread
Percentage of a mixed workload that should be reads.
#### --iodepth
Number of I/O units to keep in flight against each file.
#### --block-size
The block size in bytes used for I/O units.
#### --numjobs
Create the specified number of clones of a job.
#### --repeat-no
Specifies how many times to run each workload. End results are averages of these workload runs.
#### --no-preconditioning
By default, disks are preconditioned before the test using fio with parameters: size=100%, loops=2, bs=1M, rw=write,
iodepth=32, ioengine=spdk. Preconditioning can be skipped when this option is set.
#### "--no-io-scaling"
For the SPDK fio plugin, iodepth is multiplied by the number of devices. When this option is set, this multiplication will be disabled.
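Combining several of the options above, an invocation might look like the sketch below. The script name, path, and exact option syntax are assumptions (this document does not name the executable), so substitute the actual perf script used in your tree:

~~~{.sh}
# Hypothetical invocation with the SPDK bdev fio_plugin and a mixed random workload.
sudo ./run_perf.sh --driver=bdev --rw=randrw --rwmixread=70 \
    --block-size=4096 --iodepth=128 --numjobs=1 \
    --run-time=300 --ramp-time=60 --repeat-no=3 \
    --cpu-allowed=0,1,2,3 --disk-no=4
~~~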
## Results
Results are stored in the "results" folder. After each workload, the following files are copied to this folder:
the fio configuration file, json files with fio results, and logs with latencies sampled at a 250 ms interval.
The number of copied files depends on the number of repeats of each workload. Additionally, a csv file is created with averaged