markdownlint: enable rule MD040

MD040 - Fenced code blocks should have a language specified
Fixed all errors

Signed-off-by: Maciej Wawryk <maciejx.wawryk@intel.com>
Change-Id: Iddd307068c1047ca9a0bb12c1b0d9c88f496765f
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9272
Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Maciej Wawryk 2021-08-24 09:04:22 +02:00 committed by Jim Harris
parent 1c81d1afa2
commit 63ee471b64
23 changed files with 515 additions and 502 deletions
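MD040 flags fenced code blocks that do not declare a language. The pattern applied throughout this change is to append a language tag (most often `bash`, plus `text`, `c`, `python` or `make` where appropriate) to the opening fence. An illustrative before/after sketch (hypothetical snippet, not copied verbatim from any one of the changed files):

```markdown
<!-- Before: MD040 warns "Fenced code blocks should have a language specified" -->
~~~
./configure --with-usdt
~~~

<!-- After: the opening fence names the language, so MD040 is satisfied -->
~~~bash
./configure --with-usdt
~~~
```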

View File

@@ -1882,7 +1882,7 @@ Preliminary support for cross compilation is now available. Targeting an older
 CPU on the same architecture using your native compiler can be accomplished by
 using the `--target-arch` option to `configure` as follows:
-~~~
+~~~bash
 ./configure --target-arch=broadwell
 ~~~
@@ -1890,7 +1890,7 @@ Additionally, some support for cross-compiling to other architectures has been
 added via the `--cross-prefix` argument to `configure`. To cross-compile, set CC
 and CXX to the cross compilers, then run configure as follows:
-~~~
+~~~bash
 ./configure --target-arch=aarm64 --cross-prefix=aarch64-linux-gnu
 ~~~

View File

@@ -129,7 +129,9 @@ Boolean (on/off) options are configured with a 'y' (yes) or 'n' (no). For
 example, this line of `CONFIG` controls whether the optional RDMA (libibverbs)
 support is enabled:
-CONFIG_RDMA?=n
+~~~{.sh}
+CONFIG_RDMA?=n
+~~~
 To enable RDMA, this line may be added to `mk/config.mk` with a 'y' instead of
 'n'. For the majority of options this can be done using the `configure` script.

View File

@@ -151,7 +151,7 @@ Whenever the `CPU mask` is mentioned it is a string in one of the following form
 The following CPU masks are equal and correspond to CPUs 0, 1, 2, 8, 9, 10, 11 and 12:
-~~~
+~~~bash
 0x1f07
 0x1F07
 1f07

View File

@@ -236,7 +236,7 @@ Example command
 ### Creating a GPT partition table using NBD {#bdev_ug_gpt_create_part}
-~~~
+~~~bash
 # Expose bdev Nvme0n1 as kernel block device /dev/nbd0 by JSON-RPC
 rpc.py nbd_start_disk Nvme0n1 /dev/nbd0

View File

@@ -88,7 +88,7 @@ In these examples, the value "X" will represent the special value (2^64-1) descr
 ### Initial Creation
-```
+```text
 +--------------------+
 Backing Device | |
 +--------------------+
@@ -123,7 +123,7 @@ In these examples, the value "X" will represent the special value (2^64-1) descr
 store the 16KB of data.
 * Write the chunk map index to entry 2 in the logical map.
-```
+```text
 +--------------------+
 Backing Device |01 |
 +--------------------+
@@ -157,7 +157,7 @@ In these examples, the value "X" will represent the special value (2^64-1) descr
 * Write (2, X, X, X) to the chunk map.
 * Write the chunk map index to entry 0 in the logical map.
-```
+```text
 +--------------------+
 Backing Device |012 |
 +--------------------+
@@ -205,7 +205,7 @@ In these examples, the value "X" will represent the special value (2^64-1) descr
 * Free chunk map 1 back to the free chunk map list.
 * Free backing IO unit 2 back to the free backing IO unit list.
-```
+```text
 +--------------------+
 Backing Device |01 34 |
 +--------------------+

View File

@@ -156,7 +156,7 @@ To verify that the drive is emulated correctly, one can check the output of the
 (assuming that `scripts/setup.sh` was called before and the driver has been changed for that
 device):
-```
+```bash
 $ build/examples/identify
 =====================================================
 NVMe Controller at 0000:00:0a.0 [1d1d:1f1f]

View File

@@ -32,7 +32,7 @@ To ensure the SPDK iSCSI target has the best performance, place the NICs and the
 same NUMA node and configure the target to run on CPU cores associated with that node. The following
 command line option is used to configure the SPDK iSCSI target:
-~~~
+~~~bash
 -m 0xF000000
 ~~~
@@ -51,7 +51,7 @@ The iSCSI target is configured via JSON-RPC calls. See @ref jsonrpc for details.
 - iscsi_target_node_remove_pg_ig_maps -- Delete initiator group to portal group mappings from an existing iSCSI target node.
 - iscsi_get_portal_groups -- Show information about all available portal groups.
-~~~
+~~~bash
 /path/to/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
 ~~~
@@ -62,7 +62,7 @@ The iSCSI target is configured via JSON-RPC calls. See @ref jsonrpc for details.
 - iscsi_initiator_group_add_initiators -- Add initiators to an existing initiator group.
 - iscsi_get_initiator_groups -- Show information about all available initiator groups.
-~~~
+~~~bash
 /path/to/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
 ~~~
@@ -73,7 +73,7 @@ The iSCSI target is configured via JSON-RPC calls. See @ref jsonrpc for details.
 - iscsi_target_node_add_lun -- Add a LUN to an existing iSCSI target node.
 - iscsi_get_target_nodes -- Show information about all available iSCSI target nodes.
-~~~
+~~~bash
 /path/to/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -d
 ~~~
@@ -83,30 +83,30 @@ The Linux initiator is open-iscsi.
 Installing open-iscsi package
 Fedora:
-~~~
+~~~bash
 yum install -y iscsi-initiator-utils
 ~~~
 Ubuntu:
-~~~
+~~~bash
 apt-get install -y open-iscsi
 ~~~
 ### Setup
 Edit /etc/iscsi/iscsid.conf
-~~~
+~~~bash
 node.session.cmds_max = 4096
 node.session.queue_depth = 128
 ~~~
 iscsid must be restarted or receive SIGHUP for changes to take effect. To send SIGHUP, run:
-~~~
+~~~bash
 killall -HUP iscsid
 ~~~
 Recommended changes to /etc/sysctl.conf
-~~~
+~~~bash
 net.ipv4.tcp_timestamps = 1
 net.ipv4.tcp_sack = 0
@@ -124,13 +124,14 @@ net.core.netdev_max_backlog = 300000
 ### Discovery
 Assume target is at 10.0.0.1
-~~~
+~~~bash
 iscsiadm -m discovery -t sendtargets -p 10.0.0.1
 ~~~
 ### Connect to target
-~~~
+~~~bash
 iscsiadm -m node --login
 ~~~
@@ -139,13 +140,13 @@ they came up as.
 ### Disconnect from target
-~~~
+~~~bash
 iscsiadm -m node --logout
 ~~~
 ### Deleting target node cache
-~~~
+~~~bash
 iscsiadm -m node -o delete
 ~~~
@@ -153,7 +154,7 @@ This will cause the initiator to forget all previously discovered iSCSI target n
 ### Finding /dev/sdX nodes for iSCSI LUNs
-~~~
+~~~bash
 iscsiadm -m session -P 3 | grep "Attached scsi disk" | awk '{print $4}'
 ~~~
@@ -165,19 +166,19 @@ After the targets are connected, they can be tuned. For example if /dev/sdc is
 an iSCSI disk then the following can be done:
 Set noop to scheduler
-~~~
+~~~bash
 echo noop > /sys/block/sdc/queue/scheduler
 ~~~
 Disable merging/coalescing (can be useful for precise workload measurements)
-~~~
+~~~bash
 echo "2" > /sys/block/sdc/queue/nomerges
 ~~~
 Increase requests for block queue
-~~~
+~~~bash
 echo "1024" > /sys/block/sdc/queue/nr_requests
 ~~~
@@ -191,33 +192,34 @@ Assuming we have one iSCSI Target server with portal at 10.0.0.1:3200, two LUNs
 #### Configure iSCSI Target
 Start iscsi_tgt application:
-```
+```bash
 ./build/bin/iscsi_tgt
 ```
 Construct two 64MB Malloc block devices with 512B sector size "Malloc0" and "Malloc1":
-```
+```bash
 ./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
 ./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
 ```
 Create new portal group with id 1, and address 10.0.0.1:3260:
-```
+```bash
 ./scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
 ```
 Create one initiator group with id 2 to accept any connection from 10.0.0.2/32:
-```
+```bash
 ./scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
 ```
 Finally construct one target using previously created bdevs as LUN0 (Malloc0) and LUN1 (Malloc1)
 with a name "disk1" and alias "Data Disk1" using portal group 1 and initiator group 2.
-```
+```bash
 ./scripts/rpc.py iscsi_create_target_node disk1 "Data Disk1" "Malloc0:0 Malloc1:1" 1:2 64 -d
 ```
@@ -225,14 +227,14 @@ with a name "disk1" and alias "Data Disk1" using portal group 1 and initiator gr
 Discover target
-~~~
+~~~bash
 $ iscsiadm -m discovery -t sendtargets -p 10.0.0.1
 10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1
 ~~~
 Connect to the target
-~~~
+~~~bash
 iscsiadm -m node --login
 ~~~
@@ -240,7 +242,7 @@ At this point the iSCSI target should show up as SCSI disks.
 Check dmesg to see what they came up as. In this example it can look like below:
-~~~
+~~~bash
 ...
 [630111.860078] scsi host68: iSCSI Initiator over TCP/IP
 [630112.124743] scsi 68:0:0:0: Direct-Access INTEL Malloc disk 0001 PQ: 0 ANSI: 5
@@ -263,7 +265,7 @@ Check dmesg to see what they came up as. In this example it can look like below:
 You may also use simple bash command to find /dev/sdX nodes for each iSCSI LUN
 in all logged iSCSI sessions:
-~~~
+~~~bash
 $ iscsiadm -m session -P 3 | grep "Attached scsi disk" | awk '{print $4}'
 sdd
 sde

File diff suppressed because it is too large

View File

@@ -31,7 +31,7 @@ Status 200 with resultant JSON object included on success.
 Below is a sample python script acting as a client side. It sends `bdev_get_bdevs` method with optional `name`
 parameter and prints JSON object returned from remote_rpc script.
-~~~
+~~~python
 import json
 import requests
@@ -48,7 +48,7 @@ if __name__ == '__main__':
 Output:
-~~~
+~~~python
 python client.py
 [{u'num_blocks': 2621440, u'name': u'Malloc0', u'uuid': u'fb57e59c-599d-42f1-8b89-3e46dbe12641', u'claimed': True,
 u'driver_specific': {}, u'supported_io_types': {u'reset': True, u'nvme_admin': False, u'unmap': True, u'read': True,

View File

@@ -97,7 +97,7 @@ logical volumes is kept on block devices.
 RPC regarding lvolstore:
-```
+```bash
 bdev_lvol_create_lvstore [-h] [-c CLUSTER_SZ] bdev_name lvs_name
 Constructs lvolstore on specified bdev with specified name. During
 construction bdev is unmapped at initialization and all data is
@@ -129,7 +129,7 @@ bdev_lvol_rename_lvstore [-h] old_name new_name
 RPC regarding lvol and spdk bdev:
-```
+```bash
 bdev_lvol_create [-h] [-u UUID] [-l LVS_NAME] [-t] [-c CLEAR_METHOD] lvol_name size
 Creates lvol with specified size and name on lvolstore specified by its uuid
 or name. Then constructs spdk bdev on top of that lvol and presents it as spdk bdev.

View File

@@ -131,7 +131,7 @@ E.g. To send fused compare and write operation user must call spdk_nvme_ns_cmd_c
 followed with spdk_nvme_ns_cmd_write and make sure no other operations are submitted
 in between on the same queue, like in example below:
-~~~
+~~~c
 rc = spdk_nvme_ns_cmd_compare(ns, qpair, cmp_buf, 0, 1, nvme_fused_first_cpl_cb,
 NULL, SPDK_NVME_CMD_FUSE_FIRST);
 if (rc != 0) {

View File

@@ -17,14 +17,14 @@ Tracepoints are placed in groups. They are enabled and disabled as a group. To e
 the instrumentation of all the tracepoints group in an SPDK target application, start the
 target with -e parameter set to 0xFFFF:
-~~~
+~~~bash
 build/bin/nvmf_tgt -e 0xFFFF
 ~~~
 To enable the instrumentation of just the NVMe-oF RDMA tracepoints in an SPDK target
 application, start the target with the -e parameter set to 0x10:
-~~~
+~~~bash
 build/bin/nvmf_tgt -e 0x10
 ~~~
@@ -32,7 +32,7 @@ When the target starts, a message is logged with the information you need to vie
 the tracepoints in a human-readable format using the spdk_trace application. The target
 will also log information about the shared memory file.
-~~~{.sh}
+~~~bash
 app.c: 527:spdk_app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
 app.c: 531:spdk_app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -p 24147' to capture a snapshot of events at runtime.
 app.c: 533:spdk_app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.pid24147 for offline analysis/debug.
@@ -49,14 +49,14 @@ Send I/Os to the SPDK target application to generate events. The following is
 an example usage of perf to send I/Os to the NVMe-oF target over an RDMA network
 interface for 10 minutes.
-~~~
+~~~bash
 ./perf -q 128 -s 4096 -w randread -t 600 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.2 trsvcid:4420'
 ~~~
 The spdk_trace program can be found in the app/trace directory. To analyze the tracepoints on the same
 system running the NVMe-oF target, simply execute the command line shown in the log:
-~~~{.sh}
+~~~bash
 build/bin/spdk_trace -s nvmf -p 24147
 ~~~
@@ -64,13 +64,13 @@ To analyze the tracepoints on a different system, first prepare the tracepoint f
 tracepoint file can be large, but usually compresses very well. This step can also be used to prepare
 a tracepoint file to attach to a GitHub issue for debugging NVMe-oF application crashes.
-~~~{.sh}
+~~~bash
 bzip2 -c /dev/shm/nvmf_trace.pid24147 > /tmp/trace.bz2
 ~~~
 After transferring the /tmp/trace.bz2 tracepoint file to a different system:
-~~~{.sh}
+~~~bash
 bunzip2 /tmp/trace.bz2
 build/bin/spdk_trace -f /tmp/trace
 ~~~
@@ -79,7 +79,7 @@ The following is sample trace capture showing the cumulative time that each
 I/O spends at each RDMA state. All the trace captures with the same id are for
 the same I/O.
-~~~
+~~~bash
 28: 6026.658 ( 12656064) RDMA_REQ_NEED_BUFFER id: r3622 time: 0.019
 28: 6026.694 ( 12656140) RDMA_REQ_RDY_TO_EXECUTE id: r3622 time: 0.055
 28: 6026.820 ( 12656406) RDMA_REQ_EXECUTING id: r3622 time: 0.182
@@ -135,20 +135,20 @@ spdk_trace_record is used to poll the spdk tracepoint shared memory, record new
 and store all entries into specified output file at its shutdown on SIGINT or SIGTERM.
 After SPDK nvmf target is launched, simply execute the command line shown in the log:
-~~~{.sh}
+~~~bash
 build/bin/spdk_trace_record -q -s nvmf -p 24147 -f /tmp/spdk_nvmf_record.trace
 ~~~
 Also send I/Os to the SPDK target application to generate events by previous perf example for 10 minutes.
-~~~{.sh}
+~~~bash
 ./perf -q 128 -s 4096 -w randread -t 600 -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.2 trsvcid:4420'
 ~~~
 After the completion of perf example, shut down spdk_trace_record by signal SIGINT (Ctrl + C).
 To analyze the tracepoints output file from spdk_trace_record, simply run spdk_trace program by:
-~~~{.sh}
+~~~bash
 build/bin/spdk_trace -f /tmp/spdk_nvmf_record.trace
 ~~~
@@ -159,7 +159,7 @@ tracepoints to the existing trace groups. For example, to add a new tracepoints
 to the SPDK RDMA library (lib/nvmf/rdma.c) trace group TRACE_GROUP_NVMF_RDMA,
 define the tracepoints and assigning them a unique ID using the SPDK_TPOINT_ID macro:
-~~~
+~~~c
 #define TRACE_GROUP_NVMF_RDMA 0x4
 #define TRACE_RDMA_REQUEST_STATE_NEW SPDK_TPOINT_ID(TRACE_GROUP_NVMF_RDMA, 0x0)
 ...
@@ -170,7 +170,7 @@ You also need to register the new trace points in the SPDK_TRACE_REGISTER_FN mac
 within the application/library using the spdk_trace_register_description function
 as shown below:
-~~~
+~~~c
 SPDK_TRACE_REGISTER_FN(nvmf_trace)
 {
 spdk_trace_register_object(OBJECT_NVMF_RDMA_IO, 'r');
@@ -191,7 +191,7 @@ application/library to record the current trace state for the new trace points.
 The following example shows the usage of the spdk_trace_record function to
 record the current trace state of several tracepoints.
-~~~
+~~~c
 case RDMA_REQUEST_STATE_NEW:
 spdk_trace_record(TRACE_RDMA_REQUEST_STATE_NEW, 0, 0, (uintptr_t)rdma_req, (uintptr_t)rqpair->cm_id);
 ...

View File

@@ -8,21 +8,21 @@ when SPDK adds or modifies library dependencies.
 If your application is using the SPDK nvme library, you would use the following
 to get the list of required SPDK libraries:
-~~~
+~~~bash
 PKG_CONFIG_PATH=/path/to/spdk/build/lib/pkgconfig pkg-config --libs spdk_nvme
 ~~~
 To get the list of required SPDK and DPDK libraries to use the DPDK-based
 environment layer:
-~~~
+~~~bash
 PKG_CONFIG_PATH=/path/to/spdk/build/lib/pkgconfig pkg-config --libs spdk_env_dpdk
 ~~~
 When linking with static libraries, the dependent system libraries must also be
 specified. To get the list of required system libraries:
-~~~
+~~~bash
 PKG_CONFIG_PATH=/path/to/spdk/build/lib/pkgconfig pkg-config --libs spdk_syslibs
 ~~~
@@ -33,7 +33,7 @@ the `-Wl,--no-as-needed` parameters while with static libraries `-Wl,--whole-arc
 is used. Here is an example Makefile snippet that shows how to use pkg-config to link
 an application that uses the SPDK nvme shared library:
-~~~
+~~~bash
 PKG_CONFIG_PATH = $(SPDK_DIR)/build/lib/pkgconfig
 SPDK_LIB := $(shell PKG_CONFIG_PATH="$(PKG_CONFIG_PATH)" pkg-config --libs spdk_nvme
 DPDK_LIB := $(shell PKG_CONFIG_PATH="$(PKG_CONFIG_PATH)" pkg-config --libs spdk_env_dpdk
@@ -44,7 +44,7 @@ app:
 If using the SPDK nvme static library:
-~~~
+~~~bash
 PKG_CONFIG_PATH = $(SPDK_DIR)/build/lib/pkgconfig
 SPDK_LIB := $(shell PKG_CONFIG_PATH="$(PKG_CONFIG_PATH)" pkg-config --libs spdk_nvme
 DPDK_LIB := $(shell PKG_CONFIG_PATH="$(PKG_CONFIG_PATH)" pkg-config --libs spdk_env_dpdk

View File

@@ -115,7 +115,7 @@ shared by its vhost clients as described in the
 Open the `/etc/security/limits.conf` file as root and append the following:
-```
+```bash
 spdk hard memlock unlimited
 spdk soft memlock unlimited
 ```

View File

@@ -28,7 +28,7 @@ flex
 We have found issues with the packaged bpftrace on both Ubuntu 20.04
 and Fedora 33. So bpftrace should be built and installed from source.
-```
+```bash
 git clone https://github.com/iovisor/bpftrace.git
 mkdir bpftrace/build
 cd bpftrace/build
@@ -42,7 +42,7 @@ sudo make install
 bpftrace.sh is a helper script that facilitates running bpftrace scripts
 against a running SPDK application. Here is a typical usage:
-```
+```bash
 scripts/bpftrace.sh `pidof spdk_tgt` scripts/bpf/nvmf.bt
 ```
@@ -58,7 +58,7 @@ that string with the PID provided to the script.
 ## Configuring SPDK Build
-```
+```bash
 ./configure --with-usdt
 ```
@@ -66,13 +66,13 @@ that string with the PID provided to the script.
 From first terminal:
-```
+```bash
 build/bin/spdk_tgt -m 0xC
 ```
 From second terminal:
-```
+```bash
 scripts/bpftrace.sh `pidof spdk_tgt` scripts/bpf/nvmf.bt
 ```
@@ -81,7 +81,7 @@ group info state transitions.
 From third terminal:
-```
+```bash
 scripts/rpc.py <<EOF
 nvmf_create_transport -t tcp
 nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
@@ -96,7 +96,7 @@ port, and a null bdev which is added as a namespace to the new nvmf subsystem.
 You will see output from the second terminal that looks like this:
-```
+```bash
 2110.935735: nvmf_tgt reached state NONE
 2110.954316: nvmf_tgt reached state CREATE_TARGET
 2110.967905: nvmf_tgt reached state CREATE_POLL_GROUPS
@@ -145,14 +145,14 @@ it again with the send_msg.bt script. This script keeps a count of
 functions executed as part of an spdk_for_each_channel or
 spdk_thread_send_msg function call.
-```
+```bash
 scripts/bpftrace.sh `pidof spdk_tgt` scripts/bpf/send_msg.bt
 ```
 From the third terminal, create another null bdev and add it as a
 namespace to the cnode1 subsystem.
-```
+```bash
 scripts/rpc.py <<EOF
 bdev_null_create null1 1000 512
 nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 null1
@@ -162,7 +162,7 @@ EOF
 Now Ctrl-C the bpftrace.sh in the second terminal, and it will
 print the final results of the maps.
-```
+```bash
 @for_each_channel[subsystem_state_change_on_pg]: 2
 @send_msg[_finish_unregister]: 1

View File

@@ -18,12 +18,10 @@ reference.
 Reading from the
 [Virtio specification](http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.html):
-```
-The purpose of virtio and [virtio] specification is that virtual environments
-and guests should have a straightforward, efficient, standard and extensible
-mechanism for virtual devices, rather than boutique per-environment or per-OS
-mechanisms.
-```
+> The purpose of virtio and [virtio] specification is that virtual environments
+> and guests should have a straightforward, efficient, standard and extensible
+> mechanism for virtual devices, rather than boutique per-environment or per-OS
+> mechanisms.
 Virtio devices use virtqueues to transport data efficiently. Virtqueue is a set
 of three different single-producer, single-consumer ring structures designed to
@@ -47,23 +45,21 @@ SPDK to expose a vhost device is Vhost-user protocol.
 The [Vhost-user specification](https://git.qemu.org/?p=qemu.git;a=blob_plain;f=docs/interop/vhost-user.txt;hb=HEAD)
 describes the protocol as follows:
-```
-[Vhost-user protocol] is aiming to complement the ioctl interface used to
-control the vhost implementation in the Linux kernel. It implements the control
-plane needed to establish virtqueue sharing with a user space process on the
-same host. It uses communication over a Unix domain socket to share file
-descriptors in the ancillary data of the message.
-
-The protocol defines 2 sides of the communication, master and slave. Master is
-the application that shares its virtqueues, in our case QEMU. Slave is the
-consumer of the virtqueues.
-
-In the current implementation QEMU is the Master, and the Slave is intended to
-be a software Ethernet switch running in user space, such as Snabbswitch.
-
-Master and slave can be either a client (i.e. connecting) or server (listening)
-in the socket communication.
-```
+> [Vhost-user protocol] is aiming to complement the ioctl interface used to
+> control the vhost implementation in the Linux kernel. It implements the control
+> plane needed to establish virtqueue sharing with a user space process on the
+> same host. It uses communication over a Unix domain socket to share file
+> descriptors in the ancillary data of the message.
+>
+> The protocol defines 2 sides of the communication, master and slave. Master is
+> the application that shares its virtqueues, in our case QEMU. Slave is the
+> consumer of the virtqueues.
+>
+> In the current implementation QEMU is the Master, and the Slave is intended to
+> be a software Ethernet switch running in user space, such as Snabbswitch.
+>
+> Master and slave can be either a client (i.e. connecting) or server (listening)
+> in the socket communication.
 SPDK vhost is a Vhost-user slave server. It exposes Unix domain sockets and
 allows external applications to connect.
@@ -125,7 +121,7 @@ the request data, and putting guest addresses of those buffers into virtqueues.
 A Virtio-Block request looks as follows.
-```
+```c
 struct virtio_blk_req {
 uint32_t type; // READ, WRITE, FLUSH (read-only)
 uint64_t offset; // offset in the disk (read-only)
@@ -135,7 +131,7 @@ struct virtio_blk_req {
 ```
 And a Virtio-SCSI request as follows.
-```
+```c
 struct virtio_scsi_req_cmd {
 struct virtio_scsi_cmd_req *req; // request data (read-only)
 struct iovec read_only_buffers[]; // scatter-gatter list for WRITE I/Os
@@ -149,7 +145,7 @@ to be converted into a chain of such descriptors. A single descriptor can be
 either readable or writable, so each I/O request consists of at least two
 (request + response).
-```
+```c
 struct virtq_desc {
 /* Address (guest-physical). */
 le64 addr;

View File

@@ -2,4 +2,6 @@
 To use perf test on FreeBSD over NVMe-oF, explicitly link userspace library of HBA. For example, on a setup with Mellanox HBA,
+```make
 LIBS += -lmlx5
+```

View File

@@ -8,6 +8,5 @@ rule 'MD029', :style => "ordered"
 exclude_rule 'MD031'
 exclude_rule 'MD033'
 exclude_rule 'MD034'
-exclude_rule 'MD040'
 exclude_rule 'MD041'
 exclude_rule 'MD046'

View File

@@ -22,7 +22,7 @@ Quick start instructions for OSX:
 * Note: The extension pack has different licensing than main VirtualBox, please
 review them carefully as the evaluation license is for personal use only.
-```
+```bash
 /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
 brew doctor
 brew update
@@ -69,7 +69,7 @@ If you are behind a corporate firewall, configure the following proxy settings.
 1. Set the http_proxy and https_proxy
 2. Install the proxyconf plugin
-```
+```bash
 $ export http_proxy=....
 $ export https_proxy=....
 $ vagrant plugin install vagrant-proxyconf
@@ -93,7 +93,7 @@ Use the `spdk/scripts/vagrant/create_vbox.sh` script to create a VM of your choi
 - fedora28
 - freebsd11
-```
+```bash
 $ spdk/scripts/vagrant/create_vbox.sh -h
 Usage: create_vbox.sh [-n <num-cpus>] [-s <ram-size>] [-x <http-proxy>] [-hvrld] <distro>
@@ -124,7 +124,7 @@ It is recommended that you call the `create_vbox.sh` script from outside of the
 Call this script from a parent directory. This will allow the creation of multiple VMs in separate
 <distro> directories, all using the same spdk repository. For example:
-```
+```bash
 $ spdk/scripts/vagrant/create_vbox.sh -s 2048 -n 2 fedora26
 ```
@@ -141,7 +141,7 @@ This script will:
 This arrangement allows the provisioning of multiple, different VMs within that same directory hierarchy using thesame
 spdk repository. Following the creation of the vm you'll need to ssh into your virtual box and finish the VM initialization.
-```
+```bash
 $ cd <distro>
 $ vagrant ssh
 ```
@@ -152,7 +152,7 @@ A copy of the `spdk` repository you cloned will exist in the `spdk_repo` directo
 account. After using `vagrant ssh` to enter your VM you must complete the initialization of your VM by running
 the `scripts/vagrant/update.sh` script. For example:
-```
+```bash
 $ script -c 'sudo spdk_repo/spdk/scripts/vagrant/update.sh' update.log
 ```
@@ -175,14 +175,14 @@ Following VM initialization you must:
 ### Verify you have an emulated NVMe device
-```
+```bash
 $ lspci | grep "Non-Volatile"
 00:0e.0 Non-Volatile memory controller: InnoTek Systemberatung GmbH Device 4e56
 ```
 ### Compile SPDK
-```
+```bash
 $ cd spdk_repo/spdk
 $ git submodule update --init
 $ ./configure --enable-debug
@@ -191,7 +191,7 @@ Following VM initialization you must:
 ### Run the hello_world example script
-```
+```bash
 $ sudo scripts/setup.sh
 $ sudo scripts/gen_nvme.sh --json-with-subsystems > ./build/examples/hello_bdev.json
 $ sudo ./build/examples/hello_bdev --json ./build/examples/hello_bdev.json -b Nvme0n1
@@ -202,7 +202,7 @@ Following VM initialization you must:
 After running vm_setup.sh the `run-autorun.sh` can be used to run `spdk/autorun.sh` on a Fedora vagrant machine.
 Note that the `spdk/scripts/vagrant/autorun-spdk.conf` should be copied to `~/autorun-spdk.conf` before starting your tests.
-```
+```bash
 $ cp spdk/scripts/vagrant/autorun-spdk.conf ~/
 $ spdk/scripts/vagrant/run-autorun.sh -h
 Usage: scripts/vagrant/run-autorun.sh -d <path_to_spdk_tree> [-h] | [-q] | [-n]
@@ -224,7 +224,7 @@ Note that the `spdk/scripts/vagrant/autorun-spdk.conf` should be copied to `~/au
 The following steps are done by the `update.sh` script. It is recommended that you capture the output of `update.sh` with a typescript. E.g.:
-```
+```bash
 $ script update.log sudo spdk_repo/spdk/scripts/vagrant/update.sh
 ```
@@ -232,7 +232,7 @@ The following steps are done by the `update.sh` script. It is recommended that y
 1. Installs the needed FreeBSD packages on the system by calling pkgdep.sh
 2. Installs the FreeBSD source in /usr/src
-```
+```bash
 $ sudo pkg upgrade -f
 $ sudo spdk_repo/spdk/scripts/pkgdep.sh --all
 $ sudo git clone --depth 10 -b releases/11.1.0 https://github.com/freebsd/freebsd.git /usr/src
@@ -240,7 +240,7 @@ The following steps are done by the `update.sh` script. It is recommended that y
 To build spdk on FreeBSD use `gmake MAKE=gmake`. E.g.:
-```
+```bash
 $ cd spdk_repo/spdk
 $ git submodule update --init
 $ ./configure --enable-debug

View File

@@ -25,6 +25,6 @@ script for targeted debugging on a subsequent run.
 At the end of each test run, a summary is printed in the following format:
-~~~
+~~~bash
 device 0x11c3b90 stats: Sent 1543 valid opcode PDUs, 16215 invalid opcode PDUs.
 ~~~

View File

@@ -26,7 +26,7 @@ This can be overridden with the -V flag. if -V is specified, each command will b
 it is completed in the JSON format specified above.
 At the end of each test run, a summary is printed for each namespace in the following format:
-~~~
+~~~bash
 NS: 0x200079262300 admin qp, Total commands completed: 462459, total successful commands: 1960, random_seed: 4276918833
 ~~~

View File

@@ -38,7 +38,7 @@ submitted to the proper block devices.
 The vhost fuzzer differs from the NVMe fuzzer in that it expects devices to be configured via rpc. The fuzzer should
 always be started with the --wait-for-rpc argument. Please see below for an example of starting the fuzzer.
-~~~
+~~~bash
 ./test/app/fuzz/vhost_fuzz/vhost_fuzz -t 30 --wait-for-rpc &
 ./scripts/rpc.py fuzz_vhost_create_dev -s ./Vhost.1 -b -V
 ./scripts/rpc.py fuzz_vhost_create_dev -s ./naa.VhostScsi0.1 -l -V

View File

@@ -8,7 +8,7 @@ This directory also contains a convenient test script, test_make.sh, which autom
 and testing all six of these linker options. It takes a single argument, the path to an SPDK
 repository and should be run as follows:
-~~~
+~~~bash
 sudo ./test_make.sh /path/to/spdk
 ~~~