From 1f813ec3dae7406cb52b7ef9e40002ab19ffe8d9 Mon Sep 17 00:00:00 2001
From: Chen Wang
Date: Mon, 27 Aug 2018 16:42:35 +0800
Subject: [PATCH] doc: fix typos in the doc directory

Change-Id: Ifff553ed70ce5aa8e7bdf6d8a8e9e9afb73e8a64
Signed-off-by: Chen Wang
Reviewed-on: https://review.gerrithub.io/423497
Tested-by: SPDK CI Jenkins
Chandler-Test-Pool: SPDK Automated Test System
Reviewed-by: Ben Walker
Reviewed-by: Jim Harris
---
 CHANGELOG.md            | 4 ++--
 README.md               | 4 ++--
 doc/applications.md     | 2 +-
 doc/bdev.md             | 6 +++---
 doc/blob.md             | 2 +-
 doc/concurrency.md      | 2 +-
 doc/jsonrpc.md          | 4 ++--
 doc/lvol.md             | 4 ++--
 doc/memory.md           | 2 +-
 doc/nvmf.md             | 4 ++--
 doc/nvmf_tgt_pg.md      | 2 +-
 doc/peer_2_peer.md      | 2 +-
 doc/spdkcli.md          | 4 ++--
 doc/template_pg.md      | 6 +++---
 doc/vagrant.md          | 2 +-
 doc/vhost_processing.md | 2 +-
 16 files changed, 26 insertions(+), 26 deletions(-)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 605cdf87a..ba0e112ab 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -265,7 +265,7 @@ A new `destroy_lvol_bdev` RPC method to delete logical volumes has been added.
 Lvols now have their own UUIDs which replace previous LvolStoreUUID_BlobID
 combination.
 
-New Snapshot and Clone funtionalities have been added. User may create Snapshots of existing Lvols
+New Snapshot and Clone functionalities have been added. User may create Snapshots of existing Lvols
 and Clones of existing Snapshots.
 See the [lvol snapshots](http://www.spdk.io/doc/logical_volumes.html#lvol_snapshots)
 documentation for more details.
@@ -414,7 +414,7 @@ See the [GPT](http://www.spdk.io/doc/bdev.html#bdev_config_gpt) documentation fo
 
 ### FIO plugin
 
-SPDK `fio_plugin` now suports FIO 3.3. The support for previous FIO 2.21 has been dropped,
+SPDK `fio_plugin` now supports FIO 3.3. The support for previous FIO 2.21 has been dropped,
 although it still remains to work for now. The new FIO contains huge amount of
 bugfixes and it's recommended to do an update.
diff --git a/README.md b/README.md
index e0e02d55c..d82b9db92 100644
--- a/README.md
+++ b/README.md
@@ -124,7 +124,7 @@ For example:
 ./configure --with-rdma
 ~~~
 
-Additionally, `CONFIG` options may also be overrriden on the `make` command
+Additionally, `CONFIG` options may also be overridden on the `make` command
 line:
 
 ~~~{.sh}
@@ -188,5 +188,5 @@ vfio.
 ## Contributing
 
 For additional details on how to get more involved in the community, including
-[contributing code](http://www.spdk.io/development) and participating in discussions and other activiites, please
+[contributing code](http://www.spdk.io/development) and participating in discussions and other activities, please
 refer to [spdk.io](http://www.spdk.io/community)
diff --git a/doc/applications.md b/doc/applications.md
index ae426aafa..6b9360a9e 100644
--- a/doc/applications.md
+++ b/doc/applications.md
@@ -77,7 +77,7 @@ For more details see @ref jsonrpc documentation.
 ### Create just one hugetlbfs file {#cmd_arg_single_file_segments}
 
 Instead of creating one hugetlbfs file per page, this option makes SPDK create
-one file per hugepagesz per socket. This is needed for @ref virtio to be used
+one file per hugepages per socket. This is needed for @ref virtio to be used
 with more than 8 hugepages. See @ref virtio_2mb.
 
 ### Multi process mode {#cmd_arg_multi_process}
diff --git a/doc/bdev.md b/doc/bdev.md
index aaa33bc65..f302ffd90 100644
--- a/doc/bdev.md
+++ b/doc/bdev.md
@@ -101,7 +101,7 @@ possibly multiple virtual bdevs.
 The SPDK partition type GUID is `7c5222bd-8f5d-4087-9c00-bf9843c7b58c`. Existing SPDK bdevs
 can be exposed as Linux block devices via NBD and then can be partitioned with
 standard partitioning tools. After partitioning, the bdevs will need to be deleted and
-attached again fot the GPT bdev module to see any changes. NBD kernel module must be
+attached again for the GPT bdev module to see any changes. NBD kernel module must be
 loaded first.
 
 To create NBD bdev user should use `start_nbd_disk` RPC command.
 
 Example command
@@ -224,7 +224,7 @@ please refer to @ref lvol.
 
 Before creating any logical volumes (lvols), an lvol store has to be created first
 on selected block device. Lvol store is lvols vessel responsible for managing underlying
-bdev space assigment to lvol bdevs and storing metadata. To create lvol store user
+bdev space assignment to lvol bdevs and storing metadata. To create lvol store user
 should use `construct_lvol_store` RPC command.
 
 Example command
@@ -274,7 +274,7 @@ Example commands
 
 # Passthru {#bdev_config_passthru}
 
 The SPDK Passthru virtual block device module serves as an example of how to write a
-virutal block device module. It implements the required functionality of a vbdev module
+virtual block device module. It implements the required functionality of a vbdev module
 and demonstrates some other basic features such as the use of per I/O context.
 
 Example commands
diff --git a/doc/blob.md b/doc/blob.md
index e3b023295..e6db9d3da 100644
--- a/doc/blob.md
+++ b/doc/blob.md
@@ -314,7 +314,7 @@ Cluster 0 is special and has the following format, where page 0 is the first pag
 The super block is a single page located at the beginning of the partition. It contains
 basic information about the Blobstore. The metadata region is the remainder of
 cluster 0 and may extend to additional clusters. Refer
-to the latest srouce code for complete structural details of the super block and metadata region.
+to the latest source code for complete structural details of the super block and metadata region.
 
 Each blob is allocated a non-contiguous set of pages inside the metadata region
 for its metadata. These pages form a linked list.
 The first page in the list will be written in place on update, while all other pages will
diff --git a/doc/concurrency.md b/doc/concurrency.md
index 240b6c3c6..d014bce5f 100644
--- a/doc/concurrency.md
+++ b/doc/concurrency.md
@@ -153,7 +153,7 @@ Don't split these functions up - keep them as a nice unit that can be read from
 
 For more complex callback chains, especially ones that have logical branches
 or loops, it's best to write out a state machine. It turns out that higher
-level langauges that support futures and promises are just generating state
+level languages that support futures and promises are just generating state
 machines at compile time, so even though we don't have the ability to generate
 them in C we can still write them out by hand. As an example, here's a callback
 chain that performs `foo` 5 times and then calls `bar` - effectively
diff --git a/doc/jsonrpc.md b/doc/jsonrpc.md
index 2c9d91a49..e5e70439e 100644
--- a/doc/jsonrpc.md
+++ b/doc/jsonrpc.md
@@ -372,7 +372,7 @@ name | Required | string | SPDK subsystem name
 
 ### Response
 
-The response is current configuration of the specfied SPDK subsystem.
+The response is current configuration of the specified SPDK subsystem.
 Null is returned if it is not retrievable by the get_subsystem_config method
 and empty array is returned if it is empty.
 
 ### Example
@@ -1349,7 +1349,7 @@ Example response:
 
 ## pmem_pool_info {#rpc_pmem_pool_info}
 
-Retrive basic information about PMDK memory pool.
+Retrieve basic information about PMDK memory pool.
 
 This method is available only if SPDK was built with PMDK support.
 
diff --git a/doc/lvol.md b/doc/lvol.md
index 9734bde62..2b966fdb8 100644
--- a/doc/lvol.md
+++ b/doc/lvol.md
@@ -63,7 +63,7 @@ Blobs can be inflated to copy data from backing devices (e.g. snapshots) and all
 
 ## Decoupling {#lvol_decoupling}
 
-Blobs can be decoupled from all dependencies by copying data from backing devices (e.g. snapshots) for all allocated clusters. Remainig unallocated clusters are kept thin provisioned.
+Blobs can be decoupled from all dependencies by copying data from backing devices (e.g. snapshots) for all allocated clusters. Remaining unallocated clusters are kept thin provisioned.
 
 # Configuring Logical Volumes
 
@@ -80,7 +80,7 @@ construct_lvol_store [-h] [-c CLUSTER_SZ] bdev_name lvs_name
         erased. Then original bdev is claimed by SPDK, but no additional spdk
         bdevs are created.
         Returns uuid of created lvolstore.
-        Optional paramters:
+        Optional parameters:
         -h  show help
         -c  CLUSTER_SZ Specifies the size of cluster. By default its 4MiB.
 destroy_lvol_store [-h] [-u UUID] [-l LVS_NAME]
diff --git a/doc/memory.md b/doc/memory.md
index e2b7bb2ad..abe81ffaf 100644
--- a/doc/memory.md
+++ b/doc/memory.md
@@ -87,7 +87,7 @@ never change their physical location. This is not by intent, and so things
 could change in future versions, but it is true today and has been for a number
 of years (see the later section on the IOMMU for a future-proof solution).
 DPDK goes through great pains to allocate hugepages such that it can string together
-the longest runs of physical pages possible, such that it can accomodate
+the longest runs of physical pages possible, such that it can accommodate
 physically contiguous allocations larger than a single page.
 
 With this explanation, hopefully it is now clear why all data buffers passed to
diff --git a/doc/nvmf.md b/doc/nvmf.md
index 87f1528ac..966a0771f 100644
--- a/doc/nvmf.md
+++ b/doc/nvmf.md
@@ -107,7 +107,7 @@ app/nvmf_tgt/nvmf_tgt -c /path/to/nvmf.conf
 
 ### Subsystem Configuration {#nvmf_config_subsystem}
 
 The `[Subsystem]` section in the configuration file is used to configure
-subysystems for the NVMe-oF target.
+subsystems for the NVMe-oF target.
 
 This example shows two local PCIe NVMe devices exposed as separate NVMe-oF
 target subsystems:
 
@@ -196,7 +196,7 @@ alphabetic hex digits in their NQNs.
 SPDK uses the [DPDK Environment Abstraction
 Layer](http://dpdk.org/doc/guides/prog_guide/env_abstraction_layer.html)
 to gain access to hardware resources such as huge memory pages and CPU core(s).
 DPDK EAL provides functions to assign threads to specific cores.
-To ensure the SPDK NVMe-oF target has the best performance, configure the RNICs and NVMe devices to
+To ensure the SPDK NVMe-oF target has the best performance, configure the NICs and NVMe devices to
 be located on the same NUMA node.
 
 The `-m` core mask option specifies a bit mask of the CPU cores that
diff --git a/doc/nvmf_tgt_pg.md b/doc/nvmf_tgt_pg.md
index e67e4520a..73ea0a7c8 100644
--- a/doc/nvmf_tgt_pg.md
+++ b/doc/nvmf_tgt_pg.md
@@ -201,4 +201,4 @@ object.
 Further, RDMA NICs expose different queue depths for READ/WRITE operations
 than they do for SEND/RECV operations. The RDMA transport reports available
 queue depth based on SEND/RECV operation limits and will queue in software as
-necessary to accomodate (usually lower) limits on READ/WRITE operations.
+necessary to accommodate (usually lower) limits on READ/WRITE operations.
diff --git a/doc/peer_2_peer.md b/doc/peer_2_peer.md
index 3be7d8592..ee39a4aeb 100644
--- a/doc/peer_2_peer.md
+++ b/doc/peer_2_peer.md
@@ -52,7 +52,7 @@ DMA buffer.
   provided by Broadcom or Microsemi) as that is known to provide good
   performance.
 * Even with a PCIe switch there may be occasions where peer-2-peer
-  DMAs fail to work. This is probaby due to PCIe Access Control
+  DMAs fail to work. This is probably due to PCIe Access Control
   Services (ACS) being enabled by the BIOS and/or OS. You can disable
   ACS using setpci or via out of tree kernel patches that can be found
   on the internet.
diff --git a/doc/spdkcli.md b/doc/spdkcli.md
index df72df885..b3d3e4095 100644
--- a/doc/spdkcli.md
+++ b/doc/spdkcli.md
@@ -22,7 +22,7 @@ Package dependencies at the moment include:
 
 ### Run SPDK CLI
 
-Spdkcli should be run with the same priviliges as SPDK application.
+Spdkcli should be run with the same privileges as SPDK application.
 In order to use SPDK CLI in interactive mode please use:
 ~~~{.sh}
 scripts/spdkcli.py
@@ -49,7 +49,7 @@ virtualenv-3 ./venv
 source ./venv/bin/activate
 ~~~
 
-Then install the dependencies using pip. That way depedencies will be
+Then install the dependencies using pip. That way dependencies will be
 installed only inside the virtual environment.
 ~~~{.sh}
 (venv) pip install configshell-fb
 ~~~
diff --git a/doc/template_pg.md b/doc/template_pg.md
index 253d73ba7..535a980c3 100644
--- a/doc/template_pg.md
+++ b/doc/template_pg.md
@@ -24,7 +24,7 @@ sequences will be discussed. For the latest source code reference refer to the [
 Provide some high level description of what this component is, what it does and maybe why it exists. This shouldn't be
 a lengthy tutorial or commentary on storage in general or the goodness of SPDK but provide enough information to
 set the stage for someone about to write an application to integrate with this component. They won't be totally
-starting from scratch if they're at this point, they are by defintion a storage applicaiton developer if they are
+starting from scratch if they're at this point, they are by definition a storage application developer if they are
 reading this guide.
 
 ## Theory of Operation {#componentname_pg_theory}
@@ -69,9 +69,9 @@ design docs as part of SPDK, the overhead and maintenance is too much for open s
 to provide some level of insight into the codebase to promote getting more people involved and understanding
 of what the design is all about. The PG is meant to help a developer write their own application but we can use
 this section, per module, to test out a way to build out some internal design info as well. I see
-this as including an overview of key structures, concepts, etc., of the module itself. So, intersting info
+this as including an overview of key structures, concepts, etc., of the module itself. So, interesting info
 not required to write an application using the module but maybe just enough to provide the next level of
-detail into what's behind the scenes to get someone more intertested in becoming a community contributor.
+detail into what's behind the scenes to get someone more interested in becoming a community contributor.
 
 ## Sequences {#componentname_pg_sequences}
diff --git a/doc/vagrant.md b/doc/vagrant.md
index 60febfa69..cd32739d3 100644
--- a/doc/vagrant.md
+++ b/doc/vagrant.md
@@ -59,7 +59,7 @@ By default, the VM created is configured with:
 
 Currently VirtualBox & libvirt provider are supported.
 
-For libvirt currently there is only centos 7 image avaiable.
+For libvirt currently there is only centos 7 image available.
 
 To run with libvirt:
 
diff --git a/doc/vhost_processing.md b/doc/vhost_processing.md
index 210d60e56..a882bd7aa 100644
--- a/doc/vhost_processing.md
+++ b/doc/vhost_processing.md
@@ -84,7 +84,7 @@ the Master and the Slave exposes a list of their implemented features and upon
 negotiation they choose a common set of those. Most of these features are
 implementation-related, but also regard e.g. multiqueue support or live
 migration.
 
-After the negotiatiation, the Vhost-user driver shares its memory, so that the vhost
+After the negotiation, the Vhost-user driver shares its memory, so that the vhost
 device (SPDK) can access it directly. The memory can be fragmented into
 multiple physically-discontiguous regions and Vhost-user specification puts
 a limit on their number - currently 8. The driver sends a single message for
 each region with