Commit Graph

6 Commits

Author SHA1 Message Date
paul luse
397cf3f884 lib/idxd: small code cleanup
Suggestions from a prior review... we were able to remove a boolean by
changing how the batch elements' 'index' and 'remaining' are used.

Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I273e5e231bb30d51eb3ae0a59eec110377d49ab7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/4813
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2020-10-22 22:43:28 +00:00
paul luse
63d7ac35c9 lib/idxd: small code simplification
Earlier refactoring means we no longer have to track batch completions in
the batch struct; they are always used sequentially now, so we can simply add
the addresses from the start up to the number of elements in the batch.
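
A minimal sketch of that simplification, assuming hypothetical struct and
field names (comps, index) rather than the actual SPDK definitions:

```c
#include <stdint.h>

/* Illustrative only; these are not the real SPDK idxd structs. */
struct comp_record {
	volatile uint8_t status;        /* non-zero once the device completes the element */
};

struct batch_sketch {
	struct comp_record *comps;      /* contiguous per-batch completion records */
	uint32_t index;                 /* number of elements prepared in this batch */
};

/* Completion records are consumed sequentially, so walk from the start of
 * the array up to the number of prepared elements instead of tracking
 * completion state per element inside the batch struct. */
static uint32_t
count_completed(struct batch_sketch *batch)
{
	uint32_t done = 0;

	for (uint32_t i = 0; i < batch->index; i++) {
		if (batch->comps[i].status != 0) {
			done++;
		}
	}
	return done;
}
```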

Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I00cdcdec3376a1c32c9dab72c68fea868c1cb540
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/4810
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Community-CI: Mellanox Build Bot
2020-10-22 22:43:28 +00:00
paul luse
33eac886b9 lib/idxd: refactor batching for increased performance
This also eliminates an artificial constraint on the number of user descriptors.
The main idea here was to move from a single ring that covered all
user descriptors to a pre-allocated ring per pre-allocated batch.

In addition, the other major change here is in how we poll for
completions.  We used to poll the batch rings then the main ring.
Now, when commands are prepared, their completion address is added to
a per-channel list and the poller simply runs through that list without
caring which ring the completion address belongs to. This simplifies the
completion logic considerably and avoids polling locations that cannot
possibly have a completion.
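
An illustrative sketch of that per-channel completion list; the structure
and field names here are hypothetical, only the sys/queue.h macros are
real API:

```c
#include <stdint.h>
#include <sys/queue.h>

/* Hypothetical op/channel structures for illustration. */
struct outstanding_op {
	volatile uint8_t *comp_status;          /* address of this op's completion record */
	void (*cb_fn)(void *cb_arg);            /* user callback */
	void *cb_arg;
	TAILQ_ENTRY(outstanding_op) link;
};

struct chan_sketch {
	/* Appended to as commands (and batches) are prepared on the channel. */
	TAILQ_HEAD(, outstanding_op) ops_outstanding;
};

/* The poller walks one list per channel; it does not care which ring a
 * completion record lives on and never reads addresses that cannot have a
 * pending completion. */
static int
poll_channel(struct chan_sketch *chan)
{
	struct outstanding_op *op = TAILQ_FIRST(&chan->ops_outstanding);
	int completed = 0;

	while (op != NULL) {
		struct outstanding_op *next = TAILQ_NEXT(op, link);

		if (*op->comp_status != 0) {
			TAILQ_REMOVE(&chan->ops_outstanding, op, link);
			op->cb_fn(op->cb_arg);
			completed++;
		}
		op = next;
	}
	return completed;
}
```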

Some minor rework was included as well, mainly getting rid of the
ring_ctrl struct; it didn't serve much of a purpose anyway, and with how
things are set up now it's easier to read with all the elements in the
channel struct.

Also, a change that came in while this was a WIP needed a few fixes
to function correctly.  Addressed those and moved them into a
helper function so we have one point of control for translations.

Added support for NOP in cases where a batch is submitted with
only 1 descriptor.

Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: Ie201b28118823100e908e0d1b08e7c10bb8fa9e7
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/3654
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
2020-10-22 22:43:28 +00:00
paul luse
fc250841ca idxd: add batch capability to accel framework and IDXD back-end
This patch only includes the basic framework for batching and the
ability to batch one type of command, copy. Follow-on patches will
add the ability to batch other commands and include an example of
how to do so via the accel perf tool.  SW engine support for batching
will also come in a future patch. Documentation will also be coming.

Batching allows the application to submit a list of independent
descriptors to DSA with one single "batch" descriptor. This is beneficial
when the application is in a position to have several operations ready
at once; batching saves the overhead of submitting each one separately.

The way batching works in SPDK is as follows:

1) The app gets a handle to a new batch with spdk_accel_batch_create()
2) The app uses that handle to prepare a command to be included in the
batch. For copy the command is spdk_accel_batch_prep_copy(). The
app may continue to prep commands for the batch up to the max reported
by spdk_accel_batch_get_max()
3) The app then submits the batch with spdk_accel_batch_submit()
4) The callback provided for each command in the batch will be called as
that command completes; the callback provided to the batch submit itself
will be called when the entire batch is done (see the usage sketch below)
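
A hedged usage sketch of those four steps. The function names come from
the commit message above; the argument lists and callback signature shown
here are assumptions for illustration and may not match the exact
prototypes in the SPDK headers of this era:

```c
#include <errno.h>
#include <stdint.h>
#include "spdk/accel_engine.h"

/* Completion callbacks; the (cb_arg, status) signature is assumed. */
static void
op_done(void *cb_arg, int status)
{
	/* Step 4: called once per prepped command as that command completes. */
}

static void
batch_done(void *cb_arg, int status)
{
	/* Step 4: called once the entire batch has completed. */
}

/* Build and submit a batch of copies on an existing accel io_channel. */
static int
submit_copy_batch(struct spdk_io_channel *ch, void **dsts, void **srcs,
		  uint64_t nbytes, uint32_t count)
{
	struct spdk_accel_batch *batch;
	uint32_t max = spdk_accel_batch_get_max(ch);    /* respect the per-batch limit */
	uint32_t i;
	int rc;

	if (count > max) {
		count = max;
	}

	batch = spdk_accel_batch_create(ch);            /* step 1: get a batch handle */
	if (batch == NULL) {
		return -ENOMEM;
	}

	for (i = 0; i < count; i++) {                   /* step 2: prep copy commands */
		rc = spdk_accel_batch_prep_copy(ch, batch, dsts[i], srcs[i],
						nbytes, op_done, NULL);
		if (rc != 0) {
			return rc;
		}
	}

	/* step 3: submit the whole batch with a single call */
	return spdk_accel_batch_submit(ch, batch, batch_done, NULL);
}
```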

Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I4102e9291fe59a245cedde6888f42a923b6dbafd
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2248
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
2020-07-10 07:31:09 +00:00
paul luse
0aca4d91e8 lib/idxd: clean up some casting and type issues
Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: If196c51deead9828fd75388f34b5622884c5e2d8
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2204
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Community-CI: Mellanox Build Bot
2020-06-17 07:21:05 +00:00
paul luse
e58e9fbda8 lib/idxd: add low level idxd library
Module, etc., will follow. Notes:

* IDXD is an Intel silicon feature available in future Intel CPUs.
Initial development is being done on a simulator. Once HW is
available and the code is fully tested, the experimental label will be
lifted. Spec can be found here: https://software.intel.com/en-us/download/intel-data-streaming-accelerator-preliminary-architecture-specification

* The current implementation will only work with VFIO.

* DSA has a number of engines that can be grouped based on application
need, such as the type of memory being served or QoS. Engines are processing
units and are assigned to groups. Work queues are on-device structures
that act as a front end to groups for queueing descriptors. Full details on
what is configurable & how will come in later doc patches.

* There is a finite number of work queue slots that are divided amongst
the desired work queues in some fashion (i.e. evenly).

* SW (outside of the idxd lib) is required to manage flow control so as
not to overrun the work queues. This is provided in the accel plug-in
module; the upper layers use the public API to manage this.

* Work queue submissions are done with a 64-byte atomic instruction.

* The design here creates a set of descriptor rings per channel that match
the size of the work queues. Then, an spdk_bit_array is used to make sure
we don't overrun a queue.  If there are no slots available, the operation
is put on a linked list to be retried later from the poller (see the
sketch after this list).

* As we need to support any number of channels (we can't limit ourselves
to the number of work queues), we need to dynamically size/resize our
per-channel descriptor rings based on the current number of channels. This
is done from the upper layers via the public API into the lib.

* As channels are created, the total number of work queue slots is divided
across the channels evenly. The same applies when they are destroyed: the
remaining channels will see their ring sizes increase. This is done from
the upper layers via the public API into the lib.

* The sim has 64 total work queue entries (WQE) that get doled out to the
work queues (WQ) evenly.
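
A sketch of the flow control described in the notes above, assuming
hypothetical channel and op structures; only the spdk_bit_array calls are
real SPDK API, and the ring_size comment reflects the even split of work
queue slots across channels:

```c
#include <errno.h>
#include <stdint.h>
#include <sys/queue.h>
#include "spdk/bit_array.h"

/* Hypothetical structures for illustration only. */
struct op_sketch {
	TAILQ_ENTRY(op_sketch) link;
};

struct chan_sketch {
	struct spdk_bit_array *ring_slots;      /* one bit per descriptor ring slot */
	uint32_t ring_size;                     /* total WQ slots / current channel count */
	TAILQ_HEAD(, op_sketch) ops_to_retry;   /* ops deferred until a slot frees up */
};

/* Try to claim a free ring slot; if the ring is full, park the op on the
 * retry list so the poller can resubmit it later instead of overrunning
 * the work queue. The descriptor placed in the claimed slot would then be
 * written to the WQ portal with a single 64-byte atomic store. */
static int
acquire_slot(struct chan_sketch *chan, struct op_sketch *op, uint32_t *slot)
{
	uint32_t idx = spdk_bit_array_find_first_clear(chan->ring_slots, 0);

	if (idx == UINT32_MAX || idx >= chan->ring_size) {
		TAILQ_INSERT_TAIL(&chan->ops_to_retry, op, link);
		return -EBUSY;
	}

	spdk_bit_array_set(chan->ring_slots, idx);
	*slot = idx;
	return 0;
}
```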

Signed-off-by: paul luse <paul.e.luse@intel.com>
Change-Id: I899bbeda3cef3db05bea4197b8757e89dddb579d
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1809
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
2020-04-23 15:48:32 +00:00