A workaround for kernel deadlocks surfaced in #1275.
DPDK offers two APIs for hotplugging all PCI devices:
rte_bus_scan() and rte_bus_probe(). Scan iterates through
/sys/bus/pci/devices/* and creates a corresponding rte_pci_device
for each entry, then rte_bus_probe() tries to initialize each
device with a supporting driver.
Previously we did scan and probe together, one after another;
now we'll have an intermediate step. After scanning the bus, we'll
iterate through all rte_pci_device-s and temporarily blacklist any
newly detected devices. We'll use the devargs->data field to store
a timeout value (integer) after which the device can be un-blacklisted
and initialized. devargs->data is documented in DPDK as "Device
string storage" and it's a char*, but it's not referenced anywhere
in DPDK. rte_bus_probe() respects the blacklist and does
absolutely nothing with blacklisted devices.
The timeout value is 2 seconds, which should be plenty of time
for an NVMe device to reset, leave the critical lock sections in
the kernel, and let us initialize it safely.
Note that direct attach by BDF doesn't respect the blacklist,
so an NVMe attach RPC won't be delayed in any way; it will continue
to work as it always did. Only the automatic discovery & enumeration
is deferred.
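A rough sketch of the idea (the helper names below are illustrative, not
the actual SPDK code; only the devargs->data trick mirrors the description
above):

    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>
    #include <rte_devargs.h>

    #define HOTPLUG_DEFER_SEC 2

    /* Stash an un-blacklist deadline in the otherwise unused devargs->data. */
    static void
    defer_device(struct rte_devargs *da)
    {
        da->data = (char *)(uintptr_t)(time(NULL) + HOTPLUG_DEFER_SEC);
    }

    /* Checked before probing: true while the 2-second grace period lasts. */
    static bool
    device_is_deferred(const struct rte_devargs *da)
    {
        uint64_t deadline = (uintptr_t)da->data;

        return deadline != 0 && (uint64_t)time(NULL) < deadline;
    }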
Change-Id: I62b719271bd0755bc2882331ea33f69897b1e5e5
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1733
Community-CI: Mellanox Build Bot
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Extensive testing showed the PCI bus rescan can fail:
> EAL: eal_parse_sysfs_value(): cannot open sysfs value
> /sys/bus/pci/devices/0000:02:00.0/vendor
> EAL: Scan for (pci) bus failed.
spdk_pci_enumerate() would previously return an error because
of this and e.g. the test nvme hotplug app could immediately exit
with failure. A mis-timed scan shouldn't cause this kind of failure,
so ignore its return code. This shouldn't cause any issues.
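Roughly what the enumerate path boils down to now (a sketch of the intent,
not the literal SPDK code):

    #include <rte_bus.h>

    /* A device resetting at the exact moment we rescan can make the sysfs
     * reads fail; treat that as "try again on the next poll" instead of
     * failing the whole enumeration. */
    static void
    rescan_pci_bus(void)
    {
        (void)rte_bus_scan();   /* return code intentionally ignored */
    }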
Change-Id: I9253219c218981a747774a8632335963cfb0db53
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2941
Community-CI: Mellanox Build Bot
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: <dongx.yi@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
There was a chance we scheduled a device removal to the DPDK thread
while that thread was already removing the device from a VFIO hotremove
notification (on the DPDK interrupt thread). The second hotremove
attempt touches some freed memory and segfaults.
The VFIO hotremove notification already checks pending_removal flag
under a mutex and sets it to true, so do the same in spdk_detach_rte()
(called from the SPDK init thread).
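A hedged sketch of the race fix (field and helper names mirror the
description above but are simplified):

    #include <pthread.h>
    #include <stdbool.h>

    static pthread_mutex_t g_pci_mutex = PTHREAD_MUTEX_INITIALIZER;

    struct pci_dev_ctx {
        bool pending_removal;
    };

    /* Called from the SPDK init thread. The VFIO hotremove notification on
     * the DPDK interrupt thread performs the same check-and-set under the
     * mutex, so only one of the two paths ever proceeds with the removal. */
    static void
    detach_rte(struct pci_dev_ctx *dev)
    {
        bool already_pending;

        pthread_mutex_lock(&g_pci_mutex);
        already_pending = dev->pending_removal;
        dev->pending_removal = true;
        pthread_mutex_unlock(&g_pci_mutex);

        if (already_pending) {
            return;     /* the DPDK interrupt thread is already removing it */
        }
        /* ... schedule the actual rte detach here ... */
    }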
Change-Id: Ib3f0eb7c0c5c6e1ab8cf253b7711fd149925a143
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1730
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Michael Haeuptle <michaelhaeuptle@gmail.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Simplify the code path a bit. The VFIO notification is the only
place where the detach callback is called from the dpdk intr thread.
Detach checks the current thread and behaves differently in this
case, but the VFIO notification could simply call
a different function instead.
So instead of carrying the VFIO notification through the generic
detach routine, carry it just through the DPDK-thread specific
subset. This lets us remove some ifs in the generic routine.
Change-Id: I5e8866e4643ef08fb3cd12621e2d262b5e827c74
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1731
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This reverts commit 301c5aeec9.
The patch doesn't fix anything, as the hotremoval could still be
called twice and the second call would result in a use-after-free.
Change-Id: I78a1120707dbdf36c871ec378a312c4a058fc76b
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1729
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Mellanox Build Bot
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Hitting only the static functions from the above libraries
with the spdk_ prefix.
Signed-off-by: Seth Howell <seth.howell@intel.com>
Change-Id: Ic6df38dfbeb53f0b1c30d350921f7216acba3170
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/2362
Community-CI: Mellanox Build Bot
Community-CI: Broadcom CI
Reviewed-by: Paul Luse <paul.e.luse@intel.com>
Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com>
Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
When removing a large number of devices (>8) in parallel,
the 20ms timeout is not long enough.
As part of spdk_detach_cb, DPDK calls into the VFIO driver,
which may get delayed due to multiple hot removes being
processed by the pciehp driver (the pciehp IRQ thread function
handles the actual removal of a device in parallel, but
all of the IRQ thread functions compete for a global mutex,
increasing processing time and the chance of race conditions).
Signed-off-by: Michael Haeuptle <michael.haeuptle@hpe.com>
Change-Id: I470fbbee92dac9677082c873781efe41e2941cd5
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1588
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
We use `spdk_map_bar_rte()` to read mapped addresses
from PCI BARs.
This function currently checks each mapped address for NULL.
But in PCI memory, some registers can be left unused,
in which case they are set to 0.
As a result, we may read some NULL pointers from BARs,
which is OK.
To check whether a given address is indeed invalid, we should
first check whether it is used at all.
So it is best to delegate such checks to the
user of this function.
In fact, users already do the NULL check where it is needed
(e.g. virtio_pci.c:390, nvme_pcie.c:589),
so this patch just removes the checks from `spdk_map_bar_rte()`.
This solves github issue #1206
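For reference, a caller-side check could look like this (using the public
spdk_pci_device_map_bar() wrapper; the exact call sites are the ones
referenced above):

    #include <stdint.h>
    #include "spdk/env.h"

    static int
    map_bar0(struct spdk_pci_device *dev)
    {
        void *addr;
        uint64_t phys_addr, size;

        if (spdk_pci_device_map_bar(dev, 0, &addr, &phys_addr, &size) != 0) {
            return -1;          /* the mapping itself failed */
        }
        if (addr == NULL) {
            /* BAR 0 is unused on this device - the caller decides
             * whether that is actually an error. */
            return -1;
        }
        /* ... access registers through addr ... */
        return 0;
    }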
Change-Id: I88021ceca1b9e9d503b224f790819999cd16da01
Signed-off-by: Vitaliy Mysak <vitaliy.mysak@intel.com>
Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/1129
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
spdk_map_bar_rte() did not return an error when a BAR was not mapped successfully.
Signed-off-by: Lukasz Radomski <lukasz.radomski@intel.com>
Change-Id: I662cc189d47c65af8f135a3ab4b27ff1785233d0
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/477812
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The DPDK interrupt thread is designed so that it can't unregister the
source callback from within that callback handler. So we can't detach
the PCI device in the hotremove callback: the detach needs to unregister
the VFIO notification callback, which will not succeed,
but it can still free the device. At the next req notification
the handler function then touches the freed device.
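A sketch of the fix direction - defer the detach so it no longer runs
inside the callback itself (function names here are illustrative):

    #include <rte_alarm.h>

    static void
    detach_rte_cb(void *arg)
    {
        /* Runs outside the VFIO req/hotremove handler, so unregistering
         * the notification callback and freeing the device can succeed. */
        (void)arg;      /* ... perform the actual rte detach of 'arg' here ... */
    }

    static void
    on_hotremove(void *dev)
    {
        /* Don't detach from within the interrupt callback - schedule the
         * detach on the interrupt thread via a 1 us alarm instead. */
        rte_eal_alarm_set(1, detach_rte_cb, dev);
    }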
Fix #994
Change-Id: Id4b45a2d0fe6b45b132355d59471bc80240fad70
Signed-off-by: Jin Yu <jin.yu@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/473176
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The function allows the user to get a string representation of the
type of a PCI device.
Change-Id: I02abcd9fc98ba912ca4d7936be22e9d5b4950ea2
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/470648
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Community-CI: Broadcom SPDK FC-NVMe CI <spdk-ci.pdl@broadcom.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
spdk_pci_device_claim() could create a file on the
filesystem that couldn't be deleted programmatically.
It could only be overwritten - e.g. by another spdk
instance - but this didn't really work if that
other instance had fewer privileges and hence no
access to the previous file.
This is exactly the case we're seeing on our CI when
running SPDK as non-root. In general it's a good idea
not to leave any leftover files, so now we'll delete
the pci claim file when the spdk process exits.
spdk_pci_device_claim() used to return a file descriptor
that could be simply closed to "un-claim" the device.
It'll now return only a return code. The fd will be
stored inside spdk_pci_device and will be closed either
when user calls the newly introduced spdk_pci_device_unclaim(),
or when the device is detached.
We'll still need to clean up those files somewhere in
our test scripts (probably ./setup.sh cleanup) to
handle crashed processes and the like - but we don't
necessarily want to run such scripts inside the autotest
whenever a non-root spdk is about to be started.
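Roughly what the new unclaim step boils down to (the struct, path and
function names below are illustrative, not the exact SPDK code):

    #include <unistd.h>

    struct pci_claim {
        int fd;             /* now stored inside spdk_pci_device */
        char path[64];      /* e.g. "/var/tmp/pci_lock_<bdf>" - illustrative */
    };

    static void
    pci_device_unclaim(struct pci_claim *claim)
    {
        if (claim->fd < 0) {
            return;
        }
        unlink(claim->path);    /* remove the leftover claim file ... */
        close(claim->fd);       /* ... and drop the lock by closing the fd */
        claim->fd = -1;
    }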
Change-Id: I797e079417bb56491013cc5b92f0f0d14f451d18
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/467107
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
By making dpdk device detach asynchronous we actually
broke some cases where devices are re-attached
immediately afterwards and fail since they were not detached
yet, so now we're making device detach synchronous again.
For that we'll simply wait inside spdk_pci_device_detach()
for the background dpdk thread to perform all necessary
actions before we return. We'll also print an error message
if DPDK fails the detach (probably because of some
internal error).
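In outline (a simplified sketch; the real code handles more corner cases):

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <rte_alarm.h>

    struct detach_ctx {
        void *dev;
        volatile bool done;
        int rc;
    };

    static void
    detach_on_dpdk_thread(void *arg)
    {
        struct detach_ctx *ctx = arg;

        /* ... detach ctx->dev via the rte hotplug API, record the result ... */
        ctx->rc = 0;
        ctx->done = true;
    }

    static int
    pci_device_detach_sync(void *dev)
    {
        struct detach_ctx ctx = { .dev = dev, .done = false, .rc = 0 };

        rte_eal_alarm_set(1, detach_on_dpdk_thread, &ctx);
        while (!ctx.done) {
            usleep(100);        /* wait for the background DPDK thread */
        }
        if (ctx.rc != 0) {
            fprintf(stderr, "Failed to detach PCI device: rc %d\n", ctx.rc);
        }
        return ctx.rc;
    }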
Change-Id: I7657ac1b169169eae3325de2d28c2cc311e7d901
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/460286
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: <jacek.kalwas@intel.com>
By making dpdk device detach asynchronous we actually
broke some cases where devices are re-attached
immediately afterwards and fail since they were not detached
yet.
We'll need to make detach synchronous again, and for that
we'll wait for the background dpdk thread to perform all
necessary actions before we return from spdk_pci_device_detach().
However, device detach could be triggered from the very
same dpdk background thread as well. Waiting there would
cause a deadlock, so now we'll schedule asynchronous
device detach to the dpdk thread only if we're not on
that thread already.
This patch also serves as an optimization by itself.
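A sketch of the thread check (how the DPDK interrupt thread's id is
recorded is simplified here; the names are illustrative):

    #include <pthread.h>
    #include <rte_alarm.h>

    static pthread_t g_dpdk_intr_thread;    /* recorded the first time we run on it */

    static void
    detach_rte_cb(void *dev)
    {
        (void)dev;      /* ... the actual rte detach of 'dev' goes here ... */
    }

    static void
    detach_rte(void *dev)
    {
        if (pthread_equal(pthread_self(), g_dpdk_intr_thread)) {
            /* Already on the DPDK thread - scheduling an alarm and waiting
             * for it here would deadlock, so detach inline. */
            detach_rte_cb(dev);
            return;
        }
        rte_eal_alarm_set(1, detach_rte_cb, dev);
        /* ... wait for the DPDK thread to finish, as described above ... */
    }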
Change-Id: I86b7ac1b669169eee3325de2d28c2cc313e7d901
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/460285
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Added spdk_pci_get_first_device() and
spdk_pci_get_next_device() to iterate
over all devices on the g_pci_devices list.
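Typical usage (spdk_pci_device_get_addr() is assumed from the existing env
API):

    #include <stdio.h>
    #include "spdk/env.h"

    static void
    dump_pci_devices(void)
    {
        struct spdk_pci_device *dev;

        for (dev = spdk_pci_get_first_device(); dev != NULL;
             dev = spdk_pci_get_next_device(dev)) {
            struct spdk_pci_addr addr = spdk_pci_device_get_addr(dev);

            printf("%04x:%02x:%02x.%x\n", addr.domain, addr.bus, addr.dev, addr.func);
        }
    }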
Change-Id: I65079fb3e274195707dee64bc1fb8b4b72d07352
Signed-off-by: Wojciech Malikowski <wojciech.malikowski@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/450924
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Put the locks inside cleanup_pci_devices().
This serves as cleanup.
Change-Id: I040b28006e5584d1f33af26b63cafedbafe04fdb
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/458934
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: John Kariuki <John.K.Kariuki@intel.com>
The global pci tailq is no longer modified on the dpdk
thread, so on the spdk thread we can access it safely
without any lock. The code is slightly more readable
this way.
This shows that cleanup_pci_devices() is always wrapped
with lock/unlock. We'll put the locks inside this
function in the next patch.
Change-Id: Ia4d386b78a87078761df0a3b953bfc4ff44102f8
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/458933
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
To safely access the global pci device list on an spdk
thread, we must not modify this list on any other
thread. When a device gets hotplugged on a dpdk thread,
it will now be inserted into a new global tailq that
can be accessed only under g_pci_mutex. Then any
subsequently called public pci function will add it to
the regular device tailq.
Change-Id: I9cb9d6b24fd731641fd764d0da71bedab38824c9
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/458932
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
To safely access the global pci device list on an spdk
thread, we must not modify this list on any other
thread. When a device gets hotremoved on a dpdk thread,
it will now set a new per-device `removed` flag. Then
any subsequently called public pci function will remove
it from the list.
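A simplified picture of the scheme (struct layout and names are
illustrative):

    #include <pthread.h>
    #include <stdbool.h>
    #include <sys/queue.h>

    struct pci_device {
        TAILQ_ENTRY(pci_device) tailq;
        bool removed;               /* set on the DPDK thread on hotremove */
    };

    static TAILQ_HEAD(, pci_device) g_pci_devices = TAILQ_HEAD_INITIALIZER(g_pci_devices);
    static pthread_mutex_t g_pci_mutex = PTHREAD_MUTEX_INITIALIZER;

    /* Any public PCI API entered on the SPDK thread sweeps the flagged
     * devices out of the list before doing its real work. */
    static void
    cleanup_pci_devices(void)
    {
        struct pci_device *dev, *next;

        pthread_mutex_lock(&g_pci_mutex);
        dev = TAILQ_FIRST(&g_pci_devices);
        while (dev != NULL) {
            next = TAILQ_NEXT(dev, tailq);
            if (dev->removed) {
                TAILQ_REMOVE(&g_pci_devices, dev, tailq);
                /* free the device's resources here */
            }
            dev = next;
        }
        pthread_mutex_unlock(&g_pci_mutex);
    }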
Change-Id: I0f16237617e0bea75b322ab402407780616424c3
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/458931
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
For the VMD driver we'll need to introduce some way of
iterating over all spdk pci device objects, and we would
like to achieve that with simple spdk_pci_get_first_dev()/get_next_dev()
APIs. To make it thread safe, though, we would have to
expose some public pci mutex to be locked around the
iteration, and we don't want to do that, so we'll make
the PCI APIs usable from only a single thread - this will
prevent any pci devices from being removed in between
subsequent get_first/get_next calls.
We currently have the following players accessing pci
device state:
1) public APIs, obviously (on any thread right now)
2) VFIO hotremove callback (dpdk interrupt thread)
3) rte_eal_alarm for detaching rte_pci_devices (dpdk
interrupt thread)
4) DPDK hotplug IPC (dpdk interrupt thread)
There is g_pci_mutex providing the thread safety, but
even today it doesn't protect #3 and #4, making the
entire pci layer prone to data corruption.
To make #3 and #4 safe, we would have to lock inside
device init/fini callbacks (spdk_pci_device_init/fini),
but those are called directly inside the public device
attach/detach functions which already lock.
So now, with the decision to drop thread safety from
public pci APIs, we narrow down the locks inside public
functions and introduce locks inside those lower-level
init/fini callbacks.
Change-Id: I5dcbc9cdcbab65ee76cd3c42890f596069ec9a8a
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/458930
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
While detaching the device, DPDK may try to unregister
a VFIO interrupt callback which is currently "in use".
The unregister call may fail, but the error doesn't get
propagated to upper DPDK layers. In practice, detaching
the device may stop in the middle but still return 0 to
SPDK.
This effectively breaks hotremove, as the device would
be neither usable nor removable.
We work around it in SPDK by internally scheduling the
DPDK device detach on the DPDK interrupt thread. This
prevents any other interrupt callback from being "in use"
while the device is detached.
Since device detach in SPDK can be asynchronous now,
we add a few checks to prevent re-attaching devices
that are still being detached.
Change-Id: Ibb56a8017e34418db0304fe32774811427b056aa
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448928
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This is an attempt to fix device hotremove with VFIO.
A soft device hotremove request through sysfs [1] would
currently just block until the SPDK process manually
releases that device - e.g. upon an RPC request.
VFIO won't get unbound from the device until userspace
releases all its resources. VFIO can signal a pending
hotremove request by kicking any file descriptor provided
by userspace - and DPDK does provide such a descriptor -
but SPDK does not listen on it.
DPDK does offer a handy API to listen on it, and in this patch
we make use of it inside our env/pci layer. Within
a DPDK callback we set an internal per-device hotremove
flag, which upper-layer SPDK drivers can poll with a new
env API - spdk_pci_device_is_removed().
The VFIO hotremove event will be sent to primary
processes only, so that's where we listen.
We make use of this new API in the NVMe hotplug poller,
which will process it just like any other supported
hotremove event.
Fixes #595
Fixes #690
[1] # echo 1 > /sys/bus/pci/devices/<bdf>/remove
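How an upper-layer poller can consume the new API (simplified; the real
consumer is the NVMe hotplug poller):

    #include <stdbool.h>
    #include "spdk/env.h"

    static void
    hotplug_poll_one(struct spdk_pci_device *pci_dev)
    {
        if (spdk_pci_device_is_removed(pci_dev)) {
            /* Handle it like any other hotremove event: fail outstanding
             * I/O, detach the controller and release its resources. */
        }
    }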
Change-Id: I03d88271c2089c740e232056d9340e5a640d442c
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/448927
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
It's mostly needed for the next patch, but even
now it provides some value by printing errors if
there are any leaked (still attached) PCI devices
at shutdown.
Change-Id: I8459a6049b3c6612d9f1d99444bf3acfd474a839
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/449082
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
DPDK 17.11 is the oldest version still supported upstream,
so drop support for DPDK versions older than that in SPDK. This
lets us remove a huge number of ifdefs.
Change-Id: I500987648e388cd5418a25845b6cccf4b55a4e5b
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/spdk/spdk/+/447674
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
The enumerate callback doesn't currently iterate through
any hotplugged devices, as it uses an outdated device list
underneath. What updates that list is a bus rescan, which
happens implicitly on DPDK init or a specific device attach.
This wasn't crucial until we refactored the NVMe bdev hotplug
poller to use enumerate instead of attach, which broke the
hotplug entirely. Unfortunately, the hotplug tests were broken
as well and didn't detect this in time.
We fix the above by rescanning the pci bus before iterating
through its devices inside spdk_pci_enumerate().
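A sketch of the fix - refresh the DPDK device list before walking it (the
real spdk_pci_enumerate() also takes a driver and a callback context):

    #include <rte_bus.h>

    static int
    pci_enumerate(void)
    {
        /* Pick up anything hotplugged since the last implicit scan ... */
        (void)rte_bus_scan();

        /* ... then walk the now up-to-date device list and call the
         * enumeration callback for every device not attached yet. */
        return 0;
    }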
Change-Id: I9643514ff07883eff0f3004b6991ca43ce0b2804
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/c/438243
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Devices behind a VMD aren't visible directly on the PCI
bus. In order to support them, we'll need an additional
VMD driver that's going to enumerate the devices behind
it and hook those into the SPDK PCI layer.
We want those devices to be accessible with the same APIs
that are used to access physical PCI devices.
The physical devices are still created and managed by
DPDK, but additional devices can be now hooked externally.
The hook API slightly departs from how the env layer has
worked so far. Instead of keeping the generic hook functions
internal-only and adding per-driver (NVMe, I/OAT, Virtio)
public functions, this patch makes the generic hook API
public from the start. It accepts the device driver as
a parameter, which needs to be exposed now. That's why
spdk_pci_nvme_get_driver() is introduced. It's only the
NVMe driver that's exposed so far, but other drivers and
their attach APIs should eventually follow the same path.
The previous model really didn't scale well and there's
no need to stretch it further.
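Roughly the usage this enables (a sketch assuming the env.h signatures of
the time; the callback body is omitted and details may differ between SPDK
versions):

    #include <stddef.h>
    #include "spdk/env.h"

    static int
    my_enum_cb(void *ctx, struct spdk_pci_device *dev)
    {
        (void)ctx;
        (void)dev;
        return 0;   /* 0 = accept/claim the device, non-zero = reject it */
    }

    static int
    attach_one_nvme(const char *bdf)
    {
        struct spdk_pci_addr addr;

        if (spdk_pci_addr_parse(&addr, bdf) != 0) {
            return -1;
        }
        /* The driver object is now public and passed in explicitly. */
        return spdk_pci_device_attach(spdk_pci_nvme_get_driver(), my_enum_cb, NULL, &addr);
    }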
Change-Id: Iade018a43b1e23527bd2914be42b403551e73bb6
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/435802
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
In order to populate our PCI device list with devices
located behind the VMD, we'll need to fill out those
device structures from within a special VMD driver. That
driver will be based on PCI configuration and BAR accesses,
but definitely not on DPDK. We want to put the VMD driver
outside of the env lib, so we're about to provide it with
direct access to the device struct. Before we do that,
let's group all the env-internal fields into an extra
struct "internal".
The spdk_pci_device struct does actually depend on DPDK
now as it contains an `rte_pci_device *dev_handle` field,
but we can easily break that dependency. The field is only
used as an argument to DPDK functions, so we can change
its type to void* and let the implicit type conversion do
the magic. After all, the VMD driver will potentially use
it to store its (non-DPDK) data as well.
Change-Id: I425d6dfa7af13e022f5377ceaff39efbd4a01b3d
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/435799
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
DPDK 18.11+ multi-process hotplug isn't robust.
Multiple secondary processes starting at the same
time might cause the internal IPC to misbehave.
Just retry hotplugging/hotremoving the device
in that case.
Change-Id: I1f830c2c0dbe1d63eca9a116101b3d202172b2ca
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/434539
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
With all the error checks and segfault preventions in place,
we can finally enable hotplug in a multi-process scenario
for DPDK 18.11+.
If a device is attached in the primary process, it will send
an attach IPC request to the secondary process which needs
to succeed. Until now it would get rejected, and the attach
would fail in all the processes.
The device in the secondary process will now be probed by DPDK
and will be put into the process local SPDK list of devices
to be locally attached. Either SPDK will attach it sometime
later on any attach/enumerate request, or DPDK will remove
it automatically once the same device in the primary process
gets removed.
We also allow the surprise attach in primary processes, as
it's technically possible for the pci devices (NVMe) to
be attached exclusively from the secondary process. The
fact that the NVMe stack doesn't support it is another story.
Currently the NVMe stack will handle the failure by itself
just fine.
Change-Id: Ia24a8b4610cc7c659f59a2fdda9d8a78e58af873
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/434416
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
DPDK 18.11+ does its best to ensure all devices are
equally attached or detached in all processes within
a shared memory group. For SPDK it means that if
a device is hotplugged in the primary, then DPDK will
automatically send an IPC hotplug request to all other
processes. Those other processes may not have the same
SPDK PCI driver registered and may fail to attach the
device. DPDK will send back the failure status and the
primary process will also fail to hotplug its device.
To prevent that, we need to pre-register the pci
drivers on env init.
We register the drivers just after the EAL init
because we don't want the matching devices to be picked
up by the initial bus probe in DPDK. That's for 2 reasons:
1) we don't want to attach *all* available devices
2) devices attached from non-SPDK context (that is,
outside of the spdk attach or enumerate functions)
will still fail to attach - the entire attaching
process will only take a significant amount of time
and will bloat the log with useless status messages
Change-Id: I7b4c3a2e355f98ea755649f789137f5a727bc935
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/434415
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Although the struct is used as an enumeration context,
it really is a pci driver. The subsequent patch introduces
a few functions around the pci driver, so rename the struct
to make it align nicely with those functions.
Change-Id: I919c30e55d9f42d795ecd8e20e5d29f3918c17a5
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/434414
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Upon detaching a device in a secondary process, DPDK 18.11
will try to detach it from the primary process as well.
SPDK doesn't support such hot-detach and will reject it
in the primary process. That will cause the secondary
process to also reject its detach. The device in the
secondary process will still be there in DPDK, but for
SPDK it will remain inaccessible - neither attach nor
enumerate will work on it.
To fix it, we make our attach and enumerate functions
always check the process local list of devices probed
by DPDK, but not attached in SPDK.
Looking at the patch from a different perspective, it
simply introduces error handling for the DPDK detach
function. If a device failed to detach, we'll now maintain
it locally in SPDK to make it attach-able again.
Change-Id: I8c509a571bea7a9fb413c9c2bfd64c62ad91074b
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/434413
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
It's handy to store the SPDK structs within the device
structure. The subsequent patch will make us use
spdk_pci_addr much more frequently, so it makes sense
to keep it around rather than build it up from rte_pci_addr
every time.
The upcoming VMD driver will also benefit from this patch
by being able to fill the spdk_pci_device struct with any
custom PCI details.
Change-Id: I236a19e28beba9a593b29f23b79b1b0b92ef1fa7
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/434418
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
In DPDK 18.11, a device can potentially be detached not only
upon an SPDK request, but also directly from within DPDK
itself. In a multi-process scenario, when one process detaches
the PCI device, an IPC message - detach request - will be sent
to every other process in the same shared memory group. As we
don't propagate the removal notification to upper layers, the
still-referenced rte_pci_device object will just disappear at
one moment.
SPDK is still not ready to support the above case and will
try to avoid it, but just in case some detach request slips
through, this patch provides sanity checks preventing
SPDK from crashing.
Change-Id: I3e35d8efb33085163b9acd8a565e86a4221df844
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/434412
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Very minor cleanup before we start refactoring the code.
Change-Id: I00d768ec0c84f2a37c54b7575de695281c5ebb22
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/434411
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: Jim Harris <james.r.harris@intel.com>
DPDK already prints at least one error message, so
there's no need to print a yet another one.
Change-Id: I1c7bdfe5ca2095b93ec282bf193a717627d5fa27
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/434410
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Prepare for storing additional per-device data.
The struct doesn't store any interesting data yet,
but already has a TAILQ_ENTRY that allows us to
put it into a global pci device list. Right now
we use the list only to find the SPDK device once
the corresponding DPDK device gets removed, but
more uses will be implemented soon.
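Roughly the shape being introduced (struct and field names here are
illustrative, not the exact ones in pci.c):

    #include <sys/queue.h>

    struct rte_pci_device;                      /* DPDK's device handle */

    struct env_pci_device {
        struct rte_pci_device *rte_dev;         /* the corresponding DPDK device */
        TAILQ_ENTRY(env_pci_device) tailq;      /* linkage for the global list */
    };

    static TAILQ_HEAD(, env_pci_device) g_env_pci_devices =
        TAILQ_HEAD_INITIALIZER(g_env_pci_devices);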
Change-Id: If3abc1da60446e0a647d8d4c642f111ebfbcdb9e
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/434409
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Now that even DPDK 16.11 (LTS) reaches its end of life in
November 2018, we can surely drop support for DPDK
versions older than that.
The PCI code will go through a major refactor soon, so this
patch cleans it up first.
Since this is the very first SPDK patch that drops support
for older DPDK versions, it also introduces an #error
directive that'll directly fail the build if the used DPDK
lib is too old.
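The guard can be as simple as (an illustrative version check):

    #include <rte_version.h>

    #if RTE_VERSION < RTE_VERSION_NUM(16, 11, 0, 0)
    #error "SPDK requires DPDK 16.11 or newer"
    #endif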
Change-Id: I9bae30c98826c75cc91cda498e47e46979a08ed1
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/433865
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Despite the scary commit title, this patch just unifies
per-driver mutexes into a single pci mutex.
On each hotplug we modify some DPDK global resources,
which per-driver locks aren't sufficient for. If
multiple threads try to attach devices at the same time,
then we'll likely have a data race. DPDK hotplug APIs
don't provide any kind of thread safety on their own.
Change-Id: I89cca9fea04ecf576ec5854c662bae1d3712b3fb
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/433864
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
We need to do it only for DPDK 16.11, which leaks the
mappings otherwise. DPDK was fixed in version 17.02 with
the following commit:
e84ad157 (pci: unmap resources if probe fails)
Unmapping the resources twice doesn't actually cause
us any trouble, but prints an ambiguous error message.
Change-Id: I8b62e86d5fff8fe924dbf9ae2e37cff29298d412
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/433863
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The BSD implementation of config access in DPDK seems to
return 0 on success, while the Linux implementation returns 0
only on failure. The env wrapper was always treating 0 as
an error and caused some of our PCI initialization code
to fail prematurely.
At one point DPDK harmonized this BSD behavior with Linux,
but only for config reads.
Fixes #484
Change-Id: I4ea850ea50f5e667fad28e8125209b21c377a2a3
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/432401
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The previous functions were deprecated and are now removed.
Change-Id: I076125aaf80b97c627ca45b860700fdf6d87e925
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/430557
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
This is an NVMe-specific issue and I/OAT or VirtIO devices don't
need it. Additionally, the delay is now asynchronous, meaning
that multiple NVMe controllers can potentially wait all at once.
The drawback of this change is that we're needlessly waiting
even when using uio_pci_generic. However, since the delay does
not block anymore, its impact is greatly reduced.
Change-Id: I5d16a7fd7cb66c785acb687f14690e95f6188b9e
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/429414
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
All PCI device management is done only by the primary process,
so there's no need to delay device initialization in secondary
processes. If a device is being initialized in a secondary
process, then it must already have been initialized by the
primary.
Change-Id: I087da77f981018dabf3feed59c76b294a16ca88d
Signed-off-by: Darek Stojaczyk <dariusz.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/429413
Chandler-Test-Pool: SPDK Automated Test System <sys_sgsw@intel.com>
Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Replace the use of the private rte_pci_bus list with our own internal
list of PCI devices inside SPDK. This fixes linking against the shared
library version of DPDK.
Change-Id: Ia69555e4e7caa1a40974b7969d48773e36ae0fd7
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/405937
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Some places use NULL to check for mmap failure, but mmap returns
MAP_FAILED in that case.
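For reference, the correct failure check for mmap(2):

    #include <stddef.h>
    #include <sys/mman.h>

    static void *
    map_region(int fd, size_t size)
    {
        void *addr = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        /* mmap signals failure with MAP_FAILED ((void *)-1), never with NULL. */
        if (addr == MAP_FAILED) {
            return NULL;    /* or propagate the error */
        }
        return addr;
    }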
Change-Id: I4796fa52421da53c94223a9e8cc26ac04968f1d8
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-on: https://review.gerrithub.io/405648
Reviewed-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
This isn't possible to implement using the current public API of DPDK,
and all of the in-tree users have been removed. Replace the
implementation with a stub that always returns NULL and mark it
deprecated so that any users have a release to update their code.
Change-Id: I4bc71f0a9fd518923484e862333b0c5e86883980
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/405710
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
VFIO requires at least one IOMMU group to be added to the
VFIO container to be able to perform any IOMMU operations
on that container. [1] Without any groups added, VFIO_IOMMU_MAP_DMA
would always respond with errno 22 (Invalid argument).
Also, if the last IOMMU group is removed from the container
(device hotremove), all the IOMMU mappings are lost.
In both cases we need to remap vfio memory as soon as the
first IOMMU group is attached. The attach is done inside
DPDK during device attach and we can't hook into it directly.
Instead, this patch hooks into our PCI init/fini callbacks.
There's now a PCI device ref counter in our vfio manager and
a history of all registered memory pages. When the refcount
is increased from 0 to 1, vtophys will remap all vfio
dma memory.
[1] https://www.kernel.org/doc/Documentation/vfio.txt
"On its own, the container provides little functionality,
with all but a couple version and extension query interfaces
locked away. The user needs to add a group into the container
for the next level of functionality. [...] With a group
(or groups) attached to a container, the remaining ioctls
become available, enabling access to the VFIO IOMMU
interfaces."
Change-Id: I744e07043dbe7ffd433fc95d604dad39647675f4
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/390655
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Require braces around all conditional statements, e.g.:
if (cond)
statement();
becomes:
if (cond) {
statement();
}
This is the style used through most of the SPDK code, but several
exceptions crept in over time. Add the astyle option to make sure we
are consistent.
Change-Id: I5a71980147fe8dfb471ff42e8bc06db2124a1a7f
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/390914
Reviewed-by: <shuhei.matsumoto.xt@hitachi.com>
Reviewed-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I927d659c93787f7ff15cb5aeb2a1c00d3e90e68a
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-on: https://review.gerrithub.io/390514
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
DPDK 17.11-rc3 removes the pci_probe*
and pci_detach functions. It introduces
different ones - rte_eal_dev_attach/detach -
which have a slightly different signature.
Change-Id: Iadde9ff37c64190dad41929997f9ff78379f36e1
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/387656
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
This allows users of this interface to then close the fd
when they want to release the claim.
This prepares for calling spdk_pci_device_claim() in the
nvme driver to cover not just the bdev_nvme driver but all
of our nvme example and test applications as well. We'll
want the fd returned so that we can properly close it during
detach (including hotplug) use cases.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: I8b149cc4e778ba31c0e7045b858c8a1561b6b7af
Reviewed-on: https://review.gerrithub.io/385523
Reviewed-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
New functions for reading/writing any length of data.
Also simplified specific 8/16/32-bit reads/writes.
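Example usage of the new calls (assuming the signatures that ended up in
include/spdk/env.h):

    #include <stdint.h>
    #include "spdk/env.h"

    static int
    read_pci_config(struct spdk_pci_device *dev)
    {
        uint8_t header[64];
        uint16_t vendor_id;

        /* Arbitrary-length read starting at config space offset 0. */
        if (spdk_pci_device_cfg_read(dev, header, sizeof(header), 0) != 0) {
            return -1;
        }
        /* Simplified fixed-width helper. */
        return spdk_pci_device_cfg_read16(dev, &vendor_id, 0);
    }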
Change-Id: I518cdb3ce8d27a25353e80f2e7ca21162b0bd12b
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/379487
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
In some cases (for example, Intel VMD or Microsoft Azure), the PCI
domain may be larger than 16 bits. Extend the domain field of struct
spdk_pci_addr to 32 bits to accommodate this.
Note that equivalent changes must be made in DPDK's struct rte_pci_addr
for larger domains to actually work.
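The resulting layout, roughly as it appears in include/spdk/env.h:

    #include <stdint.h>

    struct spdk_pci_addr {
        uint32_t domain;    /* widened from 16 to 32 bits */
        uint8_t  bus;
        uint8_t  dev;
        uint8_t  func;
    };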
Change-Id: I21c4666a68bc8a4aedfcc82b44042c02734246de
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-on: https://review.gerrithub.io/366520
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Reviewed-by: Ben Walker <benjamin.walker@intel.com>
Reviewed-by: Cunyin Chang <cunyin.chang@intel.com>
The FOREACH_DEVICE_ON_PCIBUS macro has been defined since rc2.
Change-Id: Iad61401520735dfde4e5715c32e74a54a2dff7da
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Since DPDK 17.05, the rte_eal_device_insert API is only used for
virtual device scan and initialization; for PCI devices,
which use Domain:Bus:Dev:Function addressing, this API is no
longer valid.
Change-Id: I1ab63dfc3af188d01836e67cd8db745e035fc450
Signed-off-by: Changpeng Liu <changpeng.liu@intel.com>
Fix up the existing comment blocks misaligned in the first column.
Also add line numbers to the comment checks.
Change-Id: I9d28c365271df36e7013d74cbb02d0023ab4f581
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Fix up all existing spacing errors in comments and add an automated
check for patterns like /*comment*/.
Change-Id: I28f61c93612dc0f8aed66bd509da78e91ea9737e
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
The new format is: domain.bus.device.function
For this format, since we use '.' as the separator,
to avoid misuse we only support the following:
1. domain.bus.device.function (4 values provided)
2. bus.device.function (3 values provided, domain = 0)
3. bus.device (2 values provided, domain = 0, function = 0)
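Assuming the parsing helper is spdk_pci_addr_parse(), the accepted forms
look like:

    #include "spdk/env.h"

    static int
    parse_examples(void)
    {
        struct spdk_pci_addr a1, a2, a3;
        int rc = 0;

        rc |= spdk_pci_addr_parse(&a1, "0000.01.00.0"); /* domain.bus.device.function */
        rc |= spdk_pci_addr_parse(&a2, "01.00.0");      /* bus.device.function, domain = 0 */
        rc |= spdk_pci_addr_parse(&a3, "01.00");        /* bus.device, domain = 0, function = 0 */
        return rc;
    }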
Change-Id: Ide03db38b4ac7802cf36f0e536e8b997101d6cd3
Signed-off-by: Ziye Yang <ziye.yang@intel.com>
Remove the unnecessary rte_eal_pci_probe_one() call in
spdk_pci_device_detach(); it could cause an error message when we
terminate the application, and it also makes no sense to try to probe a
device right after we detach it. When we have a given device address we
can call spdk_pci_nvme_device_attach() instead of spdk_pci_nvme_enumerate();
dpdk will then try to scan the device and add it back to the pci device list.
Change-Id: I35f5bb412249bb20da57394f0531c10a49691906
Signed-off-by: Cunyin Chang <cunyin.chang@intel.com>
This avoids registering PMDs that are not used by a given
application. For example, an app may wish to *not* use
ioat - in this case, the ioat PMD would not be registered with
DPDK, and we would not waste time probing these devices
when probing other devices like NVMe.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Change-Id: If378e40bde9057c7808603aa1918bcfe80fa0e9d
This function will return a device handle from a pci
address.
Change-Id: I323d92c71014ef571f3df9f19c2ec887844707e8
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
If the first call to spdk_nvme_probe probes a device and
the driver elects not to take it, still call the probe
callback for that device on subsequent calls to
spdk_nvme_probe.
Change-Id: If06467cf6796c827a0bbfba6e36d5b91534526fc
Signed-off-by: Ben Walker <benjamin.walker@intel.com>
Change the PCI enumeration API to individual functions per device type
so that only the drivers that are actually in use get linked into the
final executable. All of the common code is still shared internally in
the env_dpdk library.
Change-Id: I2ba83afe59202a510f999a0674e23e60b6581221
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
These functions will attach or detach from a PCI device. Attaching
typically means mapping the BAR.
Change-Id: Iaaf59010b8a0366d32ec80bb90c1c277ada7cfe7
Signed-off-by: Cunyin Chang <cunyin.chang@intel.com>
Now that the env PCI framework already requires enumerating devices
based on an enum of specific device types, it is not useful to query the
class code of a PCI device handle.
It is currently unused and does not work in its current form on FreeBSD
(it reads a file from /sys). This lets us drop a big chunk of file
reading and parsing code.
Change-Id: I1d720398416ba3d6f91e077b807ec11a6de562cf
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>