There are two separate abstraction layers:
* vsocket - which represents a unix domain socket
* virtio_net - which represents a vsocket connection
There can be many connections on the same socket. vsocket provides an
API to enable/disable particular virtio features on the fly, but it is
virtio_net that actually uses these features. virtio_net used to rely
on vsocket->features during feature negotiation, breaking the layer
encapsulation (and also causing a deadlock - two locks were being taken
in opposite orders). Now each virtio_net device has its own copy of the
vsocket features, made at the time of virtio_net creation.
vsocket->features still has to be kept, as features can be
enabled/disabled while no virtio_net device has been created yet.
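A rough sketch of the approach - the structure layout and the
vhost_new_device_sketch() helper below are illustrative stand-ins, not
the actual rte_vhost code:

    #include <pthread.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Simplified stand-ins for the real rte_vhost structures. */
    struct vhost_user_socket {
        pthread_mutex_t mutex;
        uint64_t features;  /* still needed: toggled before any device exists */
    };

    struct virtio_net {
        uint64_t features;  /* private snapshot used during negotiation */
    };

    static struct virtio_net *
    vhost_new_device_sketch(struct vhost_user_socket *vsocket)
    {
        struct virtio_net *dev = calloc(1, sizeof(*dev));

        if (dev == NULL)
            return NULL;

        /* Copy the currently enabled features exactly once, at creation
         * time, so negotiation never reaches back into vsocket->features
         * (no layering violation, no extra lock to take later). */
        pthread_mutex_lock(&vsocket->mutex);
        dev->features = vsocket->features;
        pthread_mutex_unlock(&vsocket->mutex);

        return dev;
    }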
Fixes #214
Change-Id: Ic4b2dd8cae6050813fc9a420b2ed30bc5ae60393
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/386294
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
The vhost connection can be closed concurrently from 2 places:
* the connection thread itself
* rte_vhost_driver_unregister()
The connection thread terminates the connection if any recv error
occurred. The unregister function terminates the connection together
with the thread. However, there is no synchronization between the two -
the connection thread runs in the background without holding any mutex.
rte_vhost_driver_unregister() now signals the connection thread to
terminate itself and waits until it has terminated.
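A hedged sketch of the synchronization pattern described above - the
struct and helper names are assumptions, the real socket.c code differs
in details:

    #include <pthread.h>
    #include <stdbool.h>

    struct conn_ctx {
        pthread_t       tid;
        pthread_mutex_t lock;
        bool            should_stop; /* set by unregister, read by the thread */
    };

    static void *conn_thread(void *arg)
    {
        struct conn_ctx *ctx = arg;
        bool stop;

        for (;;) {
            pthread_mutex_lock(&ctx->lock);
            stop = ctx->should_stop;
            pthread_mutex_unlock(&ctx->lock);
            if (stop)
                break;
            /* recv() vhost-user messages here; on a recv error the
             * thread terminates the connection itself and returns. */
        }
        return NULL;
    }

    static void driver_unregister_sketch(struct conn_ctx *ctx)
    {
        /* Instead of tearing the connection down from under the thread,
         * ask it to stop and wait until it has really exited. */
        pthread_mutex_lock(&ctx->lock);
        ctx->should_stop = true;
        pthread_mutex_unlock(&ctx->lock);

        pthread_join(ctx->tid, NULL);
    }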
Change-Id: I012e97ebb8a79edcb2c17c28b2fc7e8041bf92b3
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/383085
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Added new callbacks to notify about socket connection status.
As destroy_device() is used for a virtqueue processing *pause* as well
as for a connection close, the user cannot distinguish between the two.
Consider the following scenario:
  rte_vhost: receives a SET_VRING_BASE message and calls
             destroy_device() as usual
  user:      end-user asks to remove the device (together with the
             socket file); destroy_device() has already been called, so
             the device looks not *in use* and
             rte_vhost_driver_unregister() etc. gets called - that's NOT
             the behavior we want, the device was merely paused
Instead of changing new_device/destroy_device callbacks and breaking
the ABI, a set of new functions new_connection/destroy_connection
has been added.
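A minimal sketch of how such callbacks sit next to the existing ops;
the exact struct layout in the library may differ, only the callback
names follow the commit:

    #include <stdio.h>

    struct vhost_ops_sketch {
        int  (*new_device)(int vid);         /* virtqueues ready to process */
        void (*destroy_device)(int vid);     /* processing paused OR closed */
        int  (*new_connection)(int vid);     /* socket connection opened */
        void (*destroy_connection)(int vid); /* socket connection closed */
    };

    static int on_new_connection(int vid)
    {
        printf("vid %d: connection established\n", vid);
        return 0;
    }

    static void on_destroy_connection(int vid)
    {
        /* Only here can the user be sure the peer is really gone;
         * destroy_device() alone may just mean "paused". */
        printf("vid %d: connection closed\n", vid);
    }

    static const struct vhost_ops_sketch ops = {
        .new_connection     = on_new_connection,
        .destroy_connection = on_destroy_connection,
    };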
Change-Id: I50a8ca4035045892d6c658da7df58c0c97025ec3
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/372074
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Changpeng Liu <changpeng.liu@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
Two locks are taken in two places in opposite orders.
Consider the following scenario, threads A and B:
(A)
* fdset_event_dispatch() start
* pfdentry->busy = 1; (lock #1)
* vhost_user_read_cb() start
* vhost_destroy_device() start
(B)
* rte_vhost_driver_unregister() start
* pthread_mutex_lock(&vsocket->conn_mutex); (lock #2)
* fdset_del()
* endless loop, waiting for pfdentry->busy == 0 (lock #1)
(A)
* vhost_destroy_device() end
* pthread_mutex_lock(&vsocket->conn_mutex); (lock #2)
(mutex already locked - deadlock at this point)
Thread B has locked vsocket->conn_mutex and sits in a while(1) loop,
waiting for the given fd to change its busy flag to 0. Thread A would
have to finish vhost_user_read_cb() in order to set the busy flag back
to 0, but it cannot do that while blocked on the vsocket->conn_mutex
lock.
This patch defers the fdset_del(), so that it's called outside of
vsocket->conn_mutex.
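A self-contained sketch of the deferral; every type and helper below is
a simplified stand-in for the real fd_man.c / socket.c code:

    #include <pthread.h>

    struct conn_stub { int fd; };

    struct vsocket_stub {
        pthread_mutex_t   conn_mutex;
        struct conn_stub *conn;      /* single connection, for brevity */
    };

    /* Stand-in for fdset_del(): the real one spins until the dispatch
     * thread clears pfdentry->busy (that wait is "lock #1" above). */
    static void fdset_del_stub(int fd) { (void)fd; }

    static void unregister_sketch(struct vsocket_stub *vsocket)
    {
        int fd_to_del = -1;

        /* Under conn_mutex ("lock #2"), only detach the connection... */
        pthread_mutex_lock(&vsocket->conn_mutex);
        if (vsocket->conn != NULL) {
            fd_to_del = vsocket->conn->fd;
            vsocket->conn = NULL;
        }
        pthread_mutex_unlock(&vsocket->conn_mutex);

        /* ...and do the blocking fdset_del() after dropping it, so the
         * dispatch thread can take conn_mutex, finish its callback and
         * clear the busy flag - no circular wait remains. */
        if (fd_to_del >= 0)
            fdset_del_stub(fd_to_del);
    }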
Change-Id: Ifb5d4699bdafe96a573444c11ad4eae3adc359f5
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/375910
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This reverts commit 6973898164.
This solution was incomplete, see the next patch which properly
fixes the deadlock issue.
Change-Id: Ib3cc609814276f1c48b05280379b8c2849ad831f
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
Reviewed-on: https://review.gerrithub.io/375909
Tested-by: SPDK Automated Test System <sys_sgsw@intel.com>
Reviewed-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Reviewed-by: Daniel Verkamp <daniel.verkamp@intel.com>
Reviewed-by: Jim Harris <james.r.harris@intel.com>
This fixes spontaneous vhost hangs on SIGINT shutdown.
Apparently, while vhost_destroy_device(conn->vid) is being called from
line #284, another QEMU message might arrive, causing a deadlock on
vsocket->conn_mutex (line #286).
Change-Id: I4f1c31a52facffd1eb1e1192591095f00da55031
Signed-off-by: Dariusz Stojaczyk <dariuszx.stojaczyk@intel.com>
This patch adds a library, application and test scripts for extending
SPDK to present virtio-scsi controllers to QEMU-based VMs and
process I/O submitted to devices attached to those controllers.
This functionality is dependent on QEMU patches to enable
vhost-scsi in userspace - those patches are currently working their
way through the QEMU mailing list, but temporary patches to enable
this functionality in QEMU will be made available shortly through the
SPDK github repository.
Signed-off-by: Jim Harris <james.r.harris@intel.com>
Signed-off-by: Krzysztof Jakimiak <krzysztof.jakimiak@intel.com>
Signed-off-by: Michal Kosciowski <michal.kosciowski@intel.com>
Signed-off-by: Karol Latecki <karolx.latecki@intel.com>
Signed-off-by: Piotr Pelplinski <piotr.pelplinski@intel.com>
Signed-off-by: Daniel Verkamp <daniel.verkamp@intel.com>
Signed-off-by: Pawel Wodkowski <pawelx.wodkowski@intel.com>
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
Change-Id: I138e4021f0ac4b1cd9a6e4041783cdf06e6f0efb