qemu-devel

Re: Outline for VHOST_USER_PROTOCOL_F_VDPA


From: Jason Wang
Subject: Re: Outline for VHOST_USER_PROTOCOL_F_VDPA
Date: Mon, 12 Oct 2020 11:52:04 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0


On 2020/9/30 4:07 PM, Michael S. Tsirkin wrote:
On Tue, Sep 29, 2020 at 07:38:24PM +0100, Stefan Hajnoczi wrote:
On Tue, Sep 29, 2020 at 06:04:34AM -0400, Michael S. Tsirkin wrote:
On Tue, Sep 29, 2020 at 09:57:51AM +0100, Stefan Hajnoczi wrote:
On Tue, Sep 29, 2020 at 02:09:55AM -0400, Michael S. Tsirkin wrote:
On Mon, Sep 28, 2020 at 10:25:37AM +0100, Stefan Hajnoczi wrote:
Why extend vhost-user with vDPA?
================================
Reusing VIRTIO emulation code for vhost-user backends
-----------------------------------------------------
It is a common misconception that a vhost device is a VIRTIO device.
VIRTIO devices are defined in the VIRTIO specification and consist of a
configuration space, virtqueues, and a device lifecycle that includes
feature negotiation. A vhost device is a subset of the corresponding
VIRTIO device. The exact subset depends on the device type, and some
vhost devices are closer to the full functionality of their
corresponding VIRTIO device than others. The most well-known example is
that vhost-net devices have rx/tx virtqueues but lack the virtio-net
control virtqueue. Also, the configuration space and device lifecycle
are only partially available to vhost devices.

This difference makes it impossible to use a VIRTIO device as a
vhost-user device and vice versa. There is an impedance mismatch and
missing functionality. That's a shame because existing VIRTIO device
emulation code is mature and duplicating it to provide vhost-user
backends creates additional work.
The biggest issue facing vhost-user, and absent in vDPA, is
backend disconnect handling. This is the reason the control path
is kept under QEMU's control: we do not need any logic to
restore control path data, and we can verify that a new backend
is consistent with the old one.
I don't think using vhost-user with vDPA changes that. The VMM still
needs to emulate a virtio-pci/ccw/mmio device that the guest interfaces
with. If the device backend goes offline it's possible to restore that
state upon reconnection. What have I missed?
The need to maintain the state in a way that is robust
against backend disconnects and can be restored.
QEMU is only bypassed for virtqueue accesses. Everything else still
goes through the virtio-pci emulation in QEMU (VIRTIO configuration
space, status register). vDPA doesn't change this.

Existing vhost-user messages can be kept if they are useful (e.g.
virtqueue state tracking). So I think the situation is no different than
with the existing vhost-user protocol.

Regarding reconnection in general, it currently seems like a partially
solved problem in vhost-user. There is the "Inflight I/O tracking"
mechanism in the spec and some wording about reconnecting the socket,
but in practice I wouldn't expect all device types, VMMs, or device
backends to actually support reconnection. This is an area where a
uniform solution would be very welcome too.
I'm not aware of big issues. What are they?
I think "Inflight I/O tracking" can only be used when request processing
is idempotent? In other words, it can only be used when submitting the
same request multiple times is safe.
Not inherently; it just does not attempt to address this problem.


Inflight tracking only tries to address issues on the guest side,
that is, making sure the same buffer is used exactly once.


As discussed, if we design the virtio ring carefully, there's probably no need for extra metadata for inflight tracking.

And I remember that the current inflight tracking doesn't support packed virtqueues.

Thanks



