
Re: Using virtio-vhost-user or vhost-pci


From: Nikos Dragazis
Subject: Re: Using virtio-vhost-user or vhost-pci
Date: Tue, 13 Oct 2020 02:14:22 +0300
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0

On 12/10/20 10:22 p.m., Cosmin Chenaru wrote:

Hi,

Could you please tell me if there has been any more work on virtio-vhost-user
or vhost-pci? The last messages I could find were from January 2018, in this
thread [1], and from what I see the latest QEMU code does not include it.

Hi Cosmin,

The thread that you are pointing to is Stefan's initial work on this
subject, but it is not the latest update. Since 2018, a lot has
happened. I have personally put a lot of effort into pushing this
forward, and with the community's help we are trying to get it merged
into QEMU eventually. You can find an overview of the current state
here [1]. Note also that we recently had a discussion on various
ongoing inter-VM device emulation interfaces (have a look at [2]).

In brief, the current step/goal is to get the device spec merged into the
VIRTIO spec (have a look at these [3][4]).

For more details, please just do a simple search on the spdk, dpdk,
qemu-devel and virtio-dev mailing lists. You will find a lot of threads
on this subject. If anything doesn't make sense or is not clear enough,
feel free to ask.

Nikos

[1] https://lists.gnu.org/archive/html/qemu-devel/2020-02/msg03356.html
[2] https://lists.gnu.org/archive/html/qemu-devel/2020-07/msg04934.html
[3] https://lists.oasis-open.org/archives/virtio-dev/202008/msg00083.html
[4] https://lists.oasis-open.org/archives/virtio-dev/202005/msg00132.html


I am currently running multiple VMs, connected to each other by the DPDK
vhost-switch. A VM can start, reboot or shut down, so much of this is dynamic,
and the vhost-switch handles all of it. These VMs are some sort of "endpoints"
(I could not find a better name).

The code which runs on the VM endpoints is somewhat tied to the vhost-switch
code, and if I change something on the VM side that breaks compatibility, I
need to recompile and restart the vhost-switch. The problem is that most of the
time I forget to update the vhost-switch, and I run into other problems.

If I could use a VM as the vhost-switch instead of the DPDK app, then I hope
that in my endpoint code which runs on the VM, I could add functionality to
make it also act as a switch, forwarding packets between the other VMs like the
current DPDK switch does. Doing so would let me catch this mismatch between the
VM endpoint code and the switch code at compile time, since they would be part
of the same app.

This would be a two-phase process: first run the DPDK vhost-switch inside a
guest VM, then move the tx/rx part into my app.

Both QEMU and the DPDK app use "vhost-user". I was happy to see that I can
start QEMU in vhost-user server mode:

    <interface type='vhostuser'>
      <mac address='52:54:00:9c:3a:e3'/>
      <source type='unix' path='/home/cosmin/vsocket.server' mode='server'/>
      <model type='virtio'/>
      <driver queues='2'>
        <host mrg_rxbuf='on'/>
      </driver>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </interface>

This would translate to these Qemu arguments:

-chardev socket,id=charnet1,path=/home/cosmin/vsocket.server,server \
-netdev type=vhost-user,id=hostnet1,chardev=charnet1,queues=2 \
-device virtio-net-pci,mrg_rxbuf=on,mq=on,vectors=6,netdev=hostnet1,id=net1,mac=52:54:00:9c:3a:e3,bus=pci.0,addr=0x4

But at this point QEMU will not boot the VM until a vhost-user client connects
to it. I even tried adding the "nowait" argument, but QEMU still waits. This
will not work in my case, as the endpoint VMs can start and stop at any time,
and I don't even know how many network interfaces the endpoint VMs will have.
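For reference, this is the long-form spelling of the same chardev options on
newer QEMU versions (I am assuming QEMU 4.0 or later here, and this is only to
illustrate the syntax, not a workaround). As far as I can tell, QEMU still has
to wait because it needs the vhost-user backend connected to negotiate features
before it can realize the virtio-net device:

```shell
# Long-form chardev options; wait=off only affects plain socket chardevs.
# A vhost-user netdev still needs a connected backend at startup, since
# feature negotiation happens before the virtio-net device is realized.
-chardev socket,id=charnet1,path=/home/cosmin/vsocket.server,server=on,wait=off \
-netdev vhost-user,id=hostnet1,chardev=charnet1,queues=2 \
-device virtio-net-pci,mrg_rxbuf=on,mq=on,netdev=hostnet1,mac=52:54:00:9c:3a:e3
```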

I then found the virtio-vhost-user transport [2], and was thinking that I
could at least start the packet-switching VM and let the DPDK app inside it
forward the packets. But from what I understand, this creates a single network
interface inside the VM to which the DPDK app can bind. The limitation here is
that if another VM wants to connect to the packet-switching VM, I need to
manually add another virtio-vhost-user-pci device (and a new vhost-user.sock)
before the packet-switching VM starts, so this is not dynamic.
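For the record, the wiki page [2] suggests the packet-switching VM gets one
extra chardev plus one virtio-vhost-user-pci device per vhost-user socket,
something like the fragment below (I am going from the wiki example; option
names may differ between revisions of the out-of-tree branch):

```shell
# Expose a vhost-user "slave" socket to the packet-switching VM as a
# virtio-vhost-user-pci device. One chardev + one device is needed per
# vhost-user socket -- which is exactly the static setup I want to avoid.
-chardev socket,id=chardev0,path=vhost-user.sock,server,nowait \
-device virtio-vhost-user-pci,chardev=chardev0
```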

The second approach for me would be to use vhost-pci [3]. I could not fully
understand how it works, but I think it presents a network interface to the
guest kernel after another VM connects to the first one.

I realize this has turned into a long story and may not make much sense, but
one more thing. The ideal solution for me would be a combination of the
vhost-user socket and the vhost-pci socket. QEMU would start the VM while the
socket waits in the background for vhost-user connections. When a new
connection is established, QEMU would create a hot-pluggable PCI network card
and let either the guest kernel or the DPDK app inside handle the vhost-user
messages.
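Something along these lines could perhaps be scripted from the outside today
with QMP hotplug, although that still needs an external agent watching for new
endpoints, instead of QEMU doing it internally as I describe above. A rough
sketch (the socket paths and IDs here are hypothetical):

```shell
# Hypothetical sketch: when a new endpoint appears, an external agent
# hotplugs a vhost-user NIC into the running switch VM over the QMP socket.
(echo '{"execute":"qmp_capabilities"}'
 echo '{"execute":"chardev-add","arguments":{"id":"charnet2","backend":
   {"type":"socket","data":{"addr":{"type":"unix","data":
   {"path":"/home/cosmin/vsocket2"}},"server":false}}}}'
 echo '{"execute":"netdev_add","arguments":
   {"type":"vhost-user","id":"hostnet2","chardev":"charnet2"}}'
 echo '{"execute":"device_add","arguments":
   {"driver":"virtio-net-pci","netdev":"hostnet2","id":"net2"}}'
) | socat - UNIX-CONNECT:/home/cosmin/qmp.sock
```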

Any feedback will be welcome, and I really appreciate all your work :)

Cosmin.

[1] https://lists.nongnu.org/archive/html/qemu-devel/2018-01/msg04806.html
[2] https://wiki.qemu.org/Features/VirtioVhostUser
[3] https://github.com/wei-w-wang/vhost-pci



