Re: Inter-VM device emulation (call on Mon 20th July 2020)


From: Alex Bennée
Subject: Re: Inter-VM device emulation (call on Mon 20th July 2020)
Date: Mon, 27 Jul 2020 11:30:24 +0100
User-agent: mu4e 1.5.5; emacs 28.0.50

Stefan Hajnoczi <stefanha@redhat.com> writes:

> On Tue, Jul 21, 2020 at 11:49:04AM +0100, Alex Bennée wrote:
>> Stefan Hajnoczi <stefanha@gmail.com> writes:
>> > 2. Alexander Graf's idea for a new Linux driver that provides an
>> > enforcing software IOMMU. This would be a character device driver that
>> > is mmapped by the device emulation process (either vhost-user-style on
>> > the host or another VMM for inter-VM device emulation). The Driver VMM
>> > can program mappings into the device and the page tables in the device
>> > emulation process will be updated. This way the Driver VMM can share
>> > specific regions of guest RAM with the device emulation process and
>> > revoke those mappings later.
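To make that concrete, here is roughly what I imagine the device-emulation
side of such a driver looking like. The "/dev/vm-iommu" node and the details
are made up for illustration; only the mmap-then-revoke flow follows the idea
described above.

  /* Sketch of the device-emulation side of such a driver.  The chardev
   * name and semantics are assumptions; the point is that the process
   * maps a window rather than all of guest RAM, and the Driver VMM
   * controls (and can revoke) what backs that window. */
  #include <fcntl.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/mman.h>
  #include <unistd.h>

  int main(void)
  {
      int fd = open("/dev/vm-iommu", O_RDWR);   /* hypothetical chardev */
      if (fd < 0) {
          perror("open");
          return 1;
      }

      size_t window = 16 * 1024 * 1024;         /* illustrative size */
      uint8_t *guest_mem = mmap(NULL, window, PROT_READ | PROT_WRITE,
                                MAP_SHARED, fd, 0);
      if (guest_mem == MAP_FAILED) {
          perror("mmap");
          return 1;
      }

      /* The Driver VMM, not this process, decides which guest-physical
       * ranges back the window.  Touching a range that was never mapped,
       * or has since been revoked, faults instead of leaking guest RAM. */
      guest_mem[0] = 0xff;   /* only valid while a mapping is installed */

      munmap(guest_mem, window);
      close(fd);
      return 0;
  }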
>> 
>> I'm wondering if there is enough plumbing on the guest side so a guest
>> can use the virtio-iommu to mark out exactly which bits of memory the
>> virtual device can have access to? At a minimum the virtqueues need to
>> be accessible, and for larger transfers maybe a bounce buffer. However,
>> for speed you want as wide a mapping as possible, but no more. It would
>> be nice, for example, if a block device could load data directly into
>> the guest's block cache (zero-copy) but without getting a view of the
>> kernel's internal data structures.
>
> Maybe Jean-Philippe or Eric can answer that?
>
>> Another thing that came up in the call was quite a lot of
>> assumptions about QEMU and Linux w.r.t. virtio. While our project will
>> likely have Linux as a guest OS, we are looking specifically at enabling
>> virtio for Type-1 hypervisors like Xen and the various safety-certified
>> proprietary ones. It is unlikely that QEMU would be used as the VMM for
>> these deployments. We want to work out what sort of common facilities
>> hypervisors need to support to enable virtio so the daemons can be
>> re-usable and maybe set up with a minimal shim for the particular
>> hypervisor in question.
>
> The vhost-user protocol together with the backend program conventions
> define the wire protocol and command-line interface (see
> docs/interop/vhost-user.rst).
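For reference, the framing described there is small; a rough C rendering of
the message header (field names are my own, the layout follows the spec):

  #include <stdint.h>

  /* Rough rendering of the vhost-user framing described in
   * docs/interop/vhost-user.rst: three 32-bit header fields in native
   * byte order, followed by a request-specific payload. */
  struct vhost_user_msg_header {
      uint32_t request;  /* e.g. VHOST_USER_GET_FEATURES, VHOST_USER_SET_MEM_TABLE */
      uint32_t flags;    /* protocol version in the low bits, reply bits above */
      uint32_t size;     /* size of the payload that follows */
  };
  /* The payload is a u64, a vring state/address struct, a memory-region
   * table, etc., depending on the request; file descriptors (shared
   * memory regions, kick/call eventfds) travel as SCM_RIGHTS ancillary
   * data on the same UNIX domain socket. */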
>
> vhost-user is already used by other VMMs today. For example,
> cloud-hypervisor implements vhost-user.

Ohh that's a new one for me. I see it is a KVM-only project but it's
nice to see another VMM using the common rust-vmm backend. There is
interest in using rust-vmm to implement VMMs for type-1 hypervisors, but
we need to work out if there are too many type-2 concepts baked into
the lower-level rust crates.

> I'm sure there is room for improvement, but it seems like an incremental
> step given that vhost-user already tries to cater for this scenario.
>
> Are there any specific gaps you have identified?

Aside from the desire to limit the shared memory footprint between the
backend daemon and a guest, not yet.

I suspect the eventfd mechanism might just end up being simulated by the
VMM as a result of whatever comes from the type-1 interface indicating a
doorbell has been rung. It is, after all, just an FD you consume numbers
over, right?
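Something like this minimal (and purely illustrative) host-side plumbing is
all I have in mind - the shim writes to the fd when the hypervisor reports
the doorbell, and the backend reads the counter back out:

  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <sys/eventfd.h>
  #include <unistd.h>

  int main(void)
  {
      /* An eventfd is just a file descriptor carrying a 64-bit counter. */
      int kick_fd = eventfd(0, 0);
      if (kick_fd < 0) {
          perror("eventfd");
          return 1;
      }

      /* Shim side: the type-1 interface reports a doorbell, which we
       * translate into adding to the counter. */
      uint64_t rung = 1;
      if (write(kick_fd, &rung, sizeof(rung)) != sizeof(rung)) {
          perror("write");
      }

      /* Backend side: consume the counter (blocks until it is non-zero). */
      uint64_t pending;
      if (read(kick_fd, &pending, sizeof(pending)) != sizeof(pending)) {
          perror("read");
      } else {
          printf("notifications consumed: %" PRIu64 "\n", pending);
      }

      close(kick_fd);
      return 0;
  }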

Not all setups will have an equivalent of a Dom0 "master" guest to do
orchestration. Highly embedded setups are likely to have fixed domains
created as the firmware/hypervisor starts up.

>
> Stefan


-- 
Alex Bennée


