qemu-devel

Re: Inter-VM device emulation (call on Mon 20th July 2020)


From: Jan Kiszka
Subject: Re: Inter-VM device emulation (call on Mon 20th July 2020)
Date: Tue, 21 Jul 2020 21:08:00 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101 Thunderbird/68.10.0

On 21.07.20 12:49, Alex Bennée wrote:

Stefan Hajnoczi <stefanha@gmail.com> writes:

Thank you everyone who joined!

I didn't take notes but two things stood out:

1. The ivshmem v2 and virtio-vhost-user use cases are quite different
so combining them does not seem realistic. ivshmem v2 needs to be as
simple for the hypervisor to implement as possible, even if this
involves some sacrifices (e.g. it is not transparent to the Driver VM
that is accessing the device, and some performance is given up).
virtio-vhost-user is aimed more at general-purpose device emulation,
although support for arbitrary devices (e.g. PCI) would be important
to serve all use cases.

I believe my phone gave up on the last few minutes of the call, so I'll
just say we are interested in being able to implement arbitrary devices
in the inter-VM silos. Devices we are looking at:

   virtio-audio
   virtio-video

these are performance-sensitive devices which provide a HAL abstraction
to a common software core.

   virtio-rpmb

this is a secure device where the backend may need to reside in a secure
virtualised world.

   virtio-scmi

this is a more complex device which allows the guest to make power and
clock demands from the firmware. Needless to say, this starts to become
complex with multiple moving parts.

The flexibility of vhost-user seems to match up quite well with wanting
to have a reasonably portable backend that just needs to be fed signals
and a memory mapping. However, we don't want daemons to automatically
have a full view of the whole of the guest's system memory.
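
To make "fed signals and a memory mapping" concrete, here is a rough
sketch of the core mechanism a vhost-user-style backend relies on today:
it receives a guest memory region as a file descriptor over a UNIX
socket and mmap()s it. This is not the full vhost-user protocol
(VHOST_USER_SET_MEM_TABLE carries more than this), but it shows why the
daemon ends up with a mapping of whatever the frontend decides to share:

/* Sketch of the core mechanism only, not the full vhost-user protocol:
 * the backend receives a guest memory region as a file descriptor over
 * a UNIX socket and mmap()s it.  struct mem_region_msg is a simplified
 * stand-in for a VHOST_USER_SET_MEM_TABLE entry. */
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/uio.h>

struct mem_region_msg {
    uint64_t guest_phys_addr;
    uint64_t size;
    uint64_t mmap_offset;
};

static void *map_guest_region(int sock, struct mem_region_msg *region)
{
    char cmsgbuf[CMSG_SPACE(sizeof(int))];
    struct iovec iov = { .iov_base = region, .iov_len = sizeof(*region) };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cmsgbuf, .msg_controllen = sizeof(cmsgbuf),
    };

    if (recvmsg(sock, &msg, 0) < 0) {
        return NULL;
    }

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    if (!cmsg || cmsg->cmsg_level != SOL_SOCKET ||
        cmsg->cmsg_type != SCM_RIGHTS) {
        return NULL;
    }

    int memfd;
    memcpy(&memfd, CMSG_DATA(cmsg), sizeof(memfd));

    /* From here on the daemon sees everything covered by this region. */
    return mmap(NULL, region->size, PROT_READ | PROT_WRITE, MAP_SHARED,
                memfd, (off_t)region->mmap_offset);
}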

2. Alexander Graf's idea for a new Linux driver that provides an
enforcing software IOMMU. This would be a character device driver that
is mmapped by the device emulation process (either vhost-user-style on
the host or another VMM for inter-VM device emulation). The Driver VMM
can program mappings into the device and the page tables in the device
emulation process will be updated accordingly. This way the Driver VMM
can share specific regions of guest RAM with the device emulation
process and revoke those mappings later.
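
To make the idea a bit more concrete, a purely hypothetical userspace
sketch of how the device emulation process might use such a driver
follows; no such driver exists yet, and the device node and ioctl names
are invented for illustration only:

/* Purely hypothetical sketch -- no such driver exists today.  The device
 * node /dev/vm-dma-window and the VMDMA_MAP/VMDMA_UNMAP ioctls are
 * invented names used to illustrate the flow, not a real ABI. */
#include <fcntl.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>

struct vmdma_map {                /* invented: describes one window slot */
    uint64_t window_offset;       /* offset inside the mmap()ed window */
    uint64_t length;
};

#define VMDMA_MAP   _IOW('D', 0, struct vmdma_map)    /* invented ioctl */
#define VMDMA_UNMAP _IOW('D', 1, struct vmdma_map)    /* invented ioctl */

int main(void)
{
    /* Device emulation process: map a fixed-size window.  Nothing is
     * accessible yet; touching an unpopulated page would fault. */
    int fd = open("/dev/vm-dma-window", O_RDWR);
    void *window = mmap(NULL, 1UL << 30, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);

    /* The Driver VMM, holding its own handle to the same window, issues
     * VMDMA_MAP/VMDMA_UNMAP to populate or revoke parts of it.  The
     * kernel updates this process's page tables, so revocation is
     * enforced rather than merely cooperative. */
    (void)window;
    return 0;
}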

I'm wondering if there is enough plumbing on the guest side so a guest
can use virtio-iommu to mark out exactly which bits of memory the
virtual device can have access to. At a minimum the virtqueues need to
be accessible, and for larger transfers maybe a bounce buffer. However,
for speed you want as wide a mapping as possible but no more. It would
be nice, for example, if a block device could load data directly into
the guest's block cache (zero-copy) but without getting a view of the
kernel's internal data structures.
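
For a Linux guest much of that plumbing already exists, as long as the
driver goes through the DMA API: with the device behind virtio-iommu,
only the explicitly mapped buffers (plus the virtqueues) become visible
to the backend. A minimal sketch using the standard DMA API calls, shown
out of their usual driver context:

/* Guest-side sketch: with the device behind virtio-iommu and the driver
 * going through the regular DMA API, only explicitly mapped buffers
 * (plus the virtqueues) are visible to the device/backend. */
#include <linux/dma-mapping.h>

static int share_one_buffer(struct device *dev, void *buf, size_t len,
                            dma_addr_t *iova)
{
    /* Creates an IOMMU mapping covering exactly [buf, buf + len). */
    *iova = dma_map_single(dev, buf, len, DMA_TO_DEVICE);
    if (dma_mapping_error(dev, *iova))
        return -ENOMEM;
    return 0;
}

static void unshare_one_buffer(struct device *dev, dma_addr_t iova,
                               size_t len)
{
    /* Tears the mapping down again; the backend loses access. */
    dma_unmap_single(dev, iova, len, DMA_TO_DEVICE);
}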

Welcome to a classic optimization triangle:

 - speed -> direct mappings
 - security -> restricted mapping
 - simplicity -> static mapping

Pick two; you can't have all three. Well, you can trade a little more of one at the price of losing some of another, but that's it.

We chose the last two, ending up with probably the simplest but not the fastest solution for type-1 hypervisors like Jailhouse. Specifically for non-Linux use cases such as legacy RTOSes, which often come with limited driver stacks, having not only virtio but also even simpler channels over application-defined shared memory layouts is a requirement.
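
For illustration only (this is not the actual ivshmem v2 protocol), the kind of application-defined layout such an RTOS can cope with can be as simple as a header plus a single-producer/single-consumer ring at a static offset in the shared memory region:

/* Illustration only, not the actual ivshmem v2 protocol: a fixed header
 * plus a single-producer/single-consumer byte ring at a static,
 * application-defined offset in the shared memory region. */
#include <stdatomic.h>
#include <stdint.h>

#define RING_SIZE 4096u   /* fixed at integration time, known to both sides */

struct shm_channel {
    _Atomic uint32_t head;        /* advanced by the producer only */
    _Atomic uint32_t tail;        /* advanced by the consumer only */
    uint8_t data[RING_SIZE];
};

/* Returns 1 if the byte was queued, 0 if the ring is currently full. */
static int shm_channel_put(struct shm_channel *ch, uint8_t byte)
{
    uint32_t head = atomic_load_explicit(&ch->head, memory_order_relaxed);
    uint32_t tail = atomic_load_explicit(&ch->tail, memory_order_acquire);

    if (head - tail == RING_SIZE)
        return 0;                 /* full */

    ch->data[head % RING_SIZE] = byte;
    /* Release so the consumer sees the data before it sees the new head. */
    atomic_store_explicit(&ch->head, head + 1, memory_order_release);
    return 1;
}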


Another thing that came across in the call was that quite a lot of
assumptions are made about QEMU and Linux w.r.t. virtio. While our
project will likely have Linux as a guest OS, we are looking
specifically at enabling virtio for Type-1 hypervisors like Xen and the
various safety-certified proprietary ones. It is unlikely that QEMU
would be used as the VMM for these deployments. We want to work out
what sort of common facilities hypervisors need to support to enable
virtio so the daemons can be reusable and maybe set up with a minimal
shim for the particular hypervisor in question.


I'm with you regarding stacks that can be mapped onto more than just QEMU/Linux, and also ones that do not let certification costs skyrocket because of their mandated implementation complexity.

I'm not sure anymore if there will be only one device model. Maybe we should eventually think about a backend layer that can sit on something like virtio-vhost-user as well as on ivshmem-virtio, allowing the same device backend code to be plumbed into both transports. Why shouldn't what already works well under Linux with the frontend device drivers vs. virtio transports work here as well?
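
Very roughly, and with all names made up, such a backend layer could boil down to an ops table the device model is written against, with one thin binding per transport, mirroring how Linux separates virtio device drivers from virtio transports:

/* Rough sketch, all names made up: the device backend is written against
 * an ops table, and a thin binding implements it once for
 * virtio-vhost-user and once for ivshmem-virtio -- mirroring how Linux
 * separates virtio device drivers from virtio transports. */
#include <stddef.h>
#include <stdint.h>

struct vring_info;                /* descriptor/avail/used ring addresses */

struct backend_transport_ops {
    /* Translate a driver-side address into a local pointer, limited to
     * whatever window the transport was able (or allowed) to map. */
    void *(*map)(void *transport, uint64_t addr, size_t len);
    /* Virtqueue discovery plus kick/notify plumbing. */
    int   (*get_vring)(void *transport, unsigned index,
                       struct vring_info *out);
    void  (*notify_driver)(void *transport, unsigned index);
    int   (*wait_for_kick)(void *transport, unsigned index);
};

/* A device model (virtio-rpmb, virtio-scmi, ...) is written once against
 * this interface and linked with either transport binding. */
struct virtio_backend {
    const struct backend_transport_ops *ops;
    void *transport;
};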

Jan

--
Siemens AG, Corporate Technology, CT RDA IOT SES-DE
Corporate Competence Center Embedded Linux


