qemu-devel

Re: Out-of-Process Device Emulation session at KVM Forum 2020


From: Alex Williamson
Subject: Re: Out-of-Process Device Emulation session at KVM Forum 2020
Date: Thu, 29 Oct 2020 21:04:07 -0600

On Fri, 30 Oct 2020 09:11:23 +0800
Jason Wang <jasowang@redhat.com> wrote:

> On 2020/10/29 11:46 PM, Alex Williamson wrote:
> > On Thu, 29 Oct 2020 23:09:33 +0800
> > Jason Wang <jasowang@redhat.com> wrote:
> >  
> >> On 2020/10/29 10:31 PM, Alex Williamson wrote:  
> >>> On Thu, 29 Oct 2020 21:02:05 +0800
> >>> Jason Wang <jasowang@redhat.com> wrote:
> >>>     
> >>>> On 2020/10/29 8:08 PM, Stefan Hajnoczi wrote:  
> >>>>> Here are notes from the session:
> >>>>>
> >>>>> protocol stability:
> >>>>>        * vhost-user already exists for existing third-party applications
> >>>>>        * vfio-user is more general but will take more time to develop
> >>>>>        * libvfio-user can be provided to allow device implementations
> >>>>>
> >>>>> management:
> >>>>>        * Should QEMU launch device emulation processes?
> >>>>>            * Nicer user experience
> >>>>>            * Technical blockers: forking, hotplug, security is hard once
> >>>>> QEMU has started running
> >>>>>            * Probably requires a new process model with a long-running
> >>>>> QEMU management process proxying QMP requests to the emulator process
> >>>>>
> >>>>> migration:
> >>>>>        * dbus-vmstate
> >>>>>        * VFIO live migration ioctls
> >>>>>            * Source device can continue if migration fails
> >>>>>            * Opaque blobs are transferred to destination, destination can
> >>>>> fail migration if it decides the blobs are incompatible  
> >>>> I'm not sure this can work:
> >>>>
> >>>> 1) Reading something that is opaque to userspace is probably a hint of
> >>>> bad uAPI design
> >>>> 2) Has qemu ever tried to migrate opaque blobs before? It's probably a
> >>>> bad design of the migration protocol as well.
> >>>>
> >>>> It looks to me like having a migration driver in qemu that can clearly
> >>>> define each byte in the migration stream is a better approach.  
> >>> Any time during the previous two years of development might have been a
> >>> more appropriate time to express your doubts.  
> >>
> >> I did raise that in this series [1], but the main issue is still there.  
> > That series is related to a migration compatibility interface, not the
> > migration data itself.  
> 
> 
> They are not independent. The compatibility interface design depends on 
> the migration data design. I asked about the uAPI issue in that thread but 
> got no response.
> 
> 
> >  
> >> Is it legal to have a uAPI that turns out to be opaque to userspace?
> >> (VFIO seems to be the first.) If it's not, the only choice is to do
> >> that in Qemu.  
> > So you're suggesting that any time the kernel is passing through opaque
> > data that gets interpreted by some entity elsewhere, potentially with
> > proprietary code, we're in legal jeopardy?  VFIO is certainly not
> > the first to do that (storage and network devices come to mind).
> > Devices are essentially opaque data themselves; vfio provides access to
> > (ex.) BARs, but the interpretation of what resides in that BAR is device
> > specific.  Sometimes it's defined in a public datasheet, sometimes not.
> > Suggesting that we can't move opaque data through a uAPI seems rather
> > absurd.  
> 
> 
> No, I think we are talking about different things. What I meant is that the 
> data carried via the uAPI should not be opaque to userspace. What you said 
> here is actually a good example of this. When you expose a BAR to userspace, 
> there should be a driver running in userspace that knows the semantics of 
> the BAR, so it's not opaque to userspace.


But the thing running in userspace might be QEMU, which doesn't know
the semantics of the BAR; it might not be until a driver runs in the
guest that we have something that understands the BAR semantics beyond
opaque data.  We might have nested guests, so the data could be passed
through multiple userspaces as opaque data.  The requirement makes no
sense.
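
For illustration, a rough sketch of what that opacity looks like in
practice: reading a BAR through the vfio region interface as nothing
more than bytes.  This assumes the device fd was already obtained via
VFIO_GROUP_GET_DEVICE_FD and skips error reporting; it's a sketch, not
QEMU's actual vfio code.

#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/types.h>
#include <unistd.h>
#include <linux/vfio.h>

/*
 * Read BAR0 as an opaque byte stream.  Userspace here (QEMU, DPDK, a
 * nested hypervisor, ...) has no idea what the registers mean; only a
 * guest driver or the device itself gives them semantics.
 */
static int read_bar0_opaque(int device_fd, void *buf, size_t len)
{
    struct vfio_region_info info = {
        .argsz = sizeof(info),
        .index = VFIO_PCI_BAR0_REGION_INDEX,
    };

    if (ioctl(device_fd, VFIO_DEVICE_GET_REGION_INFO, &info))
        return -1;

    if (!(info.flags & VFIO_REGION_INFO_FLAG_READ) || len > info.size)
        return -1;

    /* Just bytes at a device-specific offset; no interpretation. */
    return pread(device_fd, buf, len, info.offset) == (ssize_t)len ? 0 : -1;
}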


> >>> Note that we're not talking about vDPA devices here, we're talking
> >>> about arbitrary devices with arbitrary state.  Some degree of migration
> >>> support for assigned devices can be implemented in QEMU, Alex Graf
> >>> proved this several years ago with i40evf.  Years later, we don't have
> >>> any vendors proposing device specific migration code for QEMU.  
> >>
> >> Yes, but it's not necessarily VFIO either.  
> > I don't know what this means.  
> 
> 
> I meant we can't assume VFIO is the only uAPI that will be used by Qemu.

 
And we don't; DPDK, SPDK, and various other userspaces exist.  All can
take advantage of the migration uAPI that we've developed rather than
implementing device specific code in their projects.  I'm not sure how
this strengthens your argument for device specific migration code in
QEMU, which would need to be replicated in every other userspace.  As
opaque data with a well-defined protocol, each userspace can implement
support for this migration protocol once and it should work independently
of the device or vendor.  It only requires support in the code
implementing the device, which is already necessarily device specific.
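
To make the "well-defined protocol, opaque payload" point concrete,
here's a rough sketch of a generic save loop driven against the
migration region.  The struct layout, the _SAVING bit, and the
handshake below follow the v1 vfio_device_migration_info proposal as I
understand it and are assumptions, not the final uAPI; mig_region_off
and the send() callback are hypothetical placeholders for wherever the
region lives and however the bytes reach the destination.

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/*
 * Assumed layout mirroring the v1 vfio_device_migration_info proposal
 * (device_state / pending_bytes / data_offset / data_size).  Field
 * names and the exact handshake may differ from the final uAPI.
 */
struct mig_info {
    uint32_t device_state;
    uint32_t reserved;
    uint64_t pending_bytes;
    uint64_t data_offset;
    uint64_t data_size;
};

#define MIG_STATE_SAVING  (1u << 1)   /* assumed _SAVING bit */

/*
 * Generic stop-and-copy save loop: any userspace (QEMU, DPDK, SPDK, a
 * vfio-user server, ...) can implement this once.  Only the vendor
 * driver knows what the bytes in each chunk mean; userspace just ships
 * them to the destination unmodified.
 */
static int save_device_state(int device_fd, off_t mig_region_off,
                             int (*send)(const void *buf, size_t len))
{
    uint32_t state = MIG_STATE_SAVING;
    struct mig_info info;
    char buf[4096];

    /* Ask the vendor driver to enter the saving state. */
    if (pwrite(device_fd, &state, sizeof(state), mig_region_off +
               offsetof(struct mig_info, device_state)) != sizeof(state))
        return -1;

    for (;;) {
        if (pread(device_fd, &info, sizeof(info), mig_region_off) != sizeof(info))
            return -1;
        if (info.pending_bytes == 0)
            return 0;                       /* vendor driver is done */

        /* Drain one opaque chunk and forward it unmodified. */
        uint64_t remaining = info.data_size, off = info.data_offset;
        while (remaining) {
            size_t chunk = remaining < sizeof(buf) ? remaining : sizeof(buf);
            if (pread(device_fd, buf, chunk, mig_region_off + off) != (ssize_t)chunk)
                return -1;
            if (send(buf, chunk))
                return -1;
            off += chunk;
            remaining -= chunk;
        }
    }
}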


> >>> Clearly we're also trying to account for proprietary devices where even
> >>> for suspend/resume support, proprietary drivers may be required for
> >>> manipulating that internal state.  When we move device emulation
> >>> outside of QEMU, whether in kernel or to other userspace processes,
> >>> does it still make sense to require code in QEMU to support
> >>> interpretation of that device for migration purposes?  
> >>
> >> Well, we could extend Qemu to support proprietary modules (or do we
> >> support that now?). And then it can talk to proprietary drivers via
> >> either VFIO or a vendor specific uAPI.  
> > Yikes, I thought out-of-process devices were exactly the compromise
> > being developed to avoid QEMU supporting proprietary modules and ad-hoc
> > vendor specific uAPIs.  
> 
> 
> We can't even prevent this in the kernel, so I don't see how it would be 
> possible to prevent it in Qemu.


The kernel is a different beast: it already supports loadable modules,
and due to whatever pressures or market demands of the past, it allows
non-GPL use of symbols necessary for some of those modules.  QEMU has
no module support outside of non-mainline forks.  Clearly there is
pressure to support sub-process and proprietary device emulation and
it's our choice how we enable that.  This vfio over socket approach is
the mechanism we're trying to enable to avoid proprietary modules in
QEMU proper.


> > I think you're actually questioning even the
> > premise of developing a standardized API for out-of-process devices
> > here.  Thanks,  
> 
> 
> Actually not, it's just a question in my mind when looking at the VFIO 
> migration compatibility patches. Since vfio-user is being proposed, it's 
> a good time to revisit them.


A migration compatibility interface has not been determined for vfio.
We currently rely on the vendor drivers to provide their own internal
validation and harmlessly reject migration from an incompatible device.
It would be great if we could make progress on this, but it's a
difficult problem, and one that I hope we can further address once we
have a base level of migration support.
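
For completeness, a sketch of what that rejection looks like from the
destination userspace, using the same assumed v1 layout as in the save
sketch above (field names are from the proposal and may differ from
whatever is finally merged): the destination just replays the opaque
blob, and it's the vendor driver, not userspace, that decides whether
the data is compatible and fails the resume if not.

#include <stddef.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/* Same assumed v1 layout as in the save sketch; names may differ. */
struct mig_info {
    uint32_t device_state;
    uint32_t reserved;
    uint64_t pending_bytes;
    uint64_t data_offset;
    uint64_t data_size;
};

/*
 * Replay one received blob while the device is in the (assumed)
 * RESUMING state.  Userspace has no idea what the bytes encode; if the
 * vendor driver decides they came from an incompatible device or
 * version, it rejects the write and we simply fail the migration.
 */
static int resume_device_chunk(int device_fd, off_t mig_region_off,
                               const void *blob, uint64_t len)
{
    struct mig_info info;

    if (pread(device_fd, &info, sizeof(info), mig_region_off) != sizeof(info))
        return -1;

    if (pwrite(device_fd, blob, len,
               mig_region_off + info.data_offset) != (ssize_t)len)
        return -1;              /* vendor driver may reject the data */

    /* Tell the driver how much data to consume and validate. */
    if (pwrite(device_fd, &len, sizeof(len), mig_region_off +
               offsetof(struct mig_info, data_size)) != sizeof(len))
        return -1;

    return 0;
}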

It's great to revisit ideas, but proclaiming a uAPI is bad solely
because the data transfer is opaque, without defining why that's bad,
evaluating the feasibility and implementation of defining a
well-specified data format rather than a protocol (including
cross-vendor support), or proposing any sort of alternative, is not so
helpful imo.

Note that we also migrate guest memory as opaque data; we don't require
knowing the data structures it holds or how regions are used; we simply
look for changes and transfer the new data.  That's not so different
from a vendor driver passing us a blob of data as "information it needs
to replicate the device state at the target."  Thanks,

Alex



