Re: [Qemu-devel] [RFC 0/2] Attempt to implement the standby feature for assigned network devices


From: Daniel P. Berrangé
Subject: Re: [Qemu-devel] [RFC 0/2] Attempt to implement the standby feature for assigned network devices
Date: Thu, 6 Dec 2018 10:01:46 +0000
User-agent: Mutt/1.10.1 (2018-07-13)

On Wed, Dec 05, 2018 at 03:57:14PM -0500, Michael S. Tsirkin wrote:
> On Wed, Dec 05, 2018 at 02:24:32PM -0600, Michael Roth wrote:
> > Quoting Daniel P. Berrangé (2018-12-05 11:18:18)
> > > 
> > > Unless I'm mis-reading the patches, it looks like the VFIO device always
> > > has to be available at the time QEMU is started. There's no way to boot a
> > > guest and then later hotplug a VFIO device to accelerate the existing
> > > virtio-net NIC. Or similarly after migration there might not be any VFIO
> > > device available initially when QEMU is started to accept the incoming
> > > migration. So it might need to run in degraded mode for an extended
> > > period of time until one becomes available for hotplugging. The use of
> > > qdev IDs makes this troublesome, as the qdev ID of the future VFIO
> > > device would need to be decided upfront before it even exists.
> > >
> > > So overall I'm not really a fan of the dynamic hiding/unhiding of
> > > devices. I would much prefer to see some way to expose an explicit
> > > relationship between the devices to the guest.
> > 
> > If we place the burden of determining whether the guest supports STANDBY
> > on the part of users/management, a lot of this complexity goes away. For
> > instance, one possible implementation is to simply fail migration and say
> > "sorry your VFIO device is still there" if the VFIO device is still around
> > at the start of migration (whether due to unplug failure or a
> > user/management forgetting to do it manually beforehand).
> 
> It's a bit different. What happens is that migration just doesn't
> finish. Same as it sometimes doesn't when guest dirties too much memory.
> Upper layers usually handle that in a way similar to what you describe.
> If it's desirable that the reason for migration not finishing is
> reported to the user, we can add that information for sure. Though most
> users likely won't care.

Users absolutely *do* care why migration is not finishing. A migration that
does not finish is a major problem for mgmt apps in many of the use cases
for migration. It is especially important when evacuating VMs from a host
in order to do a software upgrade or replace faulty hardware. As mentioned
previously, mgmt apps will also often serialize migrations to prevent the
network being overutilized, so a migration that runs indefinitely will stall
evacuation of the remaining VMs too. Predictable execution of migration and
clear error reporting/handling are critical features. IMHO this is the key
reason VFIO unplug/plug needs to be done explicitly by the mgmt app, so it
can be in control over when each part of the process takes place.
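
(To illustrate the kind of flow I have in mind - this is only a sketch, not
something taken from this series, and the device IDs, host address and
migration URI are made up - a mgmt app driving things explicitly over QMP
might do roughly:

  # On the source host, before starting migration: unplug the VFIO device
  { "execute": "device_del", "arguments": { "id": "hostdev0" } }
  # ...wait for the DEVICE_DELETED event showing the guest released it...

  # Then start migration, with the guest running on virtio-net alone
  { "execute": "migrate", "arguments": { "uri": "tcp:dst-host:4444" } }
  { "execute": "query-migrate" }

  # On the destination, once migration completes, hotplug an equivalent
  # VFIO device to restore the accelerated datapath
  { "execute": "device_add",
    "arguments": { "driver": "vfio-pci", "host": "0000:3b:00.1",
                   "id": "hostdev0" } }

That way each step either succeeds or fails with a clear error the mgmt
app can act upon.)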

> > So how important is it that setting F_STANDBY cap doesn't break older
> > guests? If the idea is to support live migration with VFs then aren't
> > we still dead in the water if the guest boots okay but doesn't have
> > the requisite functionality to be migrated later?
> 
> No, because such a legacy guest will never see the PT device at all. So
> it can migrate.

PCI devices are a precious finite resource. If a guest is not going to use
it, we must never add the VFIO device to QEMU in the first place. Adding a
PCI device that is never activated wastes precious resources, preventing
other guests that need PCI devices from being launched on the host.

Regards,
Daniel
-- 
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|


