Re: [RFC v3 00/29] vDPA software assisted live migration
From: Eugenio Perez Martin
Subject: Re: [RFC v3 00/29] vDPA software assisted live migration
Date: Mon, 24 May 2021 12:37:48 +0200
On Mon, May 24, 2021 at 11:38 AM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Wed, May 19, 2021 at 06:28:34PM +0200, Eugenio Pérez wrote:
> > Commit 17 introduces the buffer forwarding. The previous ones are
> > preparations, and the later ones enable some obvious optimizations.
> > However, it needs the vdpa device to be able to map every IOVA
> > space, and some vDPA devices are not able to do so. A check for
> > this is added in earlier commits.
>
> That might become a significant limitation. And it worries me that
> this is such a big patchset which might yet take a while to get
> finalized.
>
Sorry, maybe I've been unclear here: later commits in this series
address this limitation. It is still not perfect: for example, it does
not support adding or removing guest memory at the moment, but that
should be easy to implement on top.
The main issue I'm observing comes from the kernel, if I'm not wrong:
if I unmap every address, I cannot re-map it again. But the code in
this patchset is mostly final, except for the comments that may arise
on the mailing list, of course.
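
To illustrate the map/unmap path in question, here is a minimal sketch
using the vhost-vdpa IOTLB message API (the device path, IOVA, sizes
and uaddr below are placeholders, not values from this series):

/* Sketch: unmap-then-remap of an IOVA range through vhost-vdpa's
 * IOTLB interface. Illustrative only; values are placeholders. */
#include <fcntl.h>
#include <stdint.h>
#include <unistd.h>
#include <linux/vhost.h>
#include <linux/vhost_types.h>

static int iotlb_send(int fd, uint32_t type, uint64_t iova,
                      uint64_t size, uint64_t uaddr)
{
    struct vhost_msg_v2 msg = {
        .type = VHOST_IOTLB_MSG_V2,
        .iotlb = {
            .iova  = iova,
            .size  = size,
            .uaddr = uaddr,
            .perm  = VHOST_ACCESS_RW,
            .type  = type,  /* VHOST_IOTLB_UPDATE or _INVALIDATE */
        },
    };
    /* vhost-vdpa consumes IOTLB messages via write() on the device fd */
    return write(fd, &msg, sizeof(msg)) == sizeof(msg) ? 0 : -1;
}

int main(void)
{
    int fd = open("/dev/vhost-vdpa-0", O_RDWR);  /* placeholder path */
    if (fd < 0)
        return 1;

    uint64_t iova = 0x100000, size = 0x10000;
    uint64_t uaddr = 0;  /* would be the host VA of a pinned buffer */

    iotlb_send(fd, VHOST_IOTLB_UPDATE, iova, size, uaddr);   /* map   */
    iotlb_send(fd, VHOST_IOTLB_INVALIDATE, iova, size, 0);   /* unmap */
    /* The issue described above: a second UPDATE over the same range
     * may fail on some kernels/devices. */
    iotlb_send(fd, VHOST_IOTLB_UPDATE, iova, size, uaddr);   /* remap */

    close(fd);
    return 0;
}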
> I have an idea: how about as a first step we implement a transparent
> switch from vdpa to a software virtio in QEMU or a software vhost in
> kernel?
>
> This will give us live migration quickly with performance comparable
> to failover, but without dependence on guest cooperation.
>
I think it should be doable. I'm not sure about the effort needed in
qemu to hide these "hypervisor-failover devices" from the guest's
view, but it should be comparable to failover, as you say.
Networking should be fine by its nature, although it could require
care in the host hardware setup. But I'm not sure how other types of
vhost/vdpa devices would work that way. How would a disk/scsi device
switch modes? Can the kernel take control of the vdpa device through
vhost and just start reporting with a dirty bitmap?
Thanks!
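
For reference, the dirty-bitmap reporting that in-kernel vhost already
does for migration is driven by the VHOST_F_LOG_ALL feature bit plus
the VHOST_SET_LOG_BASE ioctl. A rough sketch of how a frontend turns
it on (fd and log sizing are illustrative, assuming 4 KiB pages):

/* Sketch: enabling vhost's dirty-page logging, as vhost-net does
 * during live migration. Values are illustrative. */
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

#ifndef VHOST_F_LOG_ALL
#define VHOST_F_LOG_ALL 26
#endif

int enable_dirty_log(int vhost_fd, uint64_t mem_bytes)
{
    uint64_t features;
    if (ioctl(vhost_fd, VHOST_GET_FEATURES, &features) < 0)
        return -1;
    features |= 1ULL << VHOST_F_LOG_ALL;  /* ask the backend to log writes */
    if (ioctl(vhost_fd, VHOST_SET_FEATURES, &features) < 0)
        return -1;

    /* One bit per guest page; the kernel sets bits for pages it dirties. */
    size_t log_size = mem_bytes / 4096 / 8;
    void *log = calloc(1, log_size);
    if (!log)
        return -1;
    uint64_t log_base = (uint64_t)(uintptr_t)log;
    return ioctl(vhost_fd, VHOST_SET_LOG_BASE, &log_base);
}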
> Next step could be driving vdpa from userspace while still copying
> packets to a pre-registered buffer.
>
> Finally your approach will be a performance optimization for devices
> that support arbitrary IOVA.
>
> Thoughts?
>
> --
> MST
>
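A rough sketch of the "copy packets to a pre-registered buffer" step
above: the userspace driver bounces each guest buffer through a region
that was mapped into the device's IOVA space once at setup time, so
the device never needs to map arbitrary guest addresses. All names and
sizes here are hypothetical:

/* Sketch of the bounce-buffer idea; the pool is assumed to have been
 * mapped once at BOUNCE_IOVA via VHOST_IOTLB_UPDATE at setup. */
#include <stdint.h>
#include <string.h>

#define BOUNCE_IOVA  0x1000000ULL   /* hypothetical pre-mapped IOVA */
#define BOUNCE_SIZE  (1 << 20)

struct bounce_pool {
    uint8_t *va;        /* host VA of the pre-registered region */
    uint64_t head;      /* simple bump-allocator offset */
};

/* Copy one guest buffer into the pool; return the IOVA the device sees. */
static uint64_t bounce_tx(struct bounce_pool *bp,
                          const void *guest_buf, size_t len)
{
    if (bp->head + len > BOUNCE_SIZE)
        bp->head = 0;               /* wrap; real code must track in-flight */
    uint64_t off = bp->head;
    memcpy(bp->va + off, guest_buf, len);
    bp->head += len;
    return BOUNCE_IOVA + off;       /* place this IOVA in the descriptor */
}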