
From: Tobin Feldman-Fitzthum
Subject: Re: RFC: Fast Migration for SEV and SEV-ES - blueprint and proof of concept
Date: Fri, 30 Oct 2020 17:10:02 -0400
User-agent: Roundcube Webmail/1.0.1

On 2020-10-30 16:02, Dr. David Alan Gilbert wrote:
> * Tobin Feldman-Fitzthum (tobin@linux.ibm.com) wrote:
>> Hello,
>>
>> Dov Murik, James Bottomley, Hubertus Franke, and I have been working on a plan for fast live migration with SEV and SEV-ES. We just posted an RFC about it to the edk2 list. It includes a proof-of-concept for what we feel to be the most difficult part of fast live migration with SEV-ES.
>>
>> https://edk2.groups.io/g/devel/topic/77875297
>>
>> This was posted to the edk2 list because OVMF is one of the main components of our approach to live migration. With SEV/SEV-ES the hypervisor generally does not have access to guest memory/CPU state. We propose a Migration Handler in OVMF that runs inside the guest and assists the hypervisor with migration. One major challenge to this approach is that for SEV-ES this Migration Handler must be able to set the CPU state of the target to the CPU state of the source while the target is running. Our demo shows that this is feasible.
>>
>> While OVMF is a major component of our approach, QEMU obviously has a huge part to play as well. We want to start thinking about the best way to support fast live migration for SEV and for encrypted VMs in general. A handful of issues are starting to come into focus. For instance, the target VM needs to be started before we begin receiving pages from the source VM.

> That might not be that hard to fudge; we already start the VM in postcopy mode before we have all of RAM. In that case we have to do stuff to protect the RAM because we expect the guest to access it; in this case you possibly don't need to.

I'll need to think a bit about this. The basic setup is that we want the VM to boot into OVMF and initialize the Migration Handler. Then QEMU will start receiving encrypted pages and passing them into OVMF via some shared address. The Migration Handler will decrypt the pages and put them into place, overwriting everything around it. The Migration Handler will be a runtime driver, so it should avoid overwriting itself.
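
To make the handoff a bit more concrete, here is a minimal sketch of how I picture the shared-address protocol. Everything in it (the struct layout, MH_PAGE_SIZE, the mh_* names) is invented for illustration; it is not the interface from our demo.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define MH_PAGE_SIZE 4096

/*
 * Hypothetical layout of the shared (unencrypted) buffer QEMU could use
 * to hand an incoming page to the in-guest Migration Handler.  The
 * fields and names here are invented for illustration only.
 */
struct mh_page_request {
    uint64_t gpa;                   /* where the page belongs in the guest */
    uint8_t  data[MH_PAGE_SIZE];    /* page ciphertext from the source MH */
    uint8_t  tag[16];               /* auth tag for the transport cipher */
    volatile uint32_t ready;        /* set by QEMU, cleared by the MH */
};

/* Stub standing in for whatever AEAD the two Migration Handlers agree on
 * over their shared transport key (e.g. AES-GCM); not a real API. */
static int mh_transport_decrypt(const uint8_t *ct, size_t len,
                                const uint8_t *tag, uint8_t *pt)
{
    (void)tag;
    memcpy(pt, ct, len);            /* placeholder: no real crypto here */
    return 0;
}

/* Inside OVMF: poll the shared buffer, decrypt, and copy into place. */
void mh_handle_request(struct mh_page_request *req)
{
    uint8_t plaintext[MH_PAGE_SIZE];

    if (!req->ready) {
        return;
    }

    if (mh_transport_decrypt(req->data, sizeof(req->data),
                             req->tag, plaintext) == 0) {
        /*
         * Assumes the MH has guest memory identity-mapped through a
         * private (encrypted) mapping, so the page lands encrypted with
         * the guest key.  A real handler would also have to skip its own
         * runtime-driver pages rather than overwrite them.
         */
        memcpy((void *)(uintptr_t)req->gpa, plaintext, MH_PAGE_SIZE);
    }

    req->ready = 0;                 /* tell QEMU the slot is free again */
}

On the QEMU side the loop would be the mirror image: fill in gpa/data/tag, set ready, and wait for the handler to clear it before reusing the slot.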

>> We will also need to start an extra vCPU for the Migration Handler to run on. We are currently working on a demo of end-to-end live migration for SEV (non-ES) VMs that should help crystallize these issues. It should be on the list around the end of the year. For now, check out our other post, which has a lot more information, and let me know if you have any thoughts.

> I don't think I understood why you'd want to map the VMSA, or why it would be encrypted in such a way it was useful to you.

We map the VMSA into the guest because it gives us an easy way to securely send the CPU state to the target.

Each time there is a VMExit, the CPU state of the guest is stored in the VMSA. Although the VMSA is encrypted with the guest's key, it usually isn't mapped into the guest. During migration we securely transfer guest memory from source to destination (the Migration Handlers on the source and target share a key which they use to encrypt/decrypt the pages). If we tweak the NPT to add the VMSA to the guest, the VMSA will be sent along with all the other pages.
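
Just to spell out what that tweak amounts to, here is a hand-wavy host-side sketch; none of these types or helpers are real KVM interfaces, they only label the step described above.

#include <stdint.h>

/*
 * Conceptual sketch only: the struct and map_npt_page() are invented to
 * name the idea, they are not KVM interfaces.
 */
struct vcpu_ctx {
    uint64_t vmsa_hpa;   /* host-physical frame holding the encrypted VMSA */
    uint64_t vmsa_gpa;   /* guest-physical address the guest knows to check */
};

/* Pretend NPT insert: after this, the VMSA frame is reachable from the
 * guest, so it gets migrated like any other guest page. */
static int map_npt_page(uint64_t gpa, uint64_t hpa)
{
    (void)gpa;
    (void)hpa;
    return 0;            /* stub */
}

int sev_es_expose_vmsa(struct vcpu_ctx *vcpu)
{
    /*
     * The page stays encrypted with the guest key; mapping it only
     * changes what the guest can address, so the source Migration
     * Handler can now read it and ship it with the rest of memory.
     */
    return map_npt_page(vcpu->vmsa_gpa, vcpu->vmsa_hpa);
}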

There are some details with the timing. We'll need to force a VMExit once we get convergence and re-send the VMSA page to make sure it's up to date. Once the target has all the pages, the Migration Handler can just read the CPU state from some known address.
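
For the timing piece, the source-side ordering I have in mind looks roughly like this; the helper names and VMSA_GPA are stand-ins that just mirror the steps above, not existing QEMU functions.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Illustrative ordering for the tail end of the migration.  All of the
 * helpers and the VMSA_GPA value are invented to mirror the steps in
 * the text; they do not correspond to existing QEMU code.
 */

#define VMSA_GPA 0x80000000ULL   /* made-up "known address" for vCPU 0 */

static void pause_guest_vcpus(void)      { /* stop further dirtying */ }
static void force_vmexit_all_vcpus(void) { /* hardware writes the current
                                              CPU state into each VMSA */ }

static void send_guest_page(uint64_t gpa)
{
    /* In the real flow the source Migration Handler would read the page
     * and re-encrypt it with the shared transport key before sending. */
    printf("re-sending page at gpa 0x%" PRIx64 "\n", gpa);
}

/* Once the dirty-page rate converges: flush the CPU state and send the
 * VMSA page last, so the target Migration Handler reads an up-to-date
 * CPU state from the known address. */
static void finish_sev_es_migration(void)
{
    pause_guest_vcpus();
    force_vmexit_all_vcpus();
    send_guest_page(VMSA_GPA);
}

int main(void)
{
    finish_sev_es_migration();
    return 0;
}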

-Tobin

> Dave


>> -Tobin



