qemu-devel

Re: Migration failure when running nested VMs


From: Jintack Lim
Subject: Re: Migration failure when running nested VMs
Date: Mon, 23 Sep 2019 11:32:13 -0700

On Mon, Sep 23, 2019 at 3:42 AM Dr. David Alan Gilbert
<address@hidden> wrote:
>
> * Jintack Lim (address@hidden) wrote:
> > Hi,
>
> Copying in Paolo, since he recently did work to fix nested migration -
> it was expected to be broken until pretty recently; but 4.1.0 qemu on
> 5.3 kernel is pretty new, so I think I'd expected it to work.
>

Thank you, Dave. What Paolo proposed made migration work!

> > I'm seeing live migration failures when a VM is running a nested VM.
> > I'm using the latest Linux kernel (v5.3) and QEMU (v4.1.0). I also tried
> > kernel v5.2, but the result was the same. The kernel version in both the
> > L1 and L2 VMs is v4.18, but I don't think that matters.
> >
> > The symptom is that the L2 VM's kernel crashes in different places after
> > migration, but the call stack is mostly related to memory management,
> > as in [1] and [2]. The kernel crash happens almost every time. While the
> > L2 VM gets a kernel panic, the L1 VM runs fine after the migration. Both
> > the L1 and L2 VMs were idle during migration.
> >
> > I found a few clues about this issue.
> > 1) It happens with a relatively large memory size for L1 (24G), but not
> > with a smaller size (3G).
> >
> > 2) Non-live ("dead") migration worked: when I ran the "stop" command in
> > the QEMU monitor for L1 first and then migrated, the migration always
> > succeeded. It also worked when I stopped only the L2 VM and kept L1
> > live during the migration.
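
[For reference, the stop-then-migrate sequence described above is just the
standard HMP commands on the L1 source monitor; a rough sketch, with the
destination host/port as placeholders:

    (qemu) stop                          # pause the L1 guest (and with it, L2)
    (qemu) migrate tcp:<dest-host>:<port>    # "dead" migration; this always worked

The variant that stops only L2 would be the same "stop" command issued in the
L2 guest's monitor instead, followed by a normal live migration of L1.]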
> >
> > With these two clues, my guess is that some dirty pages written by L2
> > are not being transferred to the destination correctly, but I'm not
> > really sure.
> >
> > 3) It happens on an Intel(R) Xeon(R) Silver 4114 CPU, but not on an
> > Intel(R) Xeon(R) E5-2630 v3.
> >
> > This confuses me, because I thought migrating nested state didn't
> > depend on the underlying hardware. Anyway, L1-only migration with the
> > large memory size (24G) works on both CPUs without any problem.
> >
> > I would appreciate any comments/suggestions to fix this problem.
>
> Can you share the QEMU command lines you're using for both L1 and L2,
> please?

Sure. I use the same QEMU command line for L1 and L2, except for the CPU
and memory allocation.

This is the one for running L1; I use smaller CPU and memory sizes for L2.
./qemu/x86_64-softmmu/qemu-system-x86_64 -smp 6 -m 24G \
  -M q35,accel=kvm -cpu host \
  -drive if=none,file=/vm_nfs/guest0.img,id=vda,cache=none,format=raw \
  -device virtio-blk-pci,drive=vda \
  --nographic \
  -qmp unix:/var/run/qmp,server,wait \
  -serial mon:stdio \
  -netdev user,id=net0,hostfwd=tcp::2222-:22 \
  -device virtio-net-pci,netdev=net0,mac=de:ad:be:ef:f2:12 \
  -netdev tap,id=net1,vhost=on,helper=/srv/vm/qemu/qemu-bridge-helper \
  -device virtio-net-pci,netdev=net1,disable-modern=off,disable-legacy=on,mac=de:ad:be:ef:f2:11 \
  -monitor telnet:127.0.0.1:4444,server,nowait
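
[Since the L1 and L2 command lines differ only in CPU and memory allocation,
the L2 invocation would look like the line above with smaller -smp/-m values;
a hypothetical example (the exact L2 sizes aren't stated in this thread):

    # hypothetical smaller sizes for L2 -- only -smp/-m differ from the L1 line
    ./qemu/x86_64-softmmu/qemu-system-x86_64 -smp 2 -m 4G -M q35,accel=kvm -cpu host ...
]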

> Are there any dmesg entries around the time of the migration on either
> the hosts or the L1 VMs?

No, I didn't see anything special in the L0 or L1 kernel logs.

> What guest OS are you running in L1 and L2?
>

I'm running Linux v4.18 in both L1 and L2.

Thanks,
Jintack

> Dave
>
> > Thanks,
> > Jintack
> >
> >
> > [1]https://paste.ubuntu.com/p/XGDKH45yt4/
> > [2]https://paste.ubuntu.com/p/CpbVTXJCyc/
> >
> --
> Dr. David Alan Gilbert / address@hidden / Manchester, UK
