
From: Fabiano Rosas
Subject: Re: [PATCH v6 00/23] migration: File based migration with multifd and mapped-ram
Date: Mon, 04 Mar 2024 10:09:25 -0300

Daniel P. Berrangé <berrange@redhat.com> writes:

> On Mon, Mar 04, 2024 at 08:35:36PM +0800, Peter Xu wrote:
>> Fabiano,
>> 
>> On Thu, Feb 29, 2024 at 12:29:54PM -0300, Fabiano Rosas wrote:
>> > => guest: 128 GB RAM - 120 GB dirty - 1 vcpu in tight loop dirtying memory
>> 
>> I'm curious how long the final fdatasync() normally takes for you in
>> this test.

I haven't looked at the fdatasync() in isolation. I'll do some
measurements soon.

>> 
>> I finally got a relatively large system today and gave it a quick shot:
>> a 128G (100G busy dirty) mapped-ram snapshot with 8 multifd channels.  The
>> migration save/load works fine, so I don't think there's anything wrong
>> with the patchset.  However, when the save completes (I need to stop the
>> workload, as my disk isn't fast enough, I guess..), I always hit a very
>> long hang of QEMU in fdatasync() on XFS, during which the main thread is
>> in UNINTERRUPTIBLE state.

> That isn't very surprising. If you don't have O_DIRECT enabled, then
> all that disk I/O from the migration sits in the page cache, and thus the
> fdatasync() is likely to trigger writing out a lot of data.
>
> Blocking the main QEMU thread though is pretty unhelpful. That suggests
> the data sync needs to be moved to a non-main thread.

Perhaps we could move the fsync to the same spot as the multifd thread
sync instead of having one big flush at the end? I'm not sure how that
looks with concurrency in the mix, though.

I'll have to experiment a bit.
