From: Peter Xu
Subject: Re: [PATCH 2/3] migration: Drop unnecessary check in ram's pending_exact()
Date: Wed, 20 Mar 2024 16:45:34 -0400

On Wed, Mar 20, 2024 at 03:46:44PM -0400, Peter Xu wrote:
> On Wed, Mar 20, 2024 at 08:21:30PM +0100, Nina Schoetterl-Glausch wrote:
> > On Wed, 2024-03-20 at 14:57 -0400, Peter Xu wrote:
> > > On Wed, Mar 20, 2024 at 06:51:26PM +0100, Nina Schoetterl-Glausch wrote:
> > > > On Wed, 2024-01-17 at 15:58 +0800, peterx@redhat.com wrote:
> > > > > From: Peter Xu <peterx@redhat.com>
> > > > > 
> > > > > When the migration framework fetches the exact pending sizes, it
> > > > > means this check:
> > > > > 
> > > > >   remaining_size < s->threshold_size
> > > > > 
> > > > > Must have been done already, actually at migration_iteration_run():
> > > > > 
> > > > >     if (must_precopy <= s->threshold_size) {
> > > > >         qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
> > > > > 
> > > > > That should be after one round of ram_state_pending_estimate().  It
> > > > > makes the 2nd check meaningless and can be dropped.
> > > > > 
> > > > > To say it in another way, when reaching ->state_pending_exact(), we
> > > > > unconditionally sync dirty bits for precopy.
> > > > > 
> > > > > Then we can drop migrate_get_current() there too.
> > > > > 
> > > > > Signed-off-by: Peter Xu <peterx@redhat.com>
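
For context, a minimal sketch of the shape this leaves pending_exact() in,
reconstructed from the description above rather than copied from the tree
(locking and the postcopy split are omitted, helper names are used loosely):

    /* Sketch only: reconstructed from the commit message, not the exact tree. */
    static void ram_state_pending_exact(void *opaque, uint64_t *must_precopy,
                                        uint64_t *can_postcopy)
    {
        RAMState *rs = *(RAMState **)opaque;
        uint64_t remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;

        /*
         * The dropped guard was "remaining_size < s->threshold_size".  The
         * caller (migration_iteration_run) only gets here when the estimate
         * is already <= threshold_size, so the guard -- and the
         * migrate_get_current() call it needed -- can go away.
         */
        if (!migration_in_postcopy()) {
            migration_bitmap_sync_precopy(rs);   /* now unconditional (simplified call) */
            remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
        }

        *must_precopy += remaining_size;         /* postcopy split omitted */
    }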
> > > > 
> > > > Hi Peter,
> > > 
> > > Hi, Nina,
> > > 
> > > > 
> > > > could you have a look at this issue:
> > > > https://gitlab.com/qemu-project/qemu/-/issues/1565
> > > > 
> > > > which I reopened. Previous thread here:
> > > > 
> > > > https://lore.kernel.org/qemu-devel/20230324184129.3119575-1-nsg@linux.ibm.com/
> > > > 
> > > > I'm seeing migration failures with s390x TCG again, which look the
> > > > same to me as those a while back.
> > > 
> > > I'm still quite confused how that could be caused by this.
> > > 
> > > What you described in the previous bug report seems to imply some page was
> > > left over in migration, so some page got corrupted after migration.
> > > 
> > > However, what this patch mostly does is make the code sync more than
> > > before, even if I overlooked the condition check there (I still think
> > > the check is redundant; there's one outlier when remaining_size ==
> > > threshold_size, but I don't think it should matter here as of now).
> > > It would make more sense if this patch had made the sync happen less,
> > > but that's not the case; it's the other way around.
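
To make that one outlier concrete, a tiny standalone model (made-up numbers,
not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t threshold_size = 1 << 20;   /* made-up example value */
        uint64_t remaining_size = 1 << 20;   /* the "==" outlier */

        int caller_gate = remaining_size <= threshold_size;  /* iteration_run */
        int old_gate    = remaining_size <  threshold_size;  /* dropped check */

        /* Only in this "==" case do old and new behaviour differ: the caller
         * gate passes but the old inner gate does not, so the patch adds one
         * extra bitmap sync -- it can only sync more, never less. */
        printf("caller gate: %d, old inner gate: %d\n", caller_gate, old_gate);
        return 0;
    }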
> > 
> > [...]
> > 
> > > In the previous discussion, you mentioned that you bisected to the commit
> > > and also verified the fix.  Now you also mentioned in the bz that you
> > > can't reproduce this bug manually.
> > > 
> > > Is it still possible to reproduce with some scripts?  Do you also mean
> > > that it's harder to reproduce compared to before?  In any case, some way
> > > to reproduce it would definitely be helpful.
> > 
> > I tried running the kvm-unit-test a bunch of times in a loop and couldn't
> > trigger a failure. I just tried again on a different system and managed to
> > trigger it just fine, yay. No idea why it wouldn't on the first system though.
> 
> There's probably still a bug somewhere.  If the reproduction rate changed,
> that's also a sign that it might not be directly related to this change, as
> otherwise it should reproduce the same as before.
> 
> > > 
> > > Even if we want to revert this change, we'll need to know whether that
> > > will fix your case, so we need some way to verify it before a revert.
> > > I'll consider that last, though, as I have a feeling this is papering
> > > over something else.
> > 
> > I can check if I can reproduce the issue before & after b0504edd
> > ("migration: Drop unnecessary check in ram's pending_exact()").
> > I can also check if I can reproduce it on x86, that worked last time.
> > Anything else? Ideas on how to pinpoint where the corruption happens?
> 
> I don't have a solid clue yet, but more information about the single case
> where it reproduced could help.
> 
> I saw from the bug link that the cmdline is pretty simple.  However, I'm
> still not sure what could be relevant.  E.g., did you use postcopy
> (including when postcopy-ram is enabled but precopy completed)?  Is there
> any special device, like s390's CMMA (would that simplest cmdline include
> such a device? Apologies, I had zero knowledge there before today.)?
> 
> I _think_ when reading the code I already found something quite unusual,
> but only when postcopy is selected: I notice postcopy will frequently sync
> the dirty bitmap when it doesn't really need to, because
> ram_state_pending_estimate() will report all RAM as "can_postcopy"; it
> means this check will almost always (99.999%) be true, simply because
> must_precopy will in most cases be zero:
> 
>     if (must_precopy <= s->threshold_size) { <------------------------ here
>         qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
>         pending_size = must_precopy + can_postcopy;
>         trace_migrate_pending_exact(pending_size, must_precopy, can_postcopy);
>     }
> 
> I need to think more about this, but it doesn't sound right at all.  There's
> no such issue with precopy-only, and I'm surprised it has been like that for
> years.
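
As a minimal model of why that check degenerates with postcopy-ram enabled
(made-up values, not QEMU code):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint64_t threshold_size = 300ULL << 20;  /* made-up downtime budget */
        uint64_t dirty_ram      = 4ULL << 30;    /* plenty of dirty guest RAM */

        /* ram_state_pending_estimate() with postcopy-ram: everything is
         * reported as postcopiable, so must_precopy stays (near) zero. */
        uint64_t must_precopy = 0;
        uint64_t can_postcopy = dirty_ram;

        /* migration_iteration_run(): with must_precopy ~ 0 this is almost
         * always true, so pending_exact() -- and the dirty bitmap sync it
         * performs -- ends up running on every iteration. */
        if (must_precopy <= threshold_size) {
            printf("pending_exact() would run; pending = %llu bytes\n",
                   (unsigned long long)(must_precopy + can_postcopy));
        }
        return 0;
    }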

It seems this can be a separate new bug, possibly introduced in the same
commit since 8.0.  I will post a patch for this soon.

One more thing to mention: I am aware that Nicholas & Phil also hit some s390
TCG issues, and a fix for that landed just recently; I suspect that could
also be relevant. See:

https://lore.kernel.org/qemu-devel/20240312201458.79532-1-philmd@linaro.org/
03bfc2188f physmem: Fix migration dirty bitmap coherency with TCG memory access

I would suspect this issue reproduced more easily before that fix.  I think
Nicholas also mentioned there could be another bug floating around:

https://lore.kernel.org/qemu-devel/CZSDDVZW4G3L.6CV89ZRMQK9G@wheely/

Let me add them all to this loop.

Thanks,

-- 
Peter Xu



