Re: [PATCH 2/3] migration: Drop unnecessary check in ram's pending_exact
From: Peter Xu
Subject: Re: [PATCH 2/3] migration: Drop unnecessary check in ram's pending_exact()
Date: Wed, 20 Mar 2024 14:57:17 -0400
On Wed, Mar 20, 2024 at 06:51:26PM +0100, Nina Schoetterl-Glausch wrote:
> On Wed, 2024-01-17 at 15:58 +0800, peterx@redhat.com wrote:
> > From: Peter Xu <peterx@redhat.com>
> >
> > When the migration framework fetches the exact pending sizes, it means
> > this check:
> >
> >     remaining_size < s->threshold_size
> >
> > must have been done already, at migration_iteration_run():
> >
> >     if (must_precopy <= s->threshold_size) {
> >         qemu_savevm_state_pending_exact(&must_precopy, &can_postcopy);
> >
> > That happens after one round of ram_state_pending_estimate(), which
> > makes the second check meaningless, so it can be dropped.
> >
> > To put it another way, when reaching ->state_pending_exact(), we
> > unconditionally sync dirty bitmaps for precopy.
> >
> > Then we can drop migrate_get_current() there too.
> >
> > Signed-off-by: Peter Xu <peterx@redhat.com>
>
> Hi Peter,
Hi, Nina,
>
> could you have a look at this issue:
> https://gitlab.com/qemu-project/qemu/-/issues/1565
>
> which I reopened. Previous thread here:
>
> https://lore.kernel.org/qemu-devel/20230324184129.3119575-1-nsg@linux.ibm.com/
>
> I'm seeing migration failures with s390x TCG again, which look the same to me
> as those a while back.
I'm still quite confused how this patch could cause that.

What you described in the previous bug report seems to imply that some page
was left over during migration, so some page got corrupted after migrating.

However, this patch can only make the sync happen in more cases than
before, even accounting for the condition check it drops (I still think the
check is redundant; there is one outlier when remaining_size ==
threshold_size, but I don't think it should matter here as of now). The
failure would make more sense if the patch made the sync happen less often,
but the opposite is the case.
>
> > ---
> > migration/ram.c | 9 ++++-----
> > 1 file changed, 4 insertions(+), 5 deletions(-)
> >
> > diff --git a/migration/ram.c b/migration/ram.c
> > index c0cdcccb75..d5b7cd5ac2 100644
> > --- a/migration/ram.c
> > +++ b/migration/ram.c
> > @@ -3213,21 +3213,20 @@ static void ram_state_pending_estimate(void *opaque, uint64_t *must_precopy,
> > static void ram_state_pending_exact(void *opaque, uint64_t *must_precopy,
> > uint64_t *can_postcopy)
> > {
> > - MigrationState *s = migrate_get_current();
> > RAMState **temp = opaque;
> > RAMState *rs = *temp;
> > + uint64_t remaining_size;
> >
> > - uint64_t remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> > -
> > - if (!migration_in_postcopy() && remaining_size < s->threshold_size) {
> > + if (!migration_in_postcopy()) {
> > bql_lock();
> > WITH_RCU_READ_LOCK_GUARD() {
> > migration_bitmap_sync_precopy(rs, false);
> > }
> > bql_unlock();
> > - remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> > }
> >
> > + remaining_size = rs->migration_dirty_pages * TARGET_PAGE_SIZE;
> > +
> > if (migrate_postcopy_ram()) {
> > /* We can do postcopy, and all the data is postcopiable */
> > *can_postcopy += remaining_size;
>
> This basically reverts 28ef5339c3 ("migration: fix
> ram_state_pending_exact()"), which originally made the issue disappear.
>
> Any thoughts on the matter appreciated.
In the previous discussion, you mentioned that you bisected to the commit
and also verified the fix. Now you also mention in the bug report that you
can't reproduce this bug manually.

Is it still possible to reproduce it with some scripts? Do you also mean
that it's harder to reproduce compared to before? In any case, some way to
reproduce it would definitely be helpful.

Even if we want to revert this change, we'll need to know whether the
revert actually fixes your case, so we need a way to verify it first. I'd
consider a revert the last resort, though, as I have a feeling this is
papering over something else.
Thanks,
--
Peter Xu