qemu-devel



From: Peter Xu
Subject: Re: [PATCH v4 10/25] migration: Add Error** argument to qemu_savevm_state_setup()
Date: Tue, 12 Mar 2024 11:18:58 -0400

On Tue, Mar 12, 2024 at 11:24:39AM -0300, Fabiano Rosas wrote:
> Cédric Le Goater <clg@redhat.com> writes:
> 
> > On 3/12/24 14:34, Cédric Le Goater wrote:
> >> On 3/12/24 13:32, Cédric Le Goater wrote:
> >>> On 3/11/24 20:03, Fabiano Rosas wrote:
> >>>> Cédric Le Goater <clg@redhat.com> writes:
> >>>>
> >>>>> On 3/8/24 15:36, Fabiano Rosas wrote:
> >>>>>> Cédric Le Goater <clg@redhat.com> writes:
> >>>>>>
> >>>>>>> This prepares ground for the changes coming next which add an Error**
> >>>>>>> argument to the .save_setup() handler. Callers of 
> >>>>>>> qemu_savevm_state_setup()
> >>>>>>> now handle the error and fail earlier, setting the migration state from
> >>>>>>> MIGRATION_STATUS_SETUP to MIGRATION_STATUS_FAILED.
> >>>>>>>
> >>>>>>> In qemu_savevm_state(), move the cleanup to preserve the error
> >>>>>>> reported by .save_setup() handlers.
> >>>>>>>
> >>>>>>> Since the previous behavior was to ignore errors at this step of
> >>>>>>> migration, this change should be examined closely to check that
> >>>>>>> cleanups are still correctly done.
> >>>>>>>
> >>>>>>> Signed-off-by: Cédric Le Goater <clg@redhat.com>
> >>>>>>> ---
> >>>>>>>
> >>>>>>>    Changes in v4:
> >>>>>>>    - Merged cleanup change in qemu_savevm_state()
> >>>>>>>    Changes in v3:
> >>>>>>>    - Set migration state to MIGRATION_STATUS_FAILED
> >>>>>>>    - Fixed error handling to be done under lock in 
> >>>>>>> bg_migration_thread()
> >>>>>>>    - Made sure an error is always set in case of failure in
> >>>>>>>      qemu_savevm_state_setup()
> >>>>>>>    migration/savevm.h    |  2 +-
> >>>>>>>    migration/migration.c | 27 ++++++++++++++++++++++++---
> >>>>>>>    migration/savevm.c    | 26 +++++++++++++++-----------
> >>>>>>>    3 files changed, 40 insertions(+), 15 deletions(-)
> >>>>>>>
> >>>>>>> diff --git a/migration/savevm.h b/migration/savevm.h
> >>>>>>> index 74669733dd63a080b765866c703234a5c4939223..9ec96a995c93a42aad621595f0ed58596c532328 100644
> >>>>>>> --- a/migration/savevm.h
> >>>>>>> +++ b/migration/savevm.h
> >>>>>>> @@ -32,7 +32,7 @@
> >>>>>>>    bool qemu_savevm_state_blocked(Error **errp);
> >>>>>>>    void qemu_savevm_non_migratable_list(strList **reasons);
> >>>>>>>    int qemu_savevm_state_prepare(Error **errp);
> >>>>>>> -void qemu_savevm_state_setup(QEMUFile *f);
> >>>>>>> +int qemu_savevm_state_setup(QEMUFile *f, Error **errp);
> >>>>>>>    bool qemu_savevm_state_guest_unplug_pending(void);
> >>>>>>>    int qemu_savevm_state_resume_prepare(MigrationState *s);
> >>>>>>>    void qemu_savevm_state_header(QEMUFile *f);
> >>>>>>> diff --git a/migration/migration.c b/migration/migration.c
> >>>>>>> index a49fcd53ee19df1ce0182bc99d7e064968f0317b..6d1544224e96f5edfe56939a9c8395d88ef29581 100644
> >>>>>>> --- a/migration/migration.c
> >>>>>>> +++ b/migration/migration.c
> >>>>>>> @@ -3408,6 +3408,8 @@ static void *migration_thread(void *opaque)
> >>>>>>>        int64_t setup_start = qemu_clock_get_ms(QEMU_CLOCK_HOST);
> >>>>>>>        MigThrError thr_error;
> >>>>>>>        bool urgent = false;
> >>>>>>> +    Error *local_err = NULL;
> >>>>>>> +    int ret;
> >>>>>>>        thread = migration_threads_add("live_migration", 
> >>>>>>> qemu_get_thread_id());
> >>>>>>> @@ -3451,9 +3453,17 @@ static void *migration_thread(void *opaque)
> >>>>>>>        }
> >>>>>>>        bql_lock();
> >>>>>>> -    qemu_savevm_state_setup(s->to_dst_file);
> >>>>>>> +    ret = qemu_savevm_state_setup(s->to_dst_file, &local_err);
> >>>>>>>        bql_unlock();
> >>>>>>> +    if (ret) {
> >>>>>>> +        migrate_set_error(s, local_err);
> >>>>>>> +        error_free(local_err);
> >>>>>>> +        migrate_set_state(&s->state, MIGRATION_STATUS_SETUP,
> >>>>>>> +                          MIGRATION_STATUS_FAILED);
> >>>>>>> +        goto out;
> >>>>>>> +    }
> >>>>>>> +
> >>>>>>>        qemu_savevm_wait_unplug(s, MIGRATION_STATUS_SETUP,
> >>>>>>>                                   MIGRATION_STATUS_ACTIVE);
> >>>>>>
> >>>>>> This^ should be before the new block it seems:
> >>>>>>
> >>>>>> GOOD:
> >>>>>> migrate_set_state new state setup
> >>>>>> migrate_set_state new state wait-unplug
> >>>>>> migrate_fd_cancel
> >>>>>> migrate_set_state new state cancelling
> >>>>>> migrate_fd_cleanup
> >>>>>> migrate_set_state new state cancelled
> >>>>>> migrate_fd_cancel
> >>>>>> ok 1 /x86_64/failover-virtio-net/migrate/abort/wait-unplug
> >>>>>>
> >>>>>> BAD:
> >>>>>> migrate_set_state new state setup
> >>>>>> migrate_fd_cancel
> >>>>>> migrate_set_state new state cancelling
> >>>>>> migrate_fd_cleanup
> >>>>>> migrate_set_state new state cancelled
> >>>>>> qemu-system-x86_64: ram_save_setup failed: Input/output error
> >>>>>> **
> >>>>>> ERROR:../tests/qtest/virtio-net-failover.c:1203:test_migrate_abort_wait_unplug:
> >>>>>> assertion failed (status == "cancelling"): ("cancelled" == 
> >>>>>> "cancelling")
> >>>>>>
> >>>>>> Otherwise migration_iteration_finish() will schedule the cleanup BH and
> >>>>>> that will run concurrently with migrate_fd_cancel() issued by the test
> >>>>>> and bad things happen.
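[The ordering problem above hinges on migrate_set_state() being conditional: it only applies a transition when the current state still equals the expected old state, so a stale transition is silently dropped. A minimal, self-contained sketch of that behavior — illustrative names only, not QEMU code:]

```c
#include <stdatomic.h>

/* Illustrative sketch (not QEMU code): the real migrate_set_state()
 * uses a cmpxchg so that only a transition whose "old" state matches
 * the current state takes effect; anything stale is a no-op. */
enum mig_state { SETUP, WAIT_UNPLUG, ACTIVE, CANCELLING, CANCELLED, FAILED };

static void set_state(_Atomic int *state, int old_s, int new_s)
{
    int expected = old_s;
    /* no-op when *state != old_s, mirroring QEMU's cmpxchg-based helper */
    atomic_compare_exchange_strong(state, &expected, new_s);
}
```

[Under this model, whichever transition out of SETUP runs first wins — which is why the placement of qemu_savevm_wait_unplug() relative to the new error block changes what state the test observes.]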
> >>>>>
> >>>>> This hack makes things work:
> >>>>>
> >>>>> @@ -3452,6 +3452,9 @@ static void *migration_thread(void *opaq
> >>>>>            qemu_savevm_send_colo_enable(s->to_dst_file);
> >>>>>        }
> >>>>> +    qemu_savevm_wait_unplug(s, MIGRATION_STATUS_SETUP,
> >>>>> +                            MIGRATION_STATUS_SETUP);
> >>>>> +
> >>>>
> >>>> Why move it all the way up here? Has moving the wait_unplug before the
> >>>> 'if (ret)' block not worked for you?
> >>>
> >>> We could be sleeping while holding the BQL. It looked wrong.
> >> 
> >> Sorry, wrong answer. Yes, I can try moving it before the 'if (ret)' block.
> >> I can reproduce easily with an x86 guest running on PPC64.
> >
> > That works just the same.
> >
> > Peter, Fabiano,
> >
> > What would you prefer?
> >
> > 1. move qemu_savevm_wait_unplug() before qemu_savevm_state_setup(),
> >     means one new patch.
> 
> Is there a point to this except "because we can"? Honest question, I
> might have missed the motivation.

My previous point was, it avoids holding the resources (that will be
allocated in setup() routines) while we know we can wait for a long time.

But then I found that the ordering is indeed needed at least if we don't
change migrate_set_state() first - it is the only place we set the status
to START (which I overlooked, sorry)...

IMHO the function is not well designed; the state update of the next stage
should not reside in a function to wait for failover primary devices
conditionally. It's a bit of a mess.
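
[The Error** convention the patch introduces can be sketched outside QEMU as follows. This is a stand-in with simplified error_setg()/error_free() helpers — the real API lives in qapi/error.h and differs in detail; names and behavior here are illustrative only:]

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Minimal stand-in for QEMU's Error object (qapi/error.h). */
typedef struct Error {
    char *msg;
} Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp) {
        Error *err = malloc(sizeof(*err));
        err->msg = malloc(strlen(msg) + 1);
        strcpy(err->msg, msg);
        *errp = err;
    }
}

static void error_free(Error *err)
{
    if (err) {
        free(err->msg);
        free(err);
    }
}

/* A .save_setup()-style handler that fails and reports why via errp. */
static int fake_save_setup(Error **errp)
{
    error_setg(errp, "ram_save_setup failed: Input/output error");
    return -1;
}

/* Caller pattern from the patch: propagate the handler's error,
 * record it, then move the state machine from SETUP to FAILED. */
static const char *run_setup(void)
{
    Error *local_err = NULL;

    if (fake_save_setup(&local_err)) {
        fprintf(stderr, "%s\n", local_err->msg); /* migrate_set_error() in QEMU */
        error_free(local_err);
        return "failed";  /* MIGRATION_STATUS_SETUP -> MIGRATION_STATUS_FAILED */
    }
    return "active";      /* otherwise continue toward MIGRATION_STATUS_ACTIVE */
}
```

[The point of the pattern is that the callee owns the error message and the caller owns the state transition, which is exactly the separation Peter argues qemu_savevm_wait_unplug() currently blurs.]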

> 
> Also a couple of points:
> 
> - The current version of this proposal seems it will lose the transition
> from SETUP->ACTIVE no? As in, after qemu_savevm_state_setup, there's
> nothing changing the state to ACTIVE anymore.
> 
> - You also need to change the bg migration path.
> 
> >
> > 2. leave qemu_savevm_wait_unplug() after qemu_savevm_state_setup()
> >     and handle state_setup() errors after waiting. means an update
> >     of this patch.
> 
> I vote for this. This failover feature is a pretty complex one, let's
> not risk changing the behavior for no good reason. Just look at the
> amount of head-banging going on in these threads:
> 
> https://patchwork.ozlabs.org/project/qemu-devel/cover/20181025140631.634922-1-sameeh@daynix.com/
> https://www.mail-archive.com/qemu-devel@nongnu.org/msg609296.html

Do we know who is consuming this feature?

Now VFIO allows a migration to happen without this trick.  I'm wondering
whether all relevant NICs can also support VFIO migrations in the future,
then we can drop this tricky feature for good.

-- 
Peter Xu



