
Re: [Qemu-devel] [PATCH RFC v6 5/7] migration: fix the multifd code when receiving less channels


From: Fei Li
Subject: Re: [Qemu-devel] [PATCH RFC v6 5/7] migration: fix the multifd code when receiving less channels
Date: Wed, 31 Oct 2018 20:26:32 +0800
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:52.0) Gecko/20100101 Thunderbird/52.9.1



On 10/31/2018 06:18 AM, Peter Xu wrote:
On Tue, Oct 30, 2018 at 06:05:18PM +0800, Fei Li wrote:

[...]

@@ -1366,6 +1373,10 @@ bool multifd_recv_new_channel(QIOChannel *ioc)
                          QEMU_THREAD_JOINABLE);
       atomic_inc(&multifd_recv_state->count);
       return multifd_recv_state->count == migrate_multifd_channels();
+fail:
+    qemu_fclose(mis->from_src_file);
+    mis->from_src_file = NULL;
+    return false;
Do we need this?

I'd suggest putting all the cleanups into a single function.  For the dest vm
I'd say it's process_incoming_migration_bh.

Regards,

Not sure whether I understand correctly: if multifd_recv_new_channel()
fails, that means migration_incoming_process() will not be called, so
process_incoming_migration_co() and process_incoming_migration_bh()
will not be called either. In that case, there is no cleanup.
Sorry, the function name I wanted to paste is something like
migration_incoming_state_destroy()...  Anyway, I still don't feel it's
right to close mis->from_src_file in a multifd-specific path.

For now, you can either ignore the cleanup part (AFAIU the TLS failure
will also ignore it when migration_tls_channel_process_incoming fails)
and just print the extra error message,
I will adopt this option, thanks for the suggestion :)

Have a nice day, thanks
Fei
  or you can also look into how
to clean up the dest vm in a better way.  That could be something like
calling migration_incoming_state_destroy() somewhere in
migration_channel_process_incoming() when failure happens, but I'm not
sure.

Regards,
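
As a rough illustration of the suggestion above, the sketch below shows where a
centralized teardown could sit. The body of migration_channel_process_incoming()
is only approximated from the QEMU tree of that period; the Error ** propagation
out of migration_ioc_process_incoming() and the placement of
migration_incoming_state_destroy() are assumptions sketching the idea under
discussion, not an actual patch.

/*
 * Sketch only: an approximation of migration/channel.c from around this
 * time, showing the idea of tearing down the incoming side in one common
 * place when a channel fails, instead of closing mis->from_src_file inside
 * the multifd-specific path.  Note that a failed multifd_recv_new_channel()
 * means migration_incoming_process() never runs, so the usual cleanup in
 * process_incoming_migration_co()/_bh() never happens either.
 */
#include "qemu/osdep.h"
#include "channel.h"
#include "tls.h"
#include "migration.h"
#include "qapi/error.h"
#include "io/channel-tls.h"
#include "trace.h"

void migration_channel_process_incoming(QIOChannel *ioc)
{
    MigrationState *s = migrate_get_current();
    Error *local_err = NULL;

    trace_migration_set_incoming_channel(
        ioc, object_get_typename(OBJECT(ioc)));

    if (s->parameters.tls_creds &&
        *s->parameters.tls_creds &&
        !object_dynamic_cast(OBJECT(ioc),
                             TYPE_QIO_CHANNEL_TLS)) {
        migration_tls_channel_process_incoming(s, ioc, &local_err);
    } else {
        /*
         * Assumes migration_ioc_process_incoming() can propagate a multifd
         * channel failure back via an Error **, which is not the case in
         * the tree at this point.
         */
        migration_ioc_process_incoming(ioc, &local_err);
    }

    if (local_err) {
        error_report_err(local_err);
        /*
         * Hypothetical placement: destroy the incoming state here, in the
         * common channel path, rather than in multifd_recv_new_channel().
         * Whether this is safe to call at this point is exactly what is
         * left open in the thread above.
         */
        migration_incoming_state_destroy();
    }
}

Keeping the teardown in this common path would also cover the TLS setup failure
mentioned above, which at this point only reports the error and returns.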




