From: Wang, Wei W
Subject: RE: [PATCH] migration: Yield coroutine when receiving MIG_CMD_POSTCOPY_LISTEN
Date: Fri, 29 Mar 2024 08:54:07 +0000
On Friday, March 29, 2024 11:32 AM, Wang, Lei4 wrote:
> When using the post-copy preemption feature to perform post-copy live
> migration, the below scenario could lead to a deadlock and the migration will
> never finish:
>
> - The source connect()s the preemption channel in postcopy_start().
> - The source and destination side TCP stacks finish the 3-way handshake,
> so the connection is successful.
> - The destination side main thread is busy loading the bulk RAM pages, so it
> doesn't get back to the event loop to handle the pending connection event,
> and doesn't post the semaphore postcopy_qemufile_dst_done for the
> preemption thread.
> - The source side sends non-iterative device states, such as the virtio
> states.
> - The destination main thread starts to receive the virtio states; this
> process may lead to a page fault (e.g., virtio_load()->vring_avail_idx()
> may trigger a page fault since the avail ring page may not have been
> received yet).
> - The page request is sent back to the source side, and the source sends the
> page content to the destination side's preemption thread.
> - Since the connection event has not been handled and the semaphore
> postcopy_qemufile_dst_done is not posted, the preemption thread on the
> destination side is blocked and cannot receive the page.
> - The QEMU main load thread on the destination side is stuck at the page
> fault, and cannot yield and handle the connect() event for the
> preemption channel to unblock the preemption thread.
> - The postcopy migration is stuck there forever since this is a deadlock.
>
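To make the circular wait concrete, here is an abstracted sketch in plain
pthreads (not QEMU code; all names are invented stand-ins for the pieces
described above):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t channel_accepted;  /* stands in for postcopy_qemufile_dst_done */
    static sem_t page_delivered;    /* stands in for the faulted page arriving  */

    static void *preempt_thread(void *arg)
    {
        (void)arg;
        sem_wait(&channel_accepted);  /* never posted: the main loop never runs */
        sem_post(&page_delivered);    /* would resolve the main thread's fault  */
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        sem_init(&channel_accepted, 0, 0);
        sem_init(&page_delivered, 0, 0);
        pthread_create(&t, NULL, preempt_thread, NULL);

        /* The main thread "faults" on a not-yet-received page: it blocks here
         * and never returns to the event loop that would accept the preempt
         * channel and post channel_accepted -- a circular wait, i.e. deadlock. */
        sem_wait(&page_delivered);
        sem_post(&channel_accepted);  /* unreachable */
        pthread_join(t, NULL);
        printf("never reached\n");
        return 0;
    }

Compiled with -pthread, it simply hangs: neither thread can make the post the
other one is waiting for, which is the shape of the problem above.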
> The key to reproducing this bug is that the source side sends pages at a
> rate faster than the destination can handle them; otherwise, qemu_get_be64()
> in ram_load_precopy() will have a chance to yield because there is no
> pending data in the buffer to read at that point, which makes the bug much
> harder to reproduce.
>
> Fix this by yielding the load coroutine when receiving
> MIG_CMD_POSTCOPY_LISTEN so the main event loop can handle the
> connection event before the non-iterative device state is loaded, avoiding
> the deadlock condition.
>
> Signed-off-by: Lei Wang <lei4.wang@intel.com>
This seems to be a regression caused by this commit:
737840e2c6ea (migration: Use the number of transferred bytes directly)
Adding qemu_fflush() back to migration_rate_exceeded() or ram_save_iterate()
seems to work (it might not be a good fix, though).
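Just to illustrate the idea (untested, and whether migration_rate_exceeded()
is the right spot is exactly the question), the sketch would be roughly:

    bool migration_rate_exceeded(QEMUFile *f)
    {
        /* Flush buffered RAM pages at each rate-limit check, similar in
         * effect to the behaviour before 737840e2c6ea, so the destination
         * side buffer can drain and ram_load_precopy() gets a chance to
         * yield on an empty buffer. */
        qemu_fflush(f);
        ...
    }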
> ---
> migration/savevm.c | 5 +++++
> 1 file changed, 5 insertions(+)
>
> diff --git a/migration/savevm.c b/migration/savevm.c
> index e386c5267f..8fd4dc92f2 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -2445,6 +2445,11 @@ static int loadvm_process_command(QEMUFile *f)
> return loadvm_postcopy_handle_advise(mis, len);
>
> case MIG_CMD_POSTCOPY_LISTEN:
> + if (migrate_postcopy_preempt() && qemu_in_coroutine()) {
> + aio_co_schedule(qemu_get_current_aio_context(),
> + qemu_coroutine_self());
> + qemu_coroutine_yield();
> + }
The above could be moved to loadvm_postcopy_handle_listen().

Another option is to follow the old way (i.e. pre_7_2) and do
postcopy_preempt_setup() in migrate_fd_connect(). This would save the above
overhead of switching to the main thread during the downtime. It seems Peter's
previous patch already solved the channel disordering issue. Let's see Peter's
and others' opinions.
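For the first option, a rough sketch (same logic as the patch, just relocated;
untested):

    static int loadvm_postcopy_handle_listen(MigrationIncomingState *mis)
    {
        /* Yield once back to the main loop so the pending preempt-channel
         * connect event can be handled before device state loading starts. */
        if (migrate_postcopy_preempt() && qemu_in_coroutine()) {
            aio_co_schedule(qemu_get_current_aio_context(),
                            qemu_coroutine_self());
            qemu_coroutine_yield();
        }
        ...
    }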
> return loadvm_postcopy_handle_listen(mis);
>
> case MIG_CMD_POSTCOPY_RUN:
> --
> 2.39.3