Re: [PATCH v2] util/async: Add memory barrier to aio_ctx_prepare
From: Stefan Hajnoczi
Subject: Re: [PATCH v2] util/async: Add memory barrier to aio_ctx_prepare
Date: Thu, 2 Apr 2020 10:32:21 +0100
On Thu, Apr 02, 2020 at 10:44:31AM +0800, Ying Fang wrote:
> The QEMU main thread can hang in the main loop when doing an image
> format conversion on the aarch64 platform. The hang is highly
> reproducible by running:
>
> qemu-img convert -f qcow2 -O qcow2 origin.qcow2 converted.qcow2
>
> This hang can be explained by a race condition between the main thread
> and an I/O worker thread. The last worker thread has called
> aio_bh_schedule_oneshot and is checking notify_me to decide whether to
> deliver a notify event. At the same time, the main thread is in
> aio_ctx_prepare, but its write to notify_me is not yet visible when it
> calls qemu_timeout_ns_to_ms, so the worker thread does not see
> notify_me as true and does not send a notify event. The timeline looks
> like this:
>
> Main Thread
> ------------------------------------------------
> aio_ctx_prepare
>     atomic_or(&ctx->notify_me, 1);
>     /* out of order execution goes here */
>     *timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
>
> Worker Thread
> ------------------------------------------------
> aio_bh_schedule_oneshot -> aio_bh_enqueue
>     aio_notify
>         smp_mb();
>         if (ctx->notify_me) { /* worker thread checks notify_me here */
>             event_notifier_set(&ctx->notifier);
>             atomic_mb_set(&ctx->notified, true);
>         }
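To make the suspected interleaving easier to see, here is a minimal
standalone sketch of the pattern (illustrative C11 code with made-up
names, not the QEMU source; a single run will not normally hit the
window, it only shows the shape of the race):

/*
 * prepare() plays the role of aio_ctx_prepare(), schedule() the role of
 * aio_bh_schedule_oneshot()/aio_notify().  If the store to notify_me
 * becomes visible to the worker only after the main thread has already
 * read bh_scheduled, both sides conclude that no wakeup is needed and
 * the main loop can block forever.
 */
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <threads.h>

static atomic_int notify_me;
static atomic_bool bh_scheduled;
static atomic_bool notified;

static int prepare(void *arg)            /* main thread */
{
    (void)arg;
    atomic_fetch_or(&notify_me, 1);      /* "I am about to block" */
    /* Is this load ordered after the RMW above, as seen by the worker? */
    bool pending = atomic_load_explicit(&bh_scheduled, memory_order_relaxed);
    if (!pending && !atomic_load(&notified)) {
        printf("main thread would block with an infinite timeout\n");
    }
    return 0;
}

static int schedule(void *arg)           /* worker thread */
{
    (void)arg;
    atomic_store_explicit(&bh_scheduled, true, memory_order_relaxed);
    atomic_thread_fence(memory_order_seq_cst);   /* worker's smp_mb() */
    if (atomic_load_explicit(&notify_me, memory_order_relaxed)) {
        atomic_store(&notified, true);   /* event_notifier_set() */
    }
    return 0;
}

int main(void)
{
    thrd_t a, b;
    thrd_create(&a, prepare, NULL);
    thrd_create(&b, schedule, NULL);
    thrd_join(a, NULL);
    thrd_join(b, NULL);
    return 0;
}
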
Paolo, I'm not sure how to interpret this case according to
docs/devel/atomics.txt. Maybe you can clarify.
atomic_or() is sequentially consistent, so I expected it to act as a
full barrier. Or does sequential consistency only cover the memory
accessed via the sequentially consistent atomics APIs, so that
everything else (like the accesses made by aio_compute_timeout()) can
still be reordered?
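For concreteness, the pattern I am asking about reduces to something
like this (a hypothetical reduction in plain C11 spelling rather than
QEMU's atomic_*() wrappers; the names are made up):

#include <stdatomic.h>
#include <stdint.h>

atomic_uint notify_me;            /* stands for ctx->notify_me */
int64_t deadline;                 /* data read by aio_compute_timeout() */

int64_t prepare_timeout(void)
{
    atomic_fetch_or(&notify_me, 1);   /* seq_cst read-modify-write */
    return deadline;                  /* plain, non-atomic load */
}

Is the plain load guaranteed to be observed after the fetch_or by a
thread that executes a full barrier and then reads notify_me, or does
the sequential-consistency guarantee only order the atomic accesses
themselves?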
>
> Normal VM runtime is not affected by this hang, since there is always
> some timer timeout, or a subsequent I/O worker comes along and notifies
> the main thread.
> To fix this problem, a memory barrier is added to aio_ctx_prepare; with
> it, the hang is no longer reproducible in our tests.
>
> This hang is not observed on the x86 platform, but it can easily be
> reproduced on aarch64, so it is architecture related. Not sure whether
> this is relevant to commit
> eabc977973103527bbb8fed69c91cfaa6691f8ab.
>
> Signed-off-by: Ying Fang <address@hidden>
> Signed-off-by: zhanghailiang <address@hidden>
> Reported-by: Euler Robot <address@hidden>
>
> ---
> v2: add comments before the barrier
>
> ---
> util/async.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git a/util/async.c b/util/async.c
> index b94518b..89a4f3e 100644
> --- a/util/async.c
> +++ b/util/async.c
> @@ -250,7 +250,8 @@ aio_ctx_prepare(GSource *source, gint *timeout)
>      AioContext *ctx = (AioContext *) source;
>
>      atomic_or(&ctx->notify_me, 1);
> -
> +    /* Make sure notify_me is set before aio_compute_timeout */
> +    smp_mb();
>      /* We assume there is no timeout already supplied */
>      *timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
>
> --
> 1.8.3.1
>
>