From: Kevin Wolf
Subject: Re: [PATCH v2 5/9] block/io: expand in_flight inc/dec section: simple cases
Date: Tue, 19 May 2020 16:33:16 +0200

On 19.05.2020 at 16:01, Vladimir Sementsov-Ogievskiy wrote:
> 19.05.2020 14:25, Vladimir Sementsov-Ogievskiy wrote:
> > 19.05.2020 14:16, Kevin Wolf wrote:
> > > On 19.05.2020 at 13:06, Vladimir Sementsov-Ogievskiy wrote:
> > > > 19.05.2020 13:52, Kevin Wolf wrote:
> > > > > On 06.05.2020 at 09:02, Vladimir Sementsov-Ogievskiy wrote:
> > > > > > 27.04.2020 17:39, Vladimir Sementsov-Ogievskiy wrote:
> > > > > > > It's safer to expand the in_flight section to start before
> > > > > > > entering the coroutine in synchronous wrappers, due to the
> > > > > > > following (theoretical) problem:
> > > > > > > 
> > > > > > > Consider write.
> > > > > > > It's possible that qemu_coroutine_enter only schedules
> > > > > > > execution; assume such a case.
> > > > > > > 
> > > > > > > Then we may possibly have the following:
> > > > > > > 
> > > > > > > 1. Somehow check in outer code that we are not in a drained
> > > > > > > section.
> > > > > > > 
> > > > > > > 2. Call bdrv_pwritev(), assuming that it will increase
> > > > > > > in_flight, which will protect us from a drained section
> > > > > > > starting.
> > > > > > > 
> > > > > > > 3. It calls bdrv_prwv_co() -> bdrv_coroutine_enter()
> > > > > > > (in_flight not yet increased).
> > > > > > > 
> > > > > > > 4. Assume the coroutine is not yet actually entered, only
> > > > > > > scheduled, and we go to some code which starts a drained
> > > > > > > section (as in_flight is zero).
> > > > > > > 
> > > > > > > 5. The scheduled coroutine starts and blindly increases
> > > > > > > in_flight, and now we are in a drained section with an
> > > > > > > in-flight request.
> > > > > > > 
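> > > > > > > To illustrate, here is a simplified sketch of the current
> > > > > > > write path (names follow block/io.c, details omitted; in the
> > > > > > > real code the inc/dec happens inside bdrv_co_pwritev()):
> > > > > > > 
> > > > > > >     static void coroutine_fn bdrv_rw_co_entry(void *opaque)
> > > > > > >     {
> > > > > > >         RwCo *rwco = opaque;
> > > > > > > 
> > > > > > >         /* Step 5: this runs only when the coroutine actually
> > > > > > >          * executes, possibly after a drained section has
> > > > > > >          * already begun. */
> > > > > > >         bdrv_inc_in_flight(rwco->child->bs);
> > > > > > >         rwco->ret = bdrv_co_pwritev(rwco->child, rwco->offset,
> > > > > > >                                     rwco->qiov->size,
> > > > > > >                                     rwco->qiov, rwco->flags);
> > > > > > >         bdrv_dec_in_flight(rwco->child->bs);
> > > > > > >     }
> > > > > > > 
> > > > > > >     static int bdrv_prwv_co(BdrvChild *child, ...)
> > > > > > >     {
> > > > > > >         RwCo rwco = {...};  /* request description */
> > > > > > >         Coroutine *co = qemu_coroutine_create(bdrv_rw_co_entry,
> > > > > > >                                               &rwco);
> > > > > > >         /* Step 3: this may only schedule the coroutine.
> > > > > > >          * in_flight is still zero here, so nothing prevents
> > > > > > >          * a drained section from starting (step 4). */
> > > > > > >         bdrv_coroutine_enter(child->bs, co);
> > > > > > >         BDRV_POLL_WHILE(child->bs, rwco.ret == NOT_DONE);
> > > > > > >         return rwco.ret;
> > > > > > >     }
> > > > > > > 
> > > > > > > This patch closes the window by moving bdrv_inc_in_flight()
> > > > > > > to before bdrv_coroutine_enter().
> > > > > > > 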
> > > > > > > Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
> > > > > > 
> > > > > > Very interesting: this patch breaks test-replication. It hangs:
> > > > > > 
> > > > > > (gdb) thr a a bt
> > > > > > 
> > > > > > Thread 2 (Thread 0x7eff256cd700 (LWP 2843)):
> > > > > > #0  0x00007eff2f5fd1fd in syscall () from /lib64/libc.so.6
> > > > > > #1  0x000055af9a9a4f11 in qemu_futex_wait (f=0x55af9aa6f758 <rcu_call_ready_event>, val=4294967295) at /work/src/qemu/up-expand-bdrv-in_flight-bounds/include/qemu/futex.h:29
> > > > > > #2  0x000055af9a9a50d5 in qemu_event_wait (ev=0x55af9aa6f758 <rcu_call_ready_event>) at util/qemu-thread-posix.c:459
> > > > > > #3  0x000055af9a9bd20d in call_rcu_thread (opaque=0x0) at util/rcu.c:260
> > > > > > #4  0x000055af9a9a5288 in qemu_thread_start (args=0x55af9c4f1b80) at util/qemu-thread-posix.c:519
> > > > > > #5  0x00007eff2f6d44c0 in start_thread () from /lib64/libpthread.so.0
> > > > > > #6  0x00007eff2f602553 in clone () from /lib64/libc.so.6
> > > > > > 
> > > > > > Thread 1 (Thread 0x7eff25820a80 (LWP 2842)):
> > > > > > #0  0x00007eff2f5f7bd6 in ppoll () from /lib64/libc.so.6
> > > > > > #1  0x000055af9a99e405 in qemu_poll_ns (fds=0x55af9c52a830, nfds=1, timeout=-1) at util/qemu-timer.c:335
> > > > > > #2  0x000055af9a9a1cab in fdmon_poll_wait (ctx=0x55af9c526890, ready_list=0x7ffc73e8c5d0, timeout=-1) at util/fdmon-poll.c:79
> > > > > > #3  0x000055af9a9a160c in aio_poll (ctx=0x55af9c526890, blocking=true) at util/aio-posix.c:600
> > > > > > #4  0x000055af9a8f0bb0 in bdrv_do_drained_begin (bs=0x55af9c52a8d0, recursive=false, parent=0x0, ignore_bds_parents=false, poll=true) at block/io.c:429
> > > > > > #5  0x000055af9a8f0c95 in bdrv_drained_begin (bs=0x55af9c52a8d0) at block/io.c:435
> > > > > > #6  0x000055af9a8dc6a8 in blk_drain (blk=0x55af9c542c10) at block/block-backend.c:1681
> > > > > > #7  0x000055af9a8da0b6 in blk_unref (blk=0x55af9c542c10) at block/block-backend.c:473
> > > > > > #8  0x000055af9a8eb5e7 in mirror_exit_common (job=0x55af9c6c45c0) at block/mirror.c:667
> > > > > > #9  0x000055af9a8eb9c1 in mirror_prepare (job=0x55af9c6c45c0) at block/mirror.c:765
> > > > > > #10 0x000055af9a87cd65 in job_prepare (job=0x55af9c6c45c0) at job.c:781
> > > > > > #11 0x000055af9a87b62a in job_txn_apply (job=0x55af9c6c45c0, fn=0x55af9a87cd28 <job_prepare>) at job.c:158
> > > > > > #12 0x000055af9a87cdee in job_do_finalize (job=0x55af9c6c45c0) at job.c:798
> > > > > > #13 0x000055af9a87cfb5 in job_completed_txn_success (job=0x55af9c6c45c0) at job.c:852
> > > > > > #14 0x000055af9a87d055 in job_completed (job=0x55af9c6c45c0) at job.c:865
> > > > > > #15 0x000055af9a87d0a8 in job_exit (opaque=0x55af9c6c45c0) at job.c:885
> > > > > > #16 0x000055af9a99b981 in aio_bh_call (bh=0x55af9c547440) at util/async.c:136
> > > > > > #17 0x000055af9a99ba8b in aio_bh_poll (ctx=0x55af9c526890) at util/async.c:164
> > > > > > #18 0x000055af9a9a17ff in aio_poll (ctx=0x55af9c526890, blocking=true) at util/aio-posix.c:650
> > > > > > #19 0x000055af9a8f7011 in bdrv_flush (bs=0x55af9c53b900) at block/io.c:3019
> > > > > > #20 0x000055af9a874351 in bdrv_close (bs=0x55af9c53b900) at block.c:4252
> > > > > > #21 0x000055af9a874ca3 in bdrv_delete (bs=0x55af9c53b900) at block.c:4498
> > > > > > #22 0x000055af9a877862 in bdrv_unref (bs=0x55af9c53b900) at block.c:5866
> > > > > > #23 0x000055af9a870837 in bdrv_root_unref_child (child=0x55af9c6c4430) at block.c:2684
> > > > > > #24 0x000055af9a8da9a2 in blk_remove_bs (blk=0x55af9c547bd0) at block/block-backend.c:803
> > > > > > #25 0x000055af9a8d9e54 in blk_delete (blk=0x55af9c547bd0) at block/block-backend.c:422
> > > > > > #26 0x000055af9a8da0f8 in blk_unref (blk=0x55af9c547bd0) at block/block-backend.c:477
> > > > > > #27 0x000055af9a86a6f1 in teardown_secondary () at tests/test-replication.c:392
> > > > > > #28 0x000055af9a86aac1 in test_secondary_stop () at tests/test-replication.c:490
> > > > > > #29 0x00007eff2fd7df7e in g_test_run_suite_internal () from /lib64/libglib-2.0.so.0
> > > > > > #30 0x00007eff2fd7dd24 in g_test_run_suite_internal () from /lib64/libglib-2.0.so.0
> > > > > > #31 0x00007eff2fd7dd24 in g_test_run_suite_internal () from /lib64/libglib-2.0.so.0
> > > > > > #32 0x00007eff2fd7e46a in g_test_run_suite () from /lib64/libglib-2.0.so.0
> > > > > > #33 0x00007eff2fd7e485 in g_test_run () from /lib64/libglib-2.0.so.0
> > > > > > #34 0x000055af9a86b19c in main (argc=1, argv=0x7ffc73e8d088) at tests/test-replication.c:645
> > > > > > 
> > > > > > 
> > > > > > (gdb) p ((BlockBackend *)0x55af9c547bd0)->in_flight
> > > > > > $5 = 0
> > > > > > (gdb) p ((BlockBackend *)0x55af9c542c10)->in_flight
> > > > > > $6 = 0
> > > > > > (gdb) p ((BlockDriverState *)0x55af9c53b900)->in_flight
> > > > > > $7 = 1
> > > > > > (gdb) p ((BlockDriverState *)0x55af9c52a8d0)->in_flight
> > > > > > $8 = 0
> > > > > > (gdb) fr 20
> > > > > > #20 0x000055af9a874351 in bdrv_close (bs=0x55af9c53b900) at block.c:4252
> > > > > > 4252        bdrv_flush(bs);
> > > > > > (gdb) p bs->node_name
> > > > > > $9 = "#block5317", '\000' <repeats 21 times>
> > > > > > (gdb) p bs->drv
> > > > > > $10 = (BlockDriver *) 0x55af9aa63c40 <bdrv_replication>
> > > > > > (gdb) p bs->in_flight
> > > > > > $11 = 1
> > > > > > (gdb) p bs->tracked_requests
> > > > > > $12 = {lh_first = 0x0}
> > > > > > 
> > > > > > 
> > > > > > So, we entered bdrv_flush at frame 19 and increased in_flight.
> > > > > > Then we went into aio_poll and a nested event loop, and we
> > > > > > never return to decrease the in_flight field.
> > > > > > 
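> > > > > > Schematically (not real code, just the nesting from the
> > > > > > backtrace above):
> > > > > > 
> > > > > >     int bdrv_flush(BlockDriverState *bs)    /* frame #19 */
> > > > > >     {
> > > > > >         ...
> > > > > >         bdrv_inc_in_flight(bs);         /* in_flight: 0 -> 1 */
> > > > > >         bdrv_coroutine_enter(bs, co);
> > > > > >         BDRV_POLL_WHILE(bs, ...);       /* frame #18: aio_poll */
> > > > > >         /*
> > > > > >          * Inside that aio_poll(), the job_exit BH (frames
> > > > > >          * #15-#17) runs mirror_exit_common() -> blk_drain() ->
> > > > > >          * bdrv_drained_begin() (frames #3-#6), which polls
> > > > > >          * until in-flight requests are gone. But our in_flight
> > > > > >          * reference is only dropped below, after
> > > > > >          * BDRV_POLL_WHILE() returns, so the nested loop can
> > > > > >          * never make progress.
> > > > > >          */
> > > > > >         bdrv_dec_in_flight(bs);         /* never reached */
> > > > > >         ...
> > > > > >     }
> > > > > > 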
> > > > > > Hmm. I'm afraid I don't know what to do with that. Kevin,
> > > > > > could you take a look? And could a similar thing happen at the
> > > > > > blk layer, because of your recent similar patch?
> > > > > 
> > > > > Hmm... You mean blk_prw(), right? Looks like it could have the same
> > > > > problem, indeed.
> > > > > 
> > > > > Maybe we need to move the blk/bdrv_dec_in_flight to inside the 
> > > > > coroutine
> > > > > (probably to the place where we currently have aio_wait_kick(), which
> > > > > would already be built in for bdrv_dec_in_flight). This is the last
> > > > > thing the coroutine does, so presumably it will still be late enough.
> > > > > 
> > > > 
> > > > But moving the "inc" into the coroutine is dangerous too: as we
> > > > discussed, coroutine_enter may only schedule the coroutine, and
> > > > something may call drain before the actual "inc".
> > > 
> > > No, I mean moving only the dec, not inc. So inc before entering the
> > > coroutine (outside of it), and dec at the end, but still inside the
> > > coroutine.
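> > >
> > > Roughly like this (only a sketch, using the bdrv_prwv_co() path as
> > > the example):
> > >
> > >     static void coroutine_fn bdrv_rw_co_entry(void *opaque)
> > >     {
> > >         RwCo *rwco = opaque;
> > >
> > >         rwco->ret = bdrv_co_pwritev(rwco->child, rwco->offset,
> > >                                     rwco->qiov->size, rwco->qiov,
> > >                                     rwco->flags);
> > >         /* Last thing the coroutine does; bdrv_dec_in_flight()
> > >          * already contains the aio_wait_kick(). */
> > >         bdrv_dec_in_flight(rwco->child->bs);
> > >     }
> > >
> > >     static int bdrv_prwv_co(BdrvChild *child, ...)
> > >     {
> > >         RwCo rwco = {...};
> > >         /* inc before entering the coroutine, so that a drain
> > >          * started between scheduling and execution already sees
> > >          * this request */
> > >         bdrv_inc_in_flight(child->bs);
> > >         Coroutine *co = qemu_coroutine_create(bdrv_rw_co_entry,
> > >                                               &rwco);
> > >         bdrv_coroutine_enter(child->bs, co);
> > >         BDRV_POLL_WHILE(child->bs, rwco.ret == NOT_DONE);
> > >         return rwco.ret;
> > >     }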
> > > 
> > 
> > Hmm, it probably makes sense. Ha, I hastened to answer on the cover
> > letter that this should all be dropped. OK, I'll give it a try and
> > check your idea, thanks!
> 
> I checked: this helps. I think I'd try to rebase it onto your "[RFC
> PATCH 1/3] block: Factor out bdrv_run_co()".

I haven't received much feedback for that series yet. But I assume
patch 1 won't be contentious, so that makes sense to me.
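
For reference, the idea of patch 1 is roughly the following (simplified
sketch; the actual helper in the RFC may differ in details):

    /* Factor out the common "create a coroutine for this entry point,
     * enter it and wait for completion" pattern shared by the
     * synchronous wrappers in block/io.c. */
    static int bdrv_run_co(BlockDriverState *bs, CoroutineEntry *entry,
                           void *opaque, int *ret)
    {
        if (qemu_in_coroutine()) {
            /* Fast path: already in coroutine context, call directly. */
            entry(opaque);
        } else {
            Coroutine *co = qemu_coroutine_create(entry, opaque);
            *ret = NOT_DONE;
            bdrv_coroutine_enter(bs, co);
            BDRV_POLL_WHILE(bs, *ret == NOT_DONE);
        }
        return *ret;
    }

The in_flight handling discussed above would then only need to be
adjusted in one place: inc before bdrv_coroutine_enter(), dec at the
end of each entry function.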

Kevin