
Re: [PATCH] mirror: Avoid assertion failed in mirror_run


From: wang.yi59
Subject: Re: [PATCH] mirror: Avoid assertion failed in mirror_run
Date: Wed, 8 Dec 2021 17:52:03 +0800 (CST)

>[CC-ing qemu-block, Vladimir, Kevin, and John – when sending patches,  
>please look into the MAINTAINERS file or use the  
>scripts/get_maintainer.pl script to find out who to CC on them.  It’s  
>very easy to overlook patches on qemu-devel :/]
>
>On 07.12.21 11:56, Yi Wang wrote:
>> From: Long YunJian <long.yunjian@zte.com.cn> 
>> 
>> When block-committing from the active leaf node, we sometimes get an
>> assertion failure:
>> "mirror_run: Assertion `QLIST_EMPTY(&bs->tracked_requests)' failed".
>> According to the core file, bs->tracked_requests still contains an I/O
>> request, so the assertion fails.
>> (gdb) bt
>> #0  0x00007f410df707cf in raise () from /lib64/libc.so.6
>> #1  0x00007f410df5ac05 in abort () from /lib64/libc.so.6
>> #2  0x00007f410df5aad9 in __assert_fail_base.cold.0 () from /lib64/libc.so.6
>> #3  0x00007f410df68db6 in __assert_fail () from /lib64/libc.so.6
>> #4  0x0000556915635371 in mirror_run (job=0x556916ff8600, errp=<optimized out>) at block/mirror.c:1092
>> #5  0x00005569155e6c53 in job_co_entry (opaque=0x556916ff8600) at job.c:904
>> #6  0x00005569156d9483 in coroutine_trampoline (i0=<optimized out>, i1=<optimized out>) at util/coroutine-ucontext.c:115
>> (gdb) p s->mirror_top_bs->backing->bs->tracked_requests
>> $12 = {lh_first = 0x7f3f07bfb8b0}
>> (gdb) p s->mirror_top_bs->backing->bs->tracked_requests->lh_first
>> $13 = (struct BdrvTrackedRequest *) 0x7f3f07bfb8b0
>> 
>> Actually, before executing assert(QLIST_EMPTY(&bs->tracked_requests)),
>> mirror_run() executes mirror_flush(s). That flush may itself handle new
>> I/O requests, and I/O may still be pending once it returns. Just like in
>> the bdrv_close function, where bdrv_flush(bs) is followed by
>> bdrv_drain(bs), we should add a bdrv_drain call after mirror_flush to
>> handle the pending I/O.
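
For reference, the bdrv_close pattern referred to above looks roughly like
this (a paraphrase from memory of block.c in QEMU releases of that era, not
a verbatim excerpt):

static void bdrv_close(BlockDriverState *bs)
{
    /* ... */
    bdrv_drained_begin(bs); /* complete I/O */
    bdrv_flush(bs);
    bdrv_drain(bs); /* in case flush left pending I/O */
    /* ... */
}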
>
>Oh.  How is that happening, though?  I would have expected that flushing  
>the target BB (and associated BDS) only flushes requests to the OS and  
>lower layers, but the source node (which is `bs`) should (in the case of  
>commit) always be above the target, so I wouldn’t have expected it to  
>get any new requests due to this flush.
>
>Do you have a reproducer for this?

As far as I know, a flush may itself issue some writes, and then in the
qcow2_co_pwritev function, if another coroutine already holds the "s->lock"
mutex, qemu_co_mutex_lock(&s->lock) will call qemu_coroutine_yield and let
other coroutines run. During that yield, new I/O requests may be handled.
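
To illustrate, here is a simplified sketch of that pattern (not the actual
qcow2 code; the function name example_co_pwritev is made up, but
qemu_co_mutex_lock() and qemu_coroutine_yield() are the real QEMU coroutine
primitives):

/* Sketch of the locking pattern in qcow2_co_pwritev() (block/qcow2.c).
 * Taking the coroutine mutex can yield; while this coroutine is suspended,
 * the event loop may dispatch new guest I/O, so bs->tracked_requests can
 * gain entries even though the flush is still "in progress". */
static coroutine_fn int example_co_pwritev(BlockDriverState *bs,
                                           uint64_t offset, uint64_t bytes,
                                           QEMUIOVector *qiov, int flags)
{
    BDRVQcow2State *s = bs->opaque;

    qemu_co_mutex_lock(&s->lock);  /* may qemu_coroutine_yield() if contended */
    /* ... perform the write under the lock ... */
    qemu_co_mutex_unlock(&s->lock);
    return 0;
}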

Reproducer:
First, create a leaf image alpine-min_leaf1.qcow2 whose backing file is
alpine-min.qcow2, and create a VM from that leaf qcow2; the disk XML looks
like this:
<disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' cache='none' iothread='1'/>
      <source file='/home/alpine-min_leaf1.qcow2'/>
      <target dev='vda' bus='virtio'/>
</disk>


Then generate I/O by continuously reading and writing files in the guest OS.
Finally, run the blockcommit.sh script below until the assertion fails (you
may need to adapt the paths and domain name to your environment).
The script is as follows:

#!/bin/bash
# Repeatedly snapshot and actively block-commit the domain's disk until
# the mirror_run assertion fires.
QEMUIMG=/home/qemu-6.0.0-rc2/build/qemu-img
VIRSH=/home/libvirt-7.2.0/build/tools/virsh
DOMID=v6_host7
LEAF1=/home/alpine-min_leaf1.qcow2
LEAF2=/home/alpine-min_leaf2.qcow2
while true
do
        # Create "quitfile" in the working directory to stop the loop.
        [ -f quitfile ] && break
        # Create a new overlay (leaf2) on top of leaf1 ...
        ${QEMUIMG} create -f qcow2 -F qcow2 -b ${LEAF1} ${LEAF2}
        sleep 2
        # ... switch the domain onto it with an external disk-only snapshot ...
        ${VIRSH} -k 0 snapshot-create-as ${DOMID} --disk-only \
                --diskspec vda,file=${LEAF2} --reuse-external --no-metadata
        [ $? -ne 0 ] && break
        sleep 30
        # ... then actively commit leaf2 back down into leaf1 and pivot.
        ${VIRSH} blockcommit --domain ${DOMID} vda --base ${LEAF1} \
                --top ${LEAF2} --wait --verbose --active --pivot
        [ $? -ne 0 ] && break
done


The probability of hitting the problem is low, so it is hard to reproduce;
it takes at least a few hundred blockcommit iterations.

>> Signed-off-by: Long YunJian <long.yunjian@zte.com.cn> 
>> Signed-off-by: Yi Wang <wang.yi59@zte.com.cn> 
>> ---
>>   block/mirror.c | 2 ++
>>   1 file changed, 2 insertions(+)
>> 
>> diff --git a/block/mirror.c b/block/mirror.c
>> index efec2c7674..1eec356310 100644
>> --- a/block/mirror.c
>> +++ b/block/mirror.c
>> @@ -1079,6 +1079,8 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
>>                   s->in_drain = false;
>>                   continue;
>>               }
>> +            /* in case flush left pending I/O */
>> +            bdrv_drain(bs);
>
>I don’t think this works, because if we drain, we would also need to  
>flush the target again.  Essentially I believe we’d basically need  
>something like
>
>do {
>     bdrv_drained_begin(bs);
>     mirror_flush(s);
>     if (!QLIST_EMPTY(&bs->tracked_requests)) {
>         bdrv_drained_end(bs);
>     }
>} while (!QLIST_EMPTY(&bs->tracked_requests));
>
>(Which I know is really ugly)
>
>Hanna

Yes, adding bdrv_drain(bs) after mirror_flush(s) can't fix this problem.
Thank you for your idea for a solution.
I'd like to modify it as follows instead; what do you think?
@@ -1074,7 +1074,7 @@ static int coroutine_fn mirror_run(Job *job, Error **errp)
             s->in_drain = true;
             bdrv_drained_begin(bs);
             cnt = bdrv_get_dirty_count(s->dirty_bitmap);
-            if (cnt > 0 || mirror_flush(s) < 0) {
+            if (cnt > 0 || mirror_flush(s) < 0 || !QLIST_EMPTY(&bs->tracked_requests)) {
                 bdrv_drained_end(bs);
                 s->in_drain = false;
                 continue;
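
For context, here is how the surrounding loop in mirror_run() would read
with this change (a paraphrased sketch of block/mirror.c around v6.0, not a
verbatim excerpt):

            s->in_drain = true;
            bdrv_drained_begin(bs);
            cnt = bdrv_get_dirty_count(s->dirty_bitmap);
            if (cnt > 0 || mirror_flush(s) < 0 ||
                !QLIST_EMPTY(&bs->tracked_requests)) {
                /* New dirty data, a flush failure, or requests that
                 * slipped in while mirror_flush() yielded: leave the
                 * drained section and go around the loop again. */
                bdrv_drained_end(bs);
                s->in_drain = false;
                continue;
            }

            /* The two disks are in sync.  Exit and report successful
             * completion. */
            assert(QLIST_EMPTY(&bs->tracked_requests));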

>    
>               /* The two disks are in sync.  Exit and report successful
>                * completion.


