From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH v11 04/14] block/backup: introduce BlockCopyState
Date: Fri, 20 Sep 2019 12:56:07 +0000

20.09.2019 15:46, Max Reitz wrote:
> On 13.09.19 20:25, Vladimir Sementsov-Ogievskiy wrote:
>> 10.09.2019 13:23, Vladimir Sementsov-Ogievskiy wrote:
>>> Split the copying code out of backup into "block-copy", including a
>>> separate state structure and function renaming. This is needed to share
>>> it with the backup-top filter driver in further commits.
>>>
>>> Notes:
>>>
>>> 1. As BlockCopyState keeps its own BlockBackend objects, the remaining
>>> job->common.blk users only use it to get the bs via blk_bs(), so clear
>>> the job->common.blk permissions set in block_job_create() and add
>>> job->source_bs to be used instead of blk_bs(job->common.blk). This keeps
>>> it clearer which bs we use when the backup-top filter is introduced in a
>>> further commit.
>>>
>>> 2. Rename s/initializing_bitmap/skip_unallocated/, which sounds a bit
>>> better as an interface to BlockCopyState.
>>>
>>> 3. The split is not very clean: some duplicated fields are left, and the
>>> backup code uses some BlockCopyState fields directly. Let's postpone that
>>> for further improvements and keep this commit simpler for review.
>>>
>>> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
>>> ---
>>
>>
>> [..]
>>
>>> +
>>> +static BlockCopyState *block_copy_state_new(
>>> +        BlockDriverState *source, BlockDriverState *target,
>>> +        int64_t cluster_size, BdrvRequestFlags write_flags,
>>> +        ProgressBytesCallbackFunc progress_bytes_callback,
>>> +        ProgressResetCallbackFunc progress_reset_callback,
>>> +        void *progress_opaque, Error **errp)
>>> +{
>>> +    BlockCopyState *s;
>>> +    int ret;
>>> +    uint64_t no_resize = BLK_PERM_CONSISTENT_READ | BLK_PERM_WRITE |
>>> +                         BLK_PERM_WRITE_UNCHANGED | BLK_PERM_GRAPH_MOD;
>>> +    BdrvDirtyBitmap *copy_bitmap;
>>> +
>>> +    copy_bitmap = bdrv_create_dirty_bitmap(source, cluster_size, NULL, errp);
>>> +    if (!copy_bitmap) {
>>> +        return NULL;
>>> +    }
>>> +    bdrv_disable_dirty_bitmap(copy_bitmap);
>>> +
>>> +    s = g_new(BlockCopyState, 1);
>>> +    *s = (BlockCopyState) {
>>> +        .source = blk_new(bdrv_get_aio_context(source),
>>> +                          BLK_PERM_CONSISTENT_READ, no_resize),
>>> +        .target = blk_new(bdrv_get_aio_context(target),
>>> +                          BLK_PERM_WRITE, no_resize),
>>> +        .copy_bitmap = copy_bitmap,
>>> +        .cluster_size = cluster_size,
>>> +        .len = bdrv_dirty_bitmap_size(copy_bitmap),
>>> +        .write_flags = write_flags,
>>> +        .use_copy_range = !(write_flags & BDRV_REQ_WRITE_COMPRESSED),
>>> +        .progress_bytes_callback = progress_bytes_callback,
>>> +        .progress_reset_callback = progress_reset_callback,
>>> +        .progress_opaque = progress_opaque,
>>> +    };
>>> +
>>> +    s->copy_range_size = QEMU_ALIGN_UP(MIN(blk_get_max_transfer(s->source),
>>> +                                           blk_get_max_transfer(s->target)),
>>> +                                       s->cluster_size);
>>
>> Pre-existing, but it obviously should be QEMU_ALIGN_DOWN. I can resend with
>> a separate fix, it may be fixed while queuing (if a resend is not needed for
>> other reasons), or I'll send a follow-up fix later, whichever you prefer.
> 
> Hm, true.  But then we'll also need to handle the (unlikely, admittedly)
> case where max_transfer < cluster_size, where this would then return 0
> (handled by setting use_copy_range = false).  So how about this:

Done in "[PATCH v12 0/2] backup: copy_range fixes".
If it is convenient, I'll rebase this series on top of
"[PATCH v12 0/2] backup: copy_range fixes".

> 
>> diff --git a/block/backup.c b/block/backup.c
>> index e5bcfe7177..ba4a37dbb5 100644
>> --- a/block/backup.c
>> +++ b/block/backup.c
>> @@ -182,9 +182,13 @@ static BlockCopyState *block_copy_state_new(
>>           .progress_opaque = progress_opaque,
>>       };
>>   
>> -    s->copy_range_size = QEMU_ALIGN_UP(MIN(blk_get_max_transfer(s->source),
>> -                                           blk_get_max_transfer(s->target)),
>> -                                       s->cluster_size);
>> +    s->copy_range_size = QEMU_ALIGN_DOWN(MIN(blk_get_max_transfer(s->source),
>> +                                             blk_get_max_transfer(s->target)),
>> +                                         s->cluster_size);
>> +    if (s->copy_range_size == 0) {
>> +        /* max_transfer < cluster_size */
>> +        s->use_copy_range = false;
>> +    }
>>   
>>       /*
>>        * We just allow aio context change on our block backends. block_copy() user
> 
> Max
> 
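To make the reasoning concrete, here is a small standalone sketch (not QEMU
code; the ALIGN_* macros below mirror QEMU_ALIGN_UP/QEMU_ALIGN_DOWN from
include/qemu/osdep.h, and the sizes are made-up example values). Aligning up
can produce a copy_range_size larger than the backends' max_transfer, while
aligning down stays within the limit and yields 0 in the
max_transfer < cluster_size corner case, which is exactly when copy_range
has to be disabled:

#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Same definitions as QEMU_ALIGN_DOWN / QEMU_ALIGN_UP in osdep.h. */
#define ALIGN_DOWN(n, m) ((n) / (m) * (m))
#define ALIGN_UP(n, m)   ALIGN_DOWN((n) + (m) - 1, (m))

int main(void)
{
    uint64_t cluster_size = 64 * 1024;      /* example value */
    uint64_t max_transfer = 96 * 1024;      /* example value */

    /* Aligning up overshoots the backend limit: 128 KiB > 96 KiB. */
    printf("up:   %" PRIu64 "\n", ALIGN_UP(max_transfer, cluster_size));
    /* Aligning down stays within it: 64 KiB. */
    printf("down: %" PRIu64 "\n", ALIGN_DOWN(max_transfer, cluster_size));

    /* Corner case: max_transfer < cluster_size gives 0, so fall back. */
    max_transfer = 32 * 1024;
    uint64_t copy_range_size = ALIGN_DOWN(max_transfer, cluster_size);
    bool use_copy_range = copy_range_size != 0;
    printf("copy_range_size=%" PRIu64 ", use_copy_range=%d\n",
           copy_range_size, use_copy_range);

    return 0;
}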


-- 
Best regards,
Vladimir
