From: Max Reitz
Subject: Re: [Qemu-devel] [PATCH 4/8] block/backup: improve unallocated clusters skipping
Date: Wed, 7 Aug 2019 20:01:05 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.8.0
On 07.08.19 10:07, Vladimir Sementsov-Ogievskiy wrote:
> Limit block_status querying to request bounds on write notifier to
> avoid extra seeking.
I don’t understand this reasoning.  For qcow2, checking whether
something is allocated should just mean an L2 cache lookup, which we
have to do anyway when we try to copy data off the source.
I could understand the argument that this makes the code simpler, but I
don’t actually think it does.  If you checked the allocation status
only for areas where the bitmap is set (which I think this patch
should do), the new loop would just duplicate the existing one.
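Something along these lines is what I would have expected (a completely
untested sketch; I am assuming bdrv_dirty_bitmap_next_dirty_area() with
its in/out *offset/*bytes parameters here, so take the exact calls with
a grain of salt):

if (job->initializing_bitmap) {
    uint64_t off = offset;
    uint64_t bytes = end - offset;

    /* Only query the allocation status for ranges that are still
     * dirty in the copy bitmap; clean ranges never even reach
     * block_status. */
    while (bdrv_dirty_bitmap_next_dirty_area(job->copy_bitmap,
                                             &off, &bytes)) {
        int64_t chunk;

        ret = backup_bitmap_reset_unallocated(job, off, bytes, &chunk);
        if (ret < 0) {
            chunk = job->cluster_size;
        }

        off += chunk;
        if (off >= end) {
            break;
        }
        bytes = end - off;
    }
}
ret = 0;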
> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
> ---
> block/backup.c | 38 +++++++++++++++++++++-----------------
> 1 file changed, 21 insertions(+), 17 deletions(-)
>
> diff --git a/block/backup.c b/block/backup.c
> index 11e27c844d..a4d37d2d62 100644
> --- a/block/backup.c
> +++ b/block/backup.c
[...]
> @@ -267,6 +267,18 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>      wait_for_overlapping_requests(job, start, end);
>      cow_request_begin(&cow_request, job, start, end);
>
> +    if (job->initializing_bitmap) {
> +        int64_t off, chunk;
> +
> +        for (off = offset; offset < end; offset += chunk) {
This is what I’m referring to: I think this loop should skip areas that
are clean (along the lines of the sketch above).
> +            ret = backup_bitmap_reset_unallocated(job, off, end - off,
> +                                                  &chunk);
> +            if (ret < 0) {
> +                chunk = job->cluster_size;
> +            }
> +        }
> +    }
> +    ret = 0;
> +
>      while (start < end) {
>          int64_t dirty_end;
>
> @@ -276,15 +288,6 @@ static int coroutine_fn backup_do_cow(BackupBlockJob *job,
>              continue; /* already copied */
>          }
>
> -        if (job->initializing_bitmap) {
> -            ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
> -            if (ret == 0) {
> -                trace_backup_do_cow_skip_range(job, start, skip_bytes);
> -                start += skip_bytes;
> -                continue;
> -            }
> -        }
> -
>          dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
>                                                  end - start);
>          if (dirty_end < 0) {
Hm, you resolved that conflict differently from me.
I decided the bdrv_dirty_bitmap_next_zero() call should go before the
backup_bitmap_reset_unallocated() call so that we can then do a

    dirty_end = MIN(dirty_end, start + skip_bytes);

because we probably don’t want to copy anything past the range that
backup_bitmap_reset_unallocated() has actually inquired about.
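Concretely, something like this (untested, using the pre-patch
signature of backup_bitmap_reset_unallocated()):

dirty_end = bdrv_dirty_bitmap_next_zero(job->copy_bitmap, start,
                                        end - start);
if (dirty_end < 0) {
    dirty_end = end;
}

if (job->initializing_bitmap) {
    ret = backup_bitmap_reset_unallocated(job, start, &skip_bytes);
    if (ret == 0) {
        trace_backup_do_cow_skip_range(job, start, skip_bytes);
        start += skip_bytes;
        continue;
    }
    /* Do not copy past the range block_status has vouched for: */
    dirty_end = MIN(dirty_end, start + skip_bytes);
}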
But then again this patch eliminates the difference anyway...
Max
> @@ -546,7 +549,8 @@ static int coroutine_fn backup_run(Job *job, Error **errp)
>                  goto out;
>              }
> 
> -            ret = backup_bitmap_reset_unallocated(s, offset, &count);
> +            ret = backup_bitmap_reset_unallocated(s, offset, s->len - offset,
> +                                                  &count);
>              if (ret < 0) {
>                  goto out;
>              }
>