Re: [Qemu-block] [Qemu-devel] [PATCH 2/2] iotests: test big qcow2 shrink


From: Eric Blake
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH 2/2] iotests: test big qcow2 shrink
Date: Wed, 17 Apr 2019 10:04:05 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101 Thunderbird/60.6.1

On 4/17/19 5:09 AM, Vladimir Sementsov-Ogievskiy wrote:
> Signed-off-by: Vladimir Sementsov-Ogievskiy <address@hidden>
> ---
>  tests/qemu-iotests/249     | 69 ++++++++++++++++++++++++++++++++++++++
>  tests/qemu-iotests/249.out | 30 +++++++++++++++++
>  tests/qemu-iotests/group   |  1 +
>  3 files changed, 100 insertions(+)
>  create mode 100755 tests/qemu-iotests/249
>  create mode 100644 tests/qemu-iotests/249.out

> +
> +# This test checks bug in qcow2_process_discards, fixed by previous commit.

This sentence makes sense in a commit message, but is a bit stale in the
test itself. Simpler to just state:

This test checks that qcow2_process_discards does not truncate a discard
request > 2G.

> +# The problem was that bdrv_pdiscard was called with uint64_t bytes parameter
> +# which was cropped to int.

then drop this sentence.

> +# To reproduce bug we need to overflow int by one sequential discard, so we
> +# need size > 2G, bigger cluster size (as with default 64k we may have 
> maximum
> +# of 512M sequential data, corresponding to one L1 entry, and we need to 
> write

s/entry,/entry),/

> +# at offset 0 at last, to prevent bdrv_co_truncate(bs->file) in
> +# qcow2_co_truncate to stole the whole effect of failed discard.

and we need a second write at offset 0 to prevent
bdrv_co_truncate(bs->file) from short-circuiting the entire discard.

> +
> +size=2300M
> +IMGOPTS="cluster_size=1M"
> +
> +_make_test_img $size
> +$QEMU_IO -c 'write 100M 2000M' -c 'write 2100M 200M' \
> +    -c 'write 0 100M' "$TEST_IMG" | _filter_qemu_io

This takes a LONG time to produce; and when I tried it on a 32-bit
machine, I got an ENOMEM failure for being unable to allocate a 2000M
buffer. Is there any faster way to produce a similar setup, perhaps by
manually messing with the qcow2 file rather than directly writing that
much consecutive data?  Even if there isn't, you'll want to at least
make the test gracefully skip rather than fail if it hits ENOMEM.

> +Image resized.
> +image: TEST_DIR/t.qcow2
> +file format: qcow2
> +virtual size: 50M (52428800 bytes)
> +disk size: 54M

I confirmed that when patch 1/2 is not present, this reports 2.2G
instead of 54M.

The output here will need tweaking if my patch to rework qemu-img
human-readable sizing lands first (it's currently queued on Kevin's
block-next tree, but needs a rework to fix a few more iotests and to
resolve a 32-bit compilation issue).

> +++ b/tests/qemu-iotests/group
> @@ -248,3 +248,4 @@
>  246 rw auto quick
>  247 rw auto quick
>  248 rw auto quick
> +249 rw auto
> 

I agree that this isn't quick. I will also point out that at least three
earlier messages have also claimed 249:

Berto: [PATCH for-4.1 v2 2/2] iotests: Check that images are in
read-only mode after block-commit

Max: [PATCH v3 6/7] iotests: Test qemu-img convert --salvage

and yourself :) [PATCH v6 7/7] iotests: test nbd reconnect

so someone gets to rename.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org


