From: Kevin Wolf
Subject: Re: [Qemu-devel] [PATCH] qcow2: Fix the calculation of the maximum L2 cache size
Date: Fri, 16 Aug 2019 14:59:21 +0200
User-agent: Mutt/1.11.3 (2019-02-01)
On 16.08.2019 at 14:17, Alberto Garcia wrote:
> The size of the qcow2 L2 cache defaults to 32 MB, which can easily be
> larger than the maximum amount of L2 metadata that the image can have.
> For example: with 64 KB clusters the user would need a qcow2 image
> with a virtual size of 256 GB in order to have 32 MB of L2 metadata.
>
> Because of that, since commit b749562d9822d14ef69c9eaa5f85903010b86c30
> we forbid the L2 cache to become larger than the maximum amount of L2
> metadata for the image, calculated using this formula:
>
> uint64_t max_l2_cache = virtual_disk_size / (s->cluster_size / 8);
>
> The problem with this formula is that the result should be rounded up
> to the cluster size because an L2 table on disk always takes one full
> cluster.
>
> For example, a 1280 MB qcow2 image with 64 KB clusters needs exactly
> 160 KB of L2 metadata, but we need 192 KB on disk (3 clusters) even if
> the last 32 KB of those are not going to be used.
>
> However QEMU rounds the numbers down and only creates 2 cache tables
> (128 KB), which is not enough for the image.
>
> A quick test doing 4 KB random writes on a 1280 MB image gives me
> around 500 IOPS, while with the correct cache size I get 16K IOPS.
>
> Signed-off-by: Alberto Garcia <address@hidden>
Hm, this is bad. :-(
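The fix then amounts to rounding that result up to the cluster size.
Roughly, using QEMU's ROUND_UP() macro (a simplified sketch, not
necessarily the exact code of the patch):

    /* An L2 table always occupies a full cluster on disk, so the
     * cache limit must be a multiple of the cluster size. */
    uint64_t max_l2_cache =
        ROUND_UP(virtual_disk_size / (s->cluster_size / 8),
                 s->cluster_size);

For the 1280 MB example above that gives ROUND_UP(160 KB, 64 KB) =
192 KB, i.e. the three tables the image actually needs.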
The condition for this bug not to affect a user seems to be that the
image size is a multiple of 64k * 8k = 512 MB, which means that users
are probably often lucky enough in practice.
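For the record, a back-of-the-envelope check of where the 512 MB comes
from (variable names made up for illustration):

    /* With the default 64 KB clusters, an L2 table holds
     * cluster_size / 8 = 8192 entries of 8 bytes each, and each
     * entry maps one 64 KB cluster: */
    uint64_t entries_per_table = 65536 / 8;            /* 8192 */
    uint64_t l2_table_coverage =
        entries_per_table * 65536;                     /* 512 MB */

So the old division only loses something when the image size leaves a
partial L2 table behind, i.e. when it is not a multiple of 512 MB.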
I'll Cc: qemu-stable anyway.
Thanks, applied to the block branch.
Kevin