qemu-block

From: Vladimir Sementsov-Ogievskiy
Subject: Re: [Qemu-block] [Qemu-devel] [PATCH 0/2] buffer and delay backup COW write operation
Date: Tue, 30 Apr 2019 10:35:32 +0000

28.04.2019 13:01, Liang Li wrote:
> If the backup target is a slow device such as a Ceph RBD volume, the backup
> process seriously degrades guest block write I/O performance. This is
> caused by the drawback of the COW mechanism: if the guest overwrites an
> area that has not yet been backed up, the write can only complete after
> the old data has been written to the backup target.
> The impact can be relieved by buffering the data read from the backup
> source and writing it to the backup target later, so the guest block
> write I/O can be completed promptly.
> Areas that are not overwritten are processed as before, without
> buffering, so in most cases a very large buffer is not needed.
> 
> An fio test was run while the backup was in progress; the results
> show an obvious performance improvement from buffering.

Hi Liang!

Good idea. I briefly mentioned something like this in my KVM Forum 2018
talk as "RAM Cache", and I'd really prefer this functionality to be a
separate filter rather than a complication of the backup code. Furthermore,
write notifiers will go away from the backup code once my backup-top series
is merged.

v5: https://lists.gnu.org/archive/html/qemu-devel/2018-12/msg06211.html
and the separate preparatory refactoring, v7:
https://lists.gnu.org/archive/html/qemu-devel/2019-04/msg04813.html

RAM Cache should be a filter driver with in-memory buffers for data written
to it, and with the ability to flush that data to the underlying backing file.
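To illustrate the buffering idea, here is a minimal, self-contained model of
a buffered copy-before-write path. All names and structures are illustrative
assumptions for this sketch, not actual QEMU APIs, and the world is simplified
to fixed-size clusters:

```c
/* Sketch of buffered copy-before-write (CBW): on a guest write, the old
 * data is stashed in RAM instead of being written synchronously to the
 * slow backup target; a background step flushes the buffers later.
 * All names here are illustrative, not QEMU APIs. */
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define CLUSTER_SIZE 4096
#define NUM_CLUSTERS 8

typedef struct {
    uint8_t data[CLUSTER_SIZE];
    int cluster;    /* which cluster this buffer holds */
    int pending;    /* 1 = not yet flushed to the backup target */
} CowBuffer;

typedef struct {
    uint8_t source[NUM_CLUSTERS][CLUSTER_SIZE];  /* guest-visible disk */
    uint8_t target[NUM_CLUSTERS][CLUSTER_SIZE];  /* slow backup target */
    int copied[NUM_CLUSTERS];                    /* cluster already captured? */
    CowBuffer bufs[NUM_CLUSTERS];
    int nbufs;
} BackupJob;

/* Guest write: capture the old data into an in-memory buffer and let
 * the guest write proceed immediately (the fast path). */
static void guest_write(BackupJob *job, int cluster, const uint8_t *new_data)
{
    if (!job->copied[cluster]) {
        CowBuffer *b = &job->bufs[job->nbufs++];
        memcpy(b->data, job->source[cluster], CLUSTER_SIZE);
        b->cluster = cluster;
        b->pending = 1;
        job->copied[cluster] = 1;  /* old data is now safe in RAM */
    }
    memcpy(job->source[cluster], new_data, CLUSTER_SIZE);
}

/* Background flush: drain buffered old data to the backup target. */
static void flush_buffers(BackupJob *job)
{
    for (int i = 0; i < job->nbufs; i++) {
        CowBuffer *b = &job->bufs[i];
        if (b->pending) {
            memcpy(job->target[b->cluster], b->data, CLUSTER_SIZE);
            b->pending = 0;
        }
    }
}
```

The key property is visible in the ordering: the guest's write to `source`
completes without waiting on any I/O to `target`, which only sees the old
data when `flush_buffers()` runs.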

Also, here is another approach to the problem, which helps if guest write
activity is so high and sustained that the buffer fills up and performance
drops anyway:

1. Create a local temporary image, and let the CBW operations go to it.
(It was previously agreed on the list that we should call these backup
operations issued for guest writes CBW = copy-before-write, since
copy-on-write generally means something else, and using that term for
backup is confusing.)

2. Also set the original disk as the backing file of the temporary image,
and start another backup from the temporary image to the real target.
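Assuming a qcow2 temporary image and jobs driven from the command line and
QMP, the two steps above might look roughly like this (file names, device
names, and the exact job commands are illustrative for this sketch):

```shell
# Step 1: temporary image backed by the active disk, so unwritten
# clusters remain resolvable through the backing chain.
qemu-img create -f qcow2 -b source.qcow2 -F qcow2 temp.qcow2

# Step 2 (QMP, illustrative): CBW from source to temp via sync=none,
# then a normal backup job from temp to the real (slow) target.
# { "execute": "drive-backup",
#   "arguments": { "device": "source-drive", "target": "temp.qcow2",
#                  "sync": "none" } }
# { "execute": "drive-backup",
#   "arguments": { "device": "temp-drive",
#                  "target": "real-target.qcow2", "sync": "full" } }
```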

This scheme is almost possible now: to implement [1], you start
backup(sync=none) from source to temp. A few more patches are still needed
to allow the full scheme. I didn't send them, as I want my other backup
patches to go in first anyway, but I can. On the other hand, if the
approach with an in-memory buffer works for you, it may be better.

Also, I'm not sure yet whether we should really do this through two backup
jobs, or whether we just need one separate backup-top filter plus one
backup job without a filter, or an additional parameter for the backup job
to set a cache-block-node.

> 
> Test result(1GB buffer):
> ========================
> fio setting:
> [random-writers]
> ioengine=libaio
> iodepth=8
> rw=randwrite
> bs=32k
> direct=1
> size=1G
> numjobs=1
> 
> result:
>                        IOPS    AVG latency
>        no backup:     19389         410 us
>           backup:      1402        5702 us
> backup w/ buffer:      8684         918 us
> ==============================================
> 
> Cc: John Snow <address@hidden>
> Cc: Kevin Wolf <address@hidden>
> Cc: Max Reitz <address@hidden>
> Cc: Wen Congyang <address@hidden>
> Cc: Xie Changlong <address@hidden>
> Cc: Markus Armbruster <address@hidden>
> Cc: Eric Blake <address@hidden>
> Cc: Fam Zheng <address@hidden>
> 
> Liang Li (2):
>    backup: buffer COW request and delay the write operation
>    qapi: add interface for setting backup cow buffer size
> 
>   block/backup.c            | 118 +++++++++++++++++++++++++++++++++++++++++-----
>   block/replication.c       |   2 +-
>   blockdev.c                |   5 ++
>   include/block/block_int.h |   2 +
>   qapi/block-core.json      |   5 ++
>   5 files changed, 118 insertions(+), 14 deletions(-)
> 


-- 
Best regards,
Vladimir
