qemu-block

Re: [PATCH v6 1/3] block: introduce compress filter driver


From: Vladimir Sementsov-Ogievskiy
Subject: Re: [PATCH v6 1/3] block: introduce compress filter driver
Date: Tue, 12 Nov 2019 10:24:09 +0000

12.11.2019 13:07, Andrey Shinkevich wrote:
> On 12/11/2019 12:39, Kevin Wolf wrote:
>> Am 11.11.2019 um 17:04 hat Andrey Shinkevich geschrieben:
>>> Allow writing all data compressed through the filter driver.
>>> The written data will be aligned to the cluster size.
>>> With QEMU's current implementation, such data can be written to
>>> unallocated clusters only. May be used for a backup job.
>>>
>>> Suggested-by: Max Reitz <address@hidden>
>>> Signed-off-by: Andrey Shinkevich <address@hidden>
>>
>>> +static BlockDriver bdrv_compress = {
>>> +    .format_name                        = "compress",
>>> +
>>> +    .bdrv_open                          = zip_open,
>>> +    .bdrv_child_perm                    = zip_child_perm,
>>
>> Why do you call the functions zip_* when the driver is called compress?
>> I think zip would be a driver for zip archives, which we don't use here.
>>
> 
> Kevin,
> Thanks for your response.
> I was trying to come up with a short form of 'compress'.
> I will change 'zip' to something like 'compr'.

I'd keep it 'compress'; it sounds better.

> 
>>> +    .bdrv_getlength                     = zip_getlength,
>>> +    .bdrv_co_truncate                   = zip_co_truncate,
>>> +
>>> +    .bdrv_co_preadv                     = zip_co_preadv,
>>> +    .bdrv_co_preadv_part                = zip_co_preadv_part,
>>> +    .bdrv_co_pwritev                    = zip_co_pwritev,
>>> +    .bdrv_co_pwritev_part               = zip_co_pwritev_part,
>>
>> If you implement .bdrv_co_preadv/pwritev_part, isn't the implementation
>> of .bdrv_co_preadv/pwritev (without _part) dead code?
>>
> 
> Understood and will remove the dead path.
> 
>>> +    .bdrv_co_pwrite_zeroes              = zip_co_pwrite_zeroes,
>>> +    .bdrv_co_pdiscard                   = zip_co_pdiscard,
>>> +    .bdrv_refresh_limits                = zip_refresh_limits,
>>> +
>>> +    .bdrv_eject                         = zip_eject,
>>> +    .bdrv_lock_medium                   = zip_lock_medium,
>>> +
>>> +    .bdrv_co_block_status               = bdrv_co_block_status_from_backing,
>>
>> Why not use bs->file? (Well, apart from the still not merged filter
>> series by Max...)
>>
> 
> We need to keep the backing chain unbroken with the filter inserted, so
> the backing child should not be NULL. That is necessary for the backup
> job, for instance. When I initialized both children to point to the same
> node, the code didn't work properly. I will have to reproduce it again to
> tell you exactly what happened, and would appreciate your advice on a
> proper design.
> 
> Andrey


File-child based filters are a pain: they need Max's 42-patch series (which
seems postponed now :() to work correctly, or at least more correctly than now.
File-child based filters break backing chains, while backing-child based ones
work well. So I don't know of any benefit of file-child based filters, and I
think there is no reason to create new ones. If in the future we need
file-child support in the filter for some reason, it is simple to add: the
filter would still have only one child, but it could be either backing or
file, as requested by the user.

So I propose to start with a backing child, which works better, and add
file-child support in the future if needed.

Also, note that the backup-top filter uses a backing child, and there is a
reason to make it a public filter (to implement image fleecing without a
backup job), so we will have a public backing-child based filter anyway.

Also, we have a pending series about using the COR filter in block-stream (it
breaks the backing chain, as it is file-child based), and now I think the
simplest way to fix it is to support a backing child in block-stream, so the
block-stream job would create a COR filter with a backing child instead of a
file child.

> 
>>> +    .bdrv_recurse_is_first_non_filter   = zip_recurse_is_first_non_filter,
>>> +
>>> +    .has_variable_length                = true,
>>> +    .is_filter                          = true,
>>> +};
>>
>> Kevin
>>
> 


-- 
Best regards,
Vladimir


