
Re: [Duplicity-talk] Deletion


From: edgar.soldin
Subject: Re: [Duplicity-talk] Deletion
Date: Sat, 25 Jan 2020 09:21:25 +0100
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101 Thunderbird/68.3.1

On 24.01.2020 23:05, Scott Hannahs via Duplicity-talk wrote:
>> On Jan 24, 2020, at 4:53 PM, Fogust via Duplicity-talk <address@hidden> wrote:
>>
>> On 2020-01-24 05:02, address@hidden wrote:
>>
>>> hi Fogust,
>>>  
>>> --cut--
>>>
>>> duplicity treats the backend as dumb file storage. it merely writes or
>>> reads data there. and because there is no duplicity running on the target, a
>>> "reconstruction" approach would need to re-transfer the whole data.
>>> that is in turn equivalent to simply doing a new full backup, which is the
>>> suggested methodology here. simply use '--full-if-older-than 6M' during
>>> backup and you will end up with a new independent chain every 6 months that
>>> can be safely discarded when no longer needed.
>>>
>>> personally i keep monthly chains* and verify after every backup, just to be 
>>> sure.
>>>
>>> *if one volume gets corrupted the whole chain might not be restorable by 
>>> default means, so regular full backups are advised.
>>>
>>>  
>>
>> Thank you. I have a large amount of data, and there's a good chance that a
>> significant portion will not change (or will rarely change). So, if I have
>> 10TB of data and 6TB doesn't change between backups, is there a way to
>> intelligently update the backup, folding the accumulated incremental changes
>> into a new full backup?
>>
> Not on the server side!  As Edgar pointed out, the server is considered
> compromised and has no keys or access to the data.

well. yes and no :). actually, the files belonging to one backup point in time
are identifiable by name and timestamp. so as a _workaround_ you could base
your new chain on the full that is already available on the remote backend.
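
for illustration, the files of a full backup set typically look like this on
the backend (timestamps made up), while incrementals start with
'duplicity-inc.' and 'duplicity-new-signatures.' instead:

  duplicity-full.20200101T120000Z.manifest.gpg
  duplicity-full.20200101T120000Z.vol1.difftar.gpg
  duplicity-full.20200101T120000Z.vol2.difftar.gpg
  duplicity-full-signatures.20200101T120000Z.sigtar.gpg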

steps (a rough sketch follows below):
1. remote: create a new folder
2. remote: copy the files whose names start with 'full' from the old target
folder into the new one
3. local: run an incremental backup against the new location
4. local: verify, to make sure everything worked out
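
a minimal sketch of these steps, assuming sftp access (host, user and paths
are made up, adjust to your setup):

  # on the remote host: seed the new chain with a copy of the existing full
  mkdir /backups/new
  cp /backups/old/duplicity-full.* /backups/old/duplicity-full-signatures.* \
     /backups/new/

  # on the local machine: incremental against the new location, then verify
  duplicity incremental /data sftp://user@host//backups/new
  duplicity verify sftp://user@host//backups/new /data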

> Thus it cannot coalesce incrementals with the initial full backup.  To do
> so, all the data would have to be downloaded to the local machine, coalesced,
> and then uploaded back to the server.  That costs more CPU and network
> bandwidth than just uploading a new full backup.

couldn't have written it better.

> Other, non-secure backup tools can do this.  There are several commercial
> systems, as well as various script collections built around rsync.  Duplicity,
> by its name and design, includes top-shelf security with as much efficiency
> as possible.

well. it's as secure as gpg is. if that's top shelf, then yes. ;)

> Also note the issue of corruption with long chains and not making fresh full 
> backups.

consider reasonably short chains, and use the multi backend to place backups
in several locations. there is also a par2 backend to protect against minor
damage.
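
for illustration, short chains and par2 protection combine like this; the par2
backend is just a prefix on the normal target URL (URL and paths made up, see
the manpage for details and for the multi backend's config file format):

  # new full every month, par2 redundancy files stored next to the volumes
  duplicity --full-if-older-than 1M /data par2+sftp://user@host//backups/new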

have fun.. ede/duply.net


