
Re: [Duplicity-talk] Minimizing risk on incremental


From: edgar . soldin
Subject: Re: [Duplicity-talk] Minimizing risk on incremental
Date: Sun, 23 Apr 2017 12:44:43 +0200
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:45.0) Gecko/20100101 Thunderbird/45.8.0

hey Manuel,

On 22.04.2017 15:15, Manuel Morales via Duplicity-talk wrote:
> I guess I could do a full backup every week, but that will cause a full 400GB 
> of data every week, so in a few weeks I will have a few terabytes of data on 
> rsync.net.
> 

check the ml archives or the search engine of your choice. this topic comes up 
every now and then, and generally, yes: find a pattern of fulls/incrementals 
that fits your storage constraints and maximum chain length (== risk of 
corruption).
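for example, something along these lines; the url, key id and paths are just 
placeholders, adjust them to your rsync.net account:

  # run an incremental, but start a fresh full chain once a month
  duplicity --full-if-older-than 1M --encrypt-key ABCD1234 \
    /data sftp://user@user.rsync.net/backup

  # keep only the two newest full chains to cap storage use
  duplicity remove-all-but-n-full 2 --force \
    sftp://user@user.rsync.net/backup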

> 
> Is there any way to make duplicity just sync the file system, and avoid the 
> incremental part?
> 
> The attractive part of duplicity in my case is the encryption, but it seems 
> the “incremental backup” is not ideal for me.

putting files on backends is simply the most compatible way to use virtually 
any backend (duplicity even supports imap), and that leads to backup chains. 
incrementals are essentially what you describe as syncing.
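i.e. a plain run like the one below, url again a placeholder, already does that 
"sync": with an existing chain it uploads only the changes since the last run, 
as a new incremental:

  # no "full" keyword -> incremental on top of the existing chain
  duplicity /data sftp://user@user.rsync.net/backup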

the problem is that duplicity treats backends as dumb storage, so there is no 
way to efficiently duplicate/verify the old full, or consolidate a new full 
from the latest chain state, on the backend alone without transferring the 
whole shebang back to the source machine.

but that will probably never change, as the target backend is treated as 
untrusted and should not be allowed to decrypt your backups.

in summary: if you want to stick w/ duplicity,

 - do regular fulls,
 - verify regularly, and
 - check your backup logs.

the verify makes sure that your backup is restorable and not corrupted so far. 
manually start a new chain in case a verify fails at some point; an example of 
both follows below.
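for reference, verify and a manually started full look something like this; 
url/paths are placeholders again:

  # compare the backup against the local tree; this downloads and decrypts
  # the whole chain, so expect it to take a while on 400GB
  duplicity verify sftp://user@user.rsync.net/backup /data

  # force a fresh full chain, e.g. after a failed verify
  duplicity full /data sftp://user@user.rsync.net/backup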

..ede/duply.net


