From: Dominic Raferd
Subject: Re: [rdiff-backup-users] Re: Remote encrypted backup with slow connection.
Date: Tue, 22 Sep 2009 15:36:32 +0100
User-agent: Thunderbird 2.0.0.23 (Windows/20090812)
Piotr Karbowski wrote:
> On Tue, Sep 22, 2009 at 10:51 AM, Dominic Raferd <address@hidden> wrote:
>> Piotr Karbowski wrote:
>>> On Mon, Sep 21, 2009 at 8:21 PM, Dominic Raferd <address@hidden> wrote:
>>>> Piotr Karbowski wrote:
>>>>> On Mon, Sep 21, 2009 at 2:02 PM, Matthew Miller <address@hidden> wrote:
>>>>>> On Mon, Sep 21, 2009 at 01:58:11PM +0200, Piotr Karbowski wrote:
>>>>>>> local rdiff-backup dir with remote server but how? If I use, for
>>>>>>> example, rsync, it still needs to check whole files for changes
>>>>>>> (read/download them) and upload only the new data. I hope you
>>>>>>> understand what I need and can help me.
>>>>>> rsync won't check whole files unless you give the -c flag.
>>>>>> Otherwise, it just compares metadata. I don't know if that's also
>>>>>> the case with rdiff-backup, but I assume so.
>>>>> So I need to know how rdiff-backup compares data by default. If it
>>>>> is by size and mod-time it will not be so painful, but it will
>>>>> still download changed files to generate the diff.
>>>> Rdiff-backup is designed to be ultra-efficient at this activity. It
>>>> only sends the changes in a file over the wire, not the whole file.
>>>> To do this it uses the librsync library, which is effectively the
>>>> same as rsync. You can read more about the technique at
>>>> http://en.wikipedia.org/wiki/Rsync. rdiff-backup does not use file
>>>> times to determine whether to do backups. It can back up very large
>>>> files with small changes very quickly. Dominic
>>> You don't understand me: rdiff-backup is efficient, but to make a
>>> diff it must read the WHOLE file, and on a remote NFS or sshfs mount
>>> that is SLOW and painful.
>> Sorry, I get it now. But I think rdiff-backup and rsync require a
>> separate computer at the remote end in order to optimise transfers,
>> so if you are just accessing a remote share using sshfs or similar
>> they can still work, but as you realise they will be slow. I guess it
>> is not possible for you to run rdiff-backup (or rsync) at the remote
>> end as well? You could run rdiff-backup locally to create a backup
>> store and then mirror this store to the remote share using rcp. It
>> will still be slow, because rdiff-backup always stores the latest
>> copy of each file in full, so if that copy changes even slightly then
>> the whole file must be transferred by rcp. Duplicity
>> http://duplicity.nongnu.org/ might work better for you, because it
>> uses forward diffs. Also, its archives are secure. Although not
>> directly relevant, I found a page at
>> http://www.psc.edu/networking/projects/hpn-ssh/ which provides a
>> patch to greatly speed up OpenSSH in some situations.
> Duplicity is an interesting project. What do you think about using
> rdiff-backup to create a local backup, for example in /backups, and
> then sending this /backups to the remote server with duplicity? As
> far as I know duplicity encrypts its archives, so I DON'T need encfs,
> dm-crypt or anything else - only ssh access is needed (do I really
> not need duplicity on the remote server?). I just want to be able to
> send _ENCRYPTED_ backups to a remote server where I have only ssh
> access (sftp/scp work).

I have not used duplicity myself; I use rdiff-backup. But I am not sure
you need to run rdiff-backup first: I think duplicity may make its own
local copies of backup increments, so that it can send future
increments without having to access the earlier increments on the
remote share. And yes, duplicity sends encrypted files, so you don't
need other encryption.
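The two-step approach discussed above (a local rdiff-backup store, then a blind mirror to the remote share) might look roughly like this; the paths and the `user@remotehost` address are placeholders, not details from the thread:

```shell
# Step 1: back up locally. rdiff-backup reads files from the fast local
# disk, so no whole-file reads have to cross the slow link.
rdiff-backup /home/piotr /backups

# Step 2: mirror the finished store to the remote share (scp shown here;
# Dominic suggests rcp). Note that any changed file in the store is
# retransmitted in full at this step.
scp -r /backups user@remotehost:/srv/backups

# The duplicity alternative: GPG-encrypted forward-diff volumes pushed
# over sftp, so only ssh access is needed on the remote side.
duplicity /home/piotr sftp://user@remotehost//srv/backups
```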
Because duplicity uses forward diffs you have to keep all backups forever, and if any backup file becomes corrupted you lose all backups made *after* the date of that file. Rdiff-backup uses reverse diffs, so corruption, if it occurs, affects backups *before* the date of the corrupted backup. With rdiff-backup you can delete backups older than a certain date (though in my experience the storage is so efficient that it is not usually worth bothering), which is not possible with duplicity.
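The pruning mentioned here is rdiff-backup's `--remove-older-than` option; a minimal sketch, where the path and the two-week retention period are placeholders:

```shell
# Drop increments older than two weeks from a local rdiff-backup store.
# Time formats include e.g. 12h, 7D, 2W, 6M, 1Y.
# --force is required when more than one increment would be removed.
rdiff-backup --force --remove-older-than 2W /backups
```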
Still it sounds like duplicity would better suit your needs.