Re: [Duplicity-talk] duplicity verification and corrupt backups


From: Ed Blackman
Subject: Re: [Duplicity-talk] duplicity verification and corrupt backups
Date: Sun, 21 Aug 2011 22:25:07 -0400
User-agent: Mutt/1.5.20 (2009-06-14)

On Wed, Aug 17, 2011 at 06:15:26PM +0200, address@hidden wrote:
> Good to know. But seriously: a slow line also limits throughput, and
> therefore the amount of data you can push through it in a given
> timeframe. Doing a full backup that takes more than a day is
> challenging at best. I would not advise it.

> Rather:
>
> A) split the backup into smaller parts that are not backed up as often,
> or
> B) do what lots of people with slow upload channels do: run the
> duplicity backup to a local file:// target, then rsync (or upload with
> the software of your preference) to the remote site.

or
C) take filesystem snapshots (I use LVM on Linux), then back up from the snapshots.
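
For option B above, a minimal sketch (the staging directory, paths, and host name here are just placeholders):

  # run the backup against a fast local file:// target first
  duplicity /home file:///var/backups/duplicity/home

  # then trickle the archive volumes to the remote site; --partial
  # lets an interrupted transfer resume instead of starting over
  rsync -av --partial /var/backups/duplicity/home/ remote:backups/home/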

The advantage of snapshots over option B is that the snapshots are created in a matter of seconds, and so represent a much more consistent view of the system than even a quick backup to a local file:// target.

The disadvantage is that there's significant scripting overhead: not only setting up and tearing down the snapshots, but also just interacting with duplicity. "--rename $snapshotroot /" gets you most of the way (this wouldn't be an option at all without it), but you also have to change all the --includes and --excludes (including filelists) to be relative to the root of the snapshot.
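
To make that overhead concrete, a stripped-down version of the snapshot dance looks something like this (the volume group, volume names, and mount point are placeholders, and all error handling is omitted):

  # snapshot the volume so the backup sees a frozen view; the 1G is
  # copy-on-write space for changes made while the backup runs
  lvcreate --snapshot --size 1G --name rootsnap /dev/vg0/root
  mount -o ro /dev/vg0/rootsnap /mnt/snapshot

  # --rename records paths as if they were under / instead of
  # /mnt/snapshot; note the --include is relative to the snapshot root
  duplicity --rename /mnt/snapshot / \
      --include /mnt/snapshot/home --exclude '**' \
      /mnt/snapshot file:///var/backups/duplicity/root

  umount /mnt/snapshot
  lvremove -f /dev/vg0/rootsnap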

But in the end, it works. Some of my full backups take 10 days to "trickle" up to Amazon S3, but my script creates the snapshot for the full and keeps all the incrementals blocked while the full backup completes.
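
One way to get that blocking (there are certainly others) is to take an exclusive flock(1) around every duplicity run, so an incremental that starts while a long full is still uploading simply exits:

  (
    flock -n 9 || exit 1   # another backup holds the lock; bail out
    duplicity incremental /mnt/snapshot file:///var/backups/duplicity/root
  ) 9>/var/lock/duplicity.lock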

--
Ed Blackman


