
Re: [Duplicity-talk] duplicity verification and corrupt backups


From: Rob Verduijn
Subject: Re: [Duplicity-talk] duplicity verification and corrupt backups
Date: Mon, 22 Aug 2011 13:00:55 +0200

Hi all,

Sorry for the late reply, I've been away for a long weekend.

OK, thanks for all the great feedback. Here's my response to the proposed solutions:

A) split the backup into small parts that are not backed up that often
This introduces the risk that I accidentally forget about that one important file, leaving me thinking I've got good backups only to find out I don't at the worst possible moment.
So I would have to add some serious overhead to keep track of it all (which is always prone to errors) and still not sleep comfortably. Even if I were to implement this solution and keep all the static and non-static data sorted, I'm human and I make mistakes; there will always be that one important file that slips through.
In my experience as a sysop I have learned that KISS is highly advisable as a working method wherever possible, because complexity only increases the chance of failure.
(always remember, failure is not an option, it's always included)

So even though this is a workable alternative, I'll pass on this one.

B) do what lots of people with slow upload channels do: run the duplicity backup to a local file:// target and rsync or upload it with the software of your preference to the remote site.
This sounds like a good candidate for a solution.
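
A minimal sketch of how I picture B (the paths, hostname and GPG key ID are made-up placeholders, not my real setup):

    # 1) back up to a local staging directory over the file:// backend
    duplicity --encrypt-key DEADBEEF /home file:///mnt/backupstaging/duplicity

    # 2) push the finished volumes to the remote site at whatever speed the
    #    line allows; --partial lets an interrupted transfer pick up again
    rsync -av --partial /mnt/backupstaging/duplicity/ \
        backupuser@remote.example.org:/srv/backups/duplicity/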

C) take filesystem snapshots (I use LVM on Linux), then back up from the snapshots.
I know about snapshots and LVM, and this is a good way to do backups to a local target, but as you already said, it does not deal with, say, volume 365 out of 2253 being corrupted during the backup.
And I know a long-running backup is not a recommended exercise, but avoiding that would require using solution A, with the added risk of accidentally forgetting about that one non-static file among the static files.
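
Just to check that I understand the snapshot approach, I picture it roughly like this (volume group, sizes and paths are invented; the --rename trick is the one Ed describes in the quoted mail below):

    # create and mount a short-lived snapshot of the home LV
    lvcreate --snapshot --size 2G --name homesnap /dev/vg0/home
    mkdir -p /mnt/snap
    mount -o ro /dev/vg0/homesnap /mnt/snap

    # back up from the snapshot, recording paths as if they came from /home
    duplicity --rename /mnt/snap /home \
        /mnt/snap file:///mnt/backupstaging/duplicity

    # tear the snapshot down again
    umount /mnt/snap
    lvremove -f /dev/vg0/homesnap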

Ed: Do you verify your backups from time to time?
Are you telling me there are people who don't do that after each backup run? ;-)
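
For what it's worth, this is roughly what I run (URL and source directory are placeholders):

    TARGET=sftp://backupuser@remote.example.org/backups/home

    duplicity /home "$TARGET"
    # compare the backup on the remote against the live files
    duplicity verify "$TARGET" /home || echo "verify FAILED, do not trust this backup"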


Is there a way to make duplicity verify each volume after sending it, and upload it again if it fails?
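
To make the question concrete: with option B I could at least approximate this myself by letting rsync compare checksums of the local volumes against the uploaded copies and re-send anything that differs (rough sketch, paths invented):

    LOCAL=/mnt/backupstaging/duplicity
    REMOTE=backupuser@remote.example.org:/srv/backups/duplicity

    # --checksum re-transfers any volume whose content differs from the
    # remote copy, which should catch volumes corrupted during the upload
    rsync -av --checksum "$LOCAL/" "$REMOTE/"

But a per-volume check built into duplicity itself would obviously be nicer.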

Regards
Rob Verduijn



2011/8/22 <address@hidden>
On 22.08.2011 04:25, Ed Blackman wrote:
> On Wed, Aug 17, 2011 at 06:15:26PM +0200, address@hidden wrote:
>> Good to know. But seriously: a slim line also limits the throughput, and therefore the amount of data you can push through it in a given timeframe. Doing a full backup over a timeframe of more than a day is challenging at best. I would not advise it.
>>
>> Rather
>>
>> A) split the backup into small parts that are not backed up that often
>> or
>> B) do what lots of people with slow upload channels do: run the duplicity backup to a local file:// target and rsync or upload it with the software of your preference to the remote site.
>
> or
> C) take filesystem snapshots (I use LVM on Linux), then back up from the snapshots.
>
> The advantage of snapshots over option B is that the snapshots are created in a matter of seconds, and so represent a much more consistent view of the system than even a quick backup to a local file:// target.
>
> The disadvantage is that there's a significant scripting overhead.  Not only setting up and tearing down the snapshots, but also just interacting with duplicity.  "--rename $snapshotroot /" gets you most of the way, and it wouldn't be an option without it, but you also have to change all the --includes and --excludes (including filelists) to be relative to the root of the snapshot.
>
> But in the end, it works.  Some of my full backups take 10 days to "trickle" up to Amazon S3, but my script creates the snapshot for it, and all the incrementals are blocked while the full backup completes.
>

Still, a line reset or something else interrupting duplicity mid-upload significantly raises the probability of a resume going wrong, or of corrupt files on the backend in general. I definitely would not advise having duplicity run that long.

Ed: Do you verify your backups from time to time?

ede/duply.net

_______________________________________________
Duplicity-talk mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/duplicity-talk

