Re: [Duplicity-talk] Glacier, 2018 edition


From: Dave Cottingham
Subject: Re: [Duplicity-talk] Glacier, 2018 edition
Date: Fri, 30 Mar 2018 15:13:57 -0400 (EDT)


On March 30, 2018 at 2:14 PM Oliver Cole via Duplicity-talk wrote:


Hi,

I see discussion about Glacier in the archives, but nothing recently.

What's the current state of Duplicity + Glacier? Is anyone doing this
for real?

Can duplicity manage the migration to Glacier for me, or do I need to
set up bucket policies myself?

I'm somewhat new to duplicity - would I be configuring it to do
incremental backups, and then a full backup every 100 days or so? Would
that then result in the previous full+incrementals being deleted
automatically?

What if I wanted to go 100% incremental (after the initial full)?
Obviously I would be at risk of one corrupt backup breaking all the
subsequent incrementals, but would that be a possible route if I was
happy with the risk?

I'm not quite up on the duplicity terminology - but what are the
situations where duplicity would need to retrieve metadata from Glacier
(and hence incur some significant costs or time delay)? Is there a way
to keep the metadata in S3 and only migrate the raw data to Glacier?

I've read the warning:
https://medium.com/@karppinen/how-i-ended-up-paying-150-for-a-single-60gb-download-from-amazon-glacier-6cb77b288c3e
It was updated to say that 'The “gotcha” pricing described herein is no
longer in effect, replaced by simple per-GB retrieval fees.'
Are people generally happy with this now?

Oli


I used to do this, but that was some years ago. So with the caveat that Glacier may have changed, duplicity may have changed, and I may be remembering it wrong...


I ran full plus incremental backups to S3 (a full every 90 days), and used AWS lifecycle rules to transition the data files to Glacier. Back in those days duplicity got a patch to the way files are named, so that those rules could distinguish the metadata from the data volumes -- because, as you say, you want to be able to read the metadata, but you only need the data volumes for a restore. (I think the only time you need to read metadata during a backup is if something has happened to the local cache -- but I found that did happen from time to time.) The AWS rules match on a prefix of the object name, so the patch changed the file names to make that possible.
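If it helps, here is a rough sketch of the kind of setup I mean, written with Python and boto3 (you could just as well click it together in the console or use the aws CLI). It assumes you run duplicity with its file-prefix options -- if I remember right, the relevant one is --file-prefix-archive -- so that the data volumes share a common prefix; the bucket name and prefix below are placeholders, not anything duplicity requires.

    # Sketch: transition only duplicity's data volumes to Glacier, leaving the
    # metadata (manifests and signatures) in regular S3 so duplicity can still
    # read it without a thaw. Assumes backups were made with something like
    # --file-prefix-archive datavol- so the volumes share the "datavol-" prefix.
    # Bucket name and key prefix are placeholders.
    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="my-backup-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "duplicity-data-volumes-to-glacier",
                    "Status": "Enabled",
                    # Only the data volumes match this prefix; manifests and
                    # signatures keep their default names and stay in S3.
                    "Filter": {"Prefix": "host1/datavol-"},
                    "Transitions": [{"Days": 1, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )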


Duplicity may be able to handle the migration itself now -- though setting up the lifecycle rule on the AWS side was not at all difficult.


Duplicity does not (did not?) automatically delete old full/incremental sets.
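So if you want old chains to go away, you have to ask for it yourself. A minimal sketch of what I mean, as a little wrapper you could run from cron (the target URL and retention count are placeholders; running the same duplicity command by hand works just as well):

    # Sketch: delete everything except the newest two full-backup chains.
    # duplicity's remove-all-but-n-full command does the real work; it only
    # deletes anything when --force is given. The target URL is a placeholder.
    import subprocess

    TARGET = "s3://my-backup-bucket/host1"

    subprocess.run(
        ["duplicity", "remove-all-but-n-full", "2", "--force", TARGET],
        check=True,
    )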


Not related to Glacier -- you should think twice about never starting a new full backup. The problem is not just the risk that one corrupt incremental in the middle breaks the chain -- it's that a restore means applying every incremental in turn. If you have a crash five years from now, you'll be restoring from 1500 backups. I understand the attraction -- those full backups are pretty time consuming. This is why I have switched to deduplicating backups. (Not that I have that fully under control either.)


Hope this helps.


 - Dave Cottingham


