From: Brandon Simmons
Subject: [Duplicity-talk] Do backup to or restore from S3 get slower as a bucket increases in size?
Date: Sat, 15 Jan 2011 14:49:44 -0500

I've been working on an upgrade to our backup procedure, and am
considering creating a new sub-folder (key prefix) under an S3 bucket
every time we do a full backup. From our logs it looks like backups may
be getting slower as a bucket accumulates backup tar files, but I'm
having trouble testing this theory (I don't want to experiment too much
with our live repository).
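
Concretely, something like this untested sketch is what I have in mind
(the bucket name and source path are placeholders, and it assumes the
s3+http:// URL form that duplicity's boto backend uses):

    #!/usr/bin/env python
    # Untested sketch: run each full backup into its own date-stamped
    # sub-folder (key prefix) so every chain starts in a fresh "folder".
    import subprocess
    from datetime import date

    BUCKET = "mybucket"    # placeholder bucket name
    SOURCE = "/var/data"   # placeholder source directory

    # one prefix per full backup, e.g. full-2011-01-15
    prefix = date.today().strftime("full-%Y-%m-%d")
    target = "s3+http://%s/%s" % (BUCKET, prefix)

    subprocess.check_call(["duplicity", "full", SOURCE, target])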

It seems that if duplicity is using the local archive directory to
store metadata, then it shouldn't matter how many files a bucket
contains, in which case the slowdown I'm seeing comes from some other
factor.
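
One factor I can at least measure directly is S3's LIST operation,
which (as I understand it) duplicity issues through boto to discover
existing backup sets. A rough, untested sketch for timing it as the
bucket grows (again, the bucket name is a placeholder):

    # Untested sketch: time how long a full key listing of the bucket
    # takes, using the same boto library duplicity's S3 backend uses.
    import time
    import boto

    conn = boto.connect_s3()              # credentials from the environment
    bucket = conn.get_bucket("mybucket")  # placeholder bucket name

    start = time.time()
    count = sum(1 for _ in bucket.list()) # iterates (and paginates) all keys
    print("listed %d keys in %.1fs" % (count, time.time() - start))

If that time climbs with the key count, then putting each full backup
under its own prefix would at least bound what any one restore has to
list, since boto can list just the keys under a given prefix.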

Can someone provide an explanation of which operations might be
affected by the number of files in the repository bucket?

And what is the standard practice for backups to S3? Is it generally
considered safe to keep pushing incremental and periodic full backups
to the same S3 bucket forever?

Thanks again,
Brandon Simmons
http://coder.bsimmons.name


