
Re: [Duplicity-talk] Do backups to or restores from S3 get slower as a bucket increases in size?


From: Timothee BESSET
Subject: Re: [Duplicity-talk] Do backups to or restores from S3 get slower as a bucket increases in size?
Date: Sat, 15 Jan 2011 14:07:25 -0600

Can you give a rough range of how many files you have and how big your
dataset is? I only back up personal stuff to S3 (around 30 GB) and
haven't noticed anything, though I haven't paid particular attention
either, and it sounds like you're pushing up a considerably larger
dataset.

I just push everything to the same bucket and delete old fulls on a
regular basis.
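
A minimal sketch of that same-bucket workflow with the duplicity CLI,
assuming the boto S3 backend; the bucket name and source path are
placeholders, and credentials (plus a GnuPG passphrase, unless you run
with --no-encryption) are taken from the environment:

  export AWS_ACCESS_KEY_ID=...       # placeholder credentials
  export AWS_SECRET_ACCESS_KEY=...
  export PASSPHRASE=...              # for the default GnuPG encryption

  # periodic full, incrementals in between, all against one target URL
  duplicity full /home/ttimo s3+http://example-backup-bucket/home
  duplicity incremental /home/ttimo s3+http://example-backup-bucket/home

  # "delete old fulls": keep the two newest full chains, drop the rest
  # (remove-all-but-n-full only actually deletes when --force is given)
  duplicity remove-all-but-n-full 2 --force s3+http://example-backup-bucket/home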

TTimo

On Sat, Jan 15, 2011 at 1:49 PM, Brandon Simmons
<address@hidden> wrote:
> I've been working on an upgrade to our backup procedure, and am
> considering creating a new sub-folder under an S3 bucket every time we
> do a full backup (see the sketch after this message). From our logs it
> looks like backups may be getting slower as the bucket accumulates
> backup tar files, but I'm having trouble testing this theory (I don't
> want to play around too much with our repository).
>
> It seems that if duplicity is using the archive directory to store
> metadata, it shouldn't matter how many files a bucket contains, in
> which case the slowdown I'm seeing is from some other factor.
>
> Can someone provide an explanation of what operations might be
> affected by the number of files in the repository bucket?
>
> And what is the standard practice for backups to S3? Is it generally
> considered safe to keep pushing incremental and periodic full backups
> to the same S3 bucket forever?
>
> Thanks again,
> Brandon Simmons
> http://coder.bsimmons.name
>
> _______________________________________________
> Duplicity-talk mailing list
> address@hidden
> http://lists.nongnu.org/mailman/listinfo/duplicity-talk
>
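
A sketch of the prefix-per-full idea from the message above, again with
placeholder names. An S3 "sub-folder" is just a key prefix, and
duplicity treats each distinct target URL as an independent repository,
so the incrementals for a chain have to point at the same prefix as
their full:

  # start a new chain under a date-stamped prefix (placeholder names)
  PREFIX="s3+http://example-backup-bucket/full-$(date +%Y-%m-%d)"
  duplicity full /srv/data "$PREFIX"

  # later incrementals must reuse that exact prefix to extend the chain
  duplicity incremental /srv/data "$PREFIX"

  # status and restore now only have to list objects under this prefix
  duplicity collection-status "$PREFIX"

As for why listing matters: S3's LIST operation returns at most 1000
keys per response, so enumerating a repository costs a number of round
trips proportional to its object count; splitting fulls across prefixes
bounds that count per prefix.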


