
Re: [Duplicity-talk] Problems with restore of LARGE backup from S3


From: edgar.soldin
Subject: Re: [Duplicity-talk] Problems with restore of LARGE backup from S3
Date: Mon, 14 May 2012 16:24:45 +0200
User-agent: Mozilla/5.0 (Windows NT 5.1; rv:12.0) Gecko/20120428 Thunderbird/12.0.1

On 14.05.2012 16:17, Roy Badami wrote:
> On 23/04/2012 15:32, address@hidden wrote:
>> On 23.04.2012 16:26, Roy Badami wrote:
>>> I'm trying to perform a test restore of the S3 backups I've been doing of a 
>>> LARGE fileserver (approx 700GB, approx 100,000 files) which comes to 236 
>>> volumes (with volsize set to 2000) in a full backup.  Duplicity version is 
>>> duplicity-0.6.17-1.el6.x86_64
>>>
>>> I've made two attempts to perform a full test restore to another machine, 
>>> and in both cases it eventually gave up.  It got to volume 127 and then 
>>> kept failing with
>>> Download s3+http://...//duplicity-full.20120304T000513Z.vol127.difftar.gpg 
>>> failed (attempt #9993, reason: IncompleteRead: IncompleteRead(0 bytes read, 
>>> 1622573204 more expected))
>>>
>>> Then eventually, when it hit the retry limit of 9999, it printed the 
>>> following, and is now hung doing nothing:
>>>
>>> BackendException: Error downloading 
>>> s3+http://...//duplicity-full.20120304T000513Z.vol127.difftar.gpg
>>>
>>> A couple of other things I notice.  The duplicity process has grown to 
>>> almost 4GB - is this expected? (however, the machine has lots of RAM and 
>>> swap, so this in itself shouldn't be a problem)
>>>
>>> root     31336  9.1  1.8 3801340 613756 pts/2  Sl+  Apr18 673:44 
>>> /usr/bin/python /usr/bin/duplicity --s3-use-new-style --num-retries 9999 
>>> --tempdir /opt/restore-tmp --archive-dir /opt/restore-archive restore 
>>> s3+http://.../ /opt/restore-test
>>>
>>> Also, there are lots of gpg processes running (presumably left behind).  
>>> However, there are nowhere near one per volume (38 gpg processes, and we'd 
>>> processed 126 volumes before we hit the problem).
>>>
>>> Any thoughts?  Anyone else successfully using duplicity on filesystems this 
>>> large?
>>>
>> at first look it is a downloading issue. could you try to download your 
>> backup chain to a local drive and restore from there? (see the sketch below 
>> the quoted thread)
>>
>> second idea is that we had a memory leak between 0.6.13-17. so please try 
>> 0.6.18 and see if that helps. install from tarball as described here under 
>> TIP
>> http://duply.net/?title=Duply-documentation
>> if your distro does not have the latest.
>>
> 
> Thanks, and sorry for the delayed response.  A downloading issue was my first 
> thought too, which is why I tried again with num-retries set very high.  The 
> fact that it failed even with num-retries set that high makes me suspect there 
> may be another problem (this is over an enterprise network connection - not 
> DSL - and I saw no other evidence of a network outage).
> 
> Thanks for your suggestions, I will give them a go and report back.
> 
> Incidentally, is the large number of gpg processes that I was seeing considered 
> normal, or is that perhaps a sign of some problem somewhere?
> 

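re the local restore suggestion quoted above, a rough sketch of what I mean 
(YOUR-BUCKET/YOUR-PREFIX and the local paths are placeholders, and I'm assuming 
s3cmd or a comparable tool is available to pull the volumes down):

  # copy the whole backup chain to a local drive first
  s3cmd sync s3://YOUR-BUCKET/YOUR-PREFIX/ /opt/local-copy/
  # then restore from the local copy, taking the S3 backend out of the picture
  duplicity restore --tempdir /opt/restore-tmp file:///opt/local-copy /opt/restore-test

if the restore runs through from the local copy, the failures are on the 
download side.
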
on a side note:
duplicity needs boto for s3 access. make sure you have the latest and greatest 
installed.
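
a quick way to see which boto the duplicity python actually picks up (a minimal 
check, assuming duplicity runs under /usr/bin/python as in your ps output):

  /usr/bin/python -c "import boto; print boto.__version__"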

..ede/duply.net


