
Re: [Duplicity-talk] Does duplicity resume?


From: Kenneth Loafman
Subject: Re: [Duplicity-talk] Does duplicity resume?
Date: Thu, 11 Dec 2008 06:58:57 -0600
User-agent: Thunderbird 2.0.0.18 (X11/20081125)

Could work either way.  If we kept the checkpoint files on the local
machine, and resumed from that machine, overhead would be reduced.

A possibly better way would be to checkpoint after every N volumes, making
sure to complete the current input file first.  Could just dump the state,
the current manifest, and the current sigtar into temp files.  Resume could
then be done from the last saved state.
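Roughly, the dump could look something like the sketch below; the names
(checkpoint_dir, the pickle layout, the .part suffixes) are made up for
illustration, not anything duplicity currently has:

import os
import pickle
import shutil

def save_checkpoint(state, manifest_path, sigtar_path, checkpoint_dir,
                    volume_index):
    """Dump the in-memory state plus copies of the current manifest and
    sigtar so an interrupted run can later be resumed from this point."""
    if not os.path.isdir(checkpoint_dir):
        os.makedirs(checkpoint_dir)
    with open(os.path.join(checkpoint_dir, "state.pickle"), "wb") as f:
        pickle.dump({"last_volume": volume_index, "state": state}, f)
    shutil.copy2(manifest_path, os.path.join(checkpoint_dir, "manifest.part"))
    shutil.copy2(sigtar_path, os.path.join(checkpoint_dir, "sigtar.part"))

def maybe_checkpoint(state, manifest_path, sigtar_path, checkpoint_dir,
                     volume_index, current_file_done, every_n_volumes=10):
    # Checkpoint only on a volume boundary, and only once the current
    # input file has been completed, per the scheme above.
    if current_file_done and volume_index % every_n_volumes == 0:
        save_checkpoint(state, manifest_path, sigtar_path,
                        checkpoint_dir, volume_index)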

Hmmm, this is sounding plausible.  But, I need more coffee... there's
entirely too much blood in my caffeine stream at the moment.

...Ken

address@hidden wrote:
> I was actually thinking more of an amount of data, say every 50MB or so.
> ... ede
> -- 
> 
>> That would not be true resumability, but checkpointing (I know, a nit).
>>  You are correct.  That could be done every hour or so and would make
>> restart easier.
>>
>> ...Thanks,
>> ...Ken
>>
>> address@hidden wrote:
>>  
>>> In case resumability gets implemented someday:
>>>
>>> A resumed backup can be seen as a string of incrementals: the first
>>> full run was interrupted, and now only the missing (or since-changed)
>>> parts need to be transferred.
>>> Viewed from this angle, there seems to be a fairly easy route to an
>>> implementation.
>>> At specific control points a temporary manifest (it holds the file
>>> metadata list, right?) has to be uploaded. It indicates that a backup
>>> was underway and which data has already been uploaded. The next backup
>>> run finds it and can resume from the last manifest up to a final
>>> manifest.
>>>
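A resume check along those lines might start a run roughly like this; the
remote listing and the ".manifest.part" / ".volN.difftar" naming are
assumptions for illustration, not duplicity's actual file layout:

import re

def find_resume_point(remote_filenames):
    """Look for a temporary (partial) manifest in the remote listing and
    work out which volumes were already uploaded before the interruption."""
    partial = [f for f in remote_filenames if f.endswith(".manifest.part")]
    if not partial:
        return None  # nothing to resume; start a fresh backup
    uploaded = set()
    for name in remote_filenames:
        m = re.search(r"\.vol(\d+)\.difftar", name)
        if m:
            uploaded.add(int(m.group(1)))
    # Resume after the highest volume seen; anything later is redone.
    return max(uploaded) if uploaded else 0

The backup loop would then skip source data already covered by those
volumes and keep extending the partial manifest instead of writing a new
one.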
>>> Hope this makes sense... I just wanted to write it down before I forget
>>> it. Regards, ede
>>> -- 
>>>
>>>    
>>>> Hello
>>>>
>>>> Kenneth Loafman wrote:
>>>>  
>>>>      
>>>>> Duplicity does not have a resume capability.  The suggested route is
>>>>> to subdivide the backup into manageable chunks, said chunks being
>>>>> whatever size you are willing to redo.  My general approach is to
>>>>> make each backup less than about 4 hours.
>>>>>
>>>>>             
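In practice that subdivision can just be a small wrapper that runs one
duplicity job per top-level directory; the source path and target URL
below are placeholders:

import os
import subprocess

SOURCE = "/home"
TARGET = "ftp://user@backuphost/backups"

# One duplicity run per top-level directory keeps each job small enough
# that redoing an interrupted one is tolerable.
for entry in sorted(os.listdir(SOURCE)):
    path = os.path.join(SOURCE, entry)
    if os.path.isdir(path):
        subprocess.check_call(["duplicity", path, TARGET + "/" + entry])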
>>>> FWIW, my monthly full backup takes about 10 hours (for 185GB of data)
>>>> and I have not had any major problems so far (*knocking on wood*); the
>>>> only difference is that I use FTP rather than SCP...
>>>>
>>>> a+
>>>> Nicolas



