
Re: [Duplicity-talk] File *.gpg was corrupted during upload.


From: edgar . soldin
Subject: Re: [Duplicity-talk] File *.gpg was corrupted during upload.
Date: Sun, 06 Nov 2011 20:03:16 +0100
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:7.0.1) Gecko/20110929 Thunderbird/7.0.1

On 06.11.2011 19:26, Michael Terry wrote:
> On 6 November 2011 13:04,  <address@hidden> wrote:
>> well it is not a problem in our code if the retry is successful. then it is 
>> most probably an issue with the line/connection/backend_software on which we 
>> do not have an influence.
> 
> But the backends have code to check that.  Take the GIO backend.
> There's a blocking call to upload a file.  If there's a problem with
> the line/connection, there will be an error.  If there's not, and all
> the bits go up, GIO will say "all done".
> 
> In such cases as this corruption test, GIO is saying "all done" and
> the file still isn't what we expected.  Which means we uploaded
> something bad or we have some weird subtle bug somewhere.
> 
> I'm not saying retrying is necessarily a bad idea.  It may help in
> these cases.  

I see your point. But considering that duplicity is a backup application, my 
rationale is: why not make this test standard (the overhead is marginal and it 
does not break if a backend doesn't implement it) as an extra safety measure? 
It simply verifies that the uploaded data has the same size as the local 
volume and therefore at least hints that the upload is complete on the backend.
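
To make the idea concrete, here is a rough sketch of what such a check could 
look like. The names (put_with_size_check, query_file_size) are only 
illustrative and not duplicity's actual backend API:

import os


class UploadSizeMismatch(Exception):
    """Raised when the remote size does not match the local volume size."""


def put_with_size_check(backend, source_path, remote_filename):
    """Upload a volume and verify the remote copy has the expected size.

    Backends that cannot report a remote size simply skip the check,
    so nothing breaks for backends that do not implement it.
    """
    local_size = os.path.getsize(source_path)
    backend.put(source_path, remote_filename)

    # Only verify if the backend can tell us the remote size at all.
    query = getattr(backend, "query_file_size", None)
    if query is None:
        return  # backend cannot verify -- behave exactly as before

    remote_size = query(remote_filename)
    if remote_size != local_size:
        raise UploadSizeMismatch(
            "size of %s on backend is %d bytes, expected %d"
            % (remote_filename, remote_size, local_size))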

> Though note that we haven't yet heard reports of someone
> that hit it for its original purpose -- only the person that hit it
> via a new bug in 0.6.16's S3 backend.  It makes me worried that the
> check isn't working for the original cases.

But that proves that the check itself makes sense: it detected a problem, just 
not the one it was initially targeted at. That it hasn't come up for the 
original issue might simply be because that was an S3 hosting-side problem 
which they silently fixed internally already. How would we know?

> 
> It's a matter of whether we feel that getting to the bottom of this
> subtle bug is more or less important than the impact of people seeing
> this error message.

I just think that, now that we have it, we shouldn't see it only as a detector 
for this one problem but use it as a general safety measure. It could 
"safeguard" all uploaded files that way.
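
Building on the sketch above, the same check could then guard every upload 
with a small retry loop; again the function name and retry count are only 
illustrative:

def safeguarded_put(backend, source_path, remote_filename, retries=3):
    """Upload with size verification, retrying on a detected mismatch."""
    last_error = None
    for _attempt in range(retries):
        try:
            put_with_size_check(backend, source_path, remote_filename)
            return  # upload done and (where possible) size-verified
        except UploadSizeMismatch as err:
            last_error = err  # short/corrupted upload -- try again
    raise last_error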

If you see no actual downside, I offer to implement it .. ede/duply.net

> 
> -mt
> 


