From: Aphyr
Subject: [Duplicity-talk] S3 ECONNRESET during restore results in SHA1 hash mismatch
Date: Tue, 3 May 2016 22:15:51 -0500
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Icedove/38.7.0

Hello, all!

I've got a couple of terabytes backed up to S3 via duply. I'm doing a disaster recovery drill and trying to restore that data, but I can't make it more than a few minutes (or, at best, a few hours) without hitting a (recoverable!) S3 network hiccup, which breaks the restore and forces me to start over from scratch. Each time it breaks on a different file, so I know the issue is a network fault, not that the files themselves are corrupt.

For example:

--- Start running command RESTORE at 21:41:53.892 ---
Local and Remote metadata are synchronized, no sync needed.
Last full backup date: Thu Oct 15 11:45:39 2015
Download s3://s3-us-west-2.amazonaws.com/my-bucket/b/duplicity-full.20151015T164539Z.vol26.difftar.gpg failed (attempt #1, reason: SSLError: ('The read operation timed out',))
Download s3://s3-us-west-2.amazonaws.com/my-bucket/b/duplicity-full.20151015T164539Z.vol26.difftar.gpg failed (attempt #2, reason: SSLError: ('The read operation timed out',))
Download s3://s3-us-west-2.amazonaws.com/my-bucket/b/duplicity-full.20151015T164539Z.vol26.difftar.gpg failed (attempt #3, reason: SSLError: ('The read operation timed out',))
Download s3://s3-us-west-2.amazonaws.com/my-bucket/b/duplicity-full.20151015T164539Z.vol26.difftar.gpg failed (attempt #4, reason: error: [Errno 104] Connection reset by peer)
Invalid data - SHA1 hash mismatch for file:
 duplicity-full.20151015T164539Z.vol26.difftar.gpg
 Calculated hash: da39a3ee5e6b4b0d3255bfef95601890afd80709
 Manifest hash: 71d69b04b6ed6aa75b604e4eecff51ab08a24cfe

Sometimes it breaks as early as vol3; other times it gets as far as vol745. (Note that the calculated hash above is the SHA-1 of an empty string, so the interrupted download apparently left a zero-byte file behind to verify.) I've added

DUPL_PARAMS="$DUPL_PARAMS --num-retries=100 "

to my duply config, but it doesn't seem to make a difference.
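
For now, the only workaround I can think of is to rerun the whole restore in a loop until one pass gets all the way through. A rough sketch, assuming duply exits nonzero when a restore fails (the profile name and target path here are placeholders):

# Restart the restore until a pass completes. Wasteful, since each
# retry starts over from vol1, but it needs no extra disk.
until duply myprofile restore /mnt/restore; do
    echo "restore failed; sleeping before retry" >&2
    sleep 60
done

With each run dying at a random volume, though, there's no guarantee a multi-terabyte pass ever finishes.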

I've also seen a suggestion that I download the full S3 archives to a local directory, and restore from there. Sadly, I don't have enough free disk to cache an additional 2+ TB.
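For reference, as I understand it that suggestion looks roughly like the following (bucket prefix and directories are placeholders); it needs scratch space for the full archive set, which is exactly what I'm missing:

# Mirror the remote archives, then restore from the local copy, so
# network hiccups only affect the sync step (which can be rerun and
# will skip files it already has).
aws s3 sync s3://my-bucket/b /mnt/cache/duplicity
duplicity restore file:///mnt/cache/duplicity /mnt/restore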

I think connection-reset errors are always recoverable here; if duply could either resume restores, or just sleep and re-download the volume instead of verifying an incomplete file, I think this would work fine.
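
Concretely, the per-volume behavior I'm hoping for is roughly this: keep re-downloading until the SHA-1 matches the manifest, instead of treating one truncated copy as fatal. (The volume name and hash are from the log above; the bucket prefix, temp path, and sleep interval are placeholders.)

vol=duplicity-full.20151015T164539Z.vol26.difftar.gpg
want=71d69b04b6ed6aa75b604e4eecff51ab08a24cfe
while true; do
    # On a network error, sleep and re-fetch rather than giving up.
    aws s3 cp "s3://my-bucket/b/$vol" "/tmp/$vol" || { sleep 30; continue; }
    got=$(sha1sum "/tmp/$vol" | cut -d ' ' -f 1)
    # Accept the volume only once its hash matches the manifest.
    [ "$got" = "$want" ] && break
    echo "hash mismatch ($got), retrying" >&2
    sleep 30
done

Any suggestions?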

--Kyle


