From: Jakob Bohm
Subject: Re: [Duplicity-talk] Weird error message after incremental backup of large drive with a bunch of changed files
Date: Wed, 15 Mar 2023 04:40:05 +0100
User-agent: Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:5.2) Goanna/20221030 Epyrus/2.0.0

On 2023-03-14 23:48, edgar.soldin--- via Duplicity-talk wrote:

On 14.03.2023 21:54, Jakob Bohm via Duplicity-talk wrote:
Dear group,

hey Jakob,

I have set up some scripts to do various parts of a full system backup
via duplicity to a geographically close S3 bucket (AWS Stockholm),
however for the largest drive, I occasionally experience hangs/errors
near the end of each backup, with the progress display wobbling between
"stalled" and less than 30 minutes left (this time I observed as low as
16 minutes at one point).

Then a day into the stall/short time phase, I received the following
error message (redacted bucket name, mountpoints etc.):

Attempt of put Nr. 1 failed. S3UploadFailedError: Failed to upload
/duplicitypart/tmp/commonprefix-partname/duplicity-g3cvly63-tempdir/duplicity_temp.1/commonprefix-partname_duplicity-inc.20230212T043018Z.to.20230311T203857Z.vol1501.difftar.gpg.vol000+200.par2 to bucketname/partname/commonprefix-partname_duplicity-inc.20230212T043018Z.to.20230311T203857Z.vol1501.difftar.gpg.vol000+200.par2: An error occurred (RequestTimeTooSkewed) when calling the CreateMultipartUpload operation: The difference between the request time and the current time is too large.

sounds like S3 does not like for the upload to take that long?
Yeah, but the problem is what makes duplicity take so long to
upload that file, which is why I tried identifying the file
size with ls after the failure.
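
For reference, RequestTimeTooSkewed means the timestamp signed into the
request differed from AWS server time by more than the allowed window
(about 15 minutes), so either the local clock has drifted or the request
was signed long before it finally reached S3 after the stall.  A quick
sanity check of the clock against the endpoint (a sketch; the hostname is
the eu-north-1 endpoint seen in the netstat output further down):

   # compare local UTC time with the S3 endpoint's Date header
   date -u
   curl -sI https://s3.eu-north-1.amazonaws.com/ | grep -i '^date:'
   # on Debian, also confirm the clock is NTP-synchronised
   timedatectl status | grep -i synchronized

If the two times agree, the skew is more likely introduced by the long
stall between signing and the request reaching S3.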

anyway, to find out what is going on we actually need you to write the full console log to a file and post it somewhere. parse it before upload for sensitive information you don't want to share and obfuscate if needed.
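e.g. something along these lines (a sketch; swap in the real names you
want hidden, represented here by the placeholders already used in this
mail):

   # replace real names with placeholders before sharing the log
   sed -e 's/REAL-BUCKET-NAME/bucketname/g' \
       -e 's/REAL-PREFIX/commonprefix-partname/g' \
       console.log > console_redacted.log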
The log is already in a file; the mail contents were extracted as the
likely most relevant parts.  Is verbosity notice not high enough?  Can
the file produced by the --log-file option help?

As I wrote, the failure occurs after days of processing, and not every
time, hence any procedure requiring a retry will take weeks.


we will need at least verbosity info.
you will probably need to disable `--progress` as it does not play well with piping.
I know, but I need it in order to know when it stops doing useful work.
Anyway, I can be pretty good at filtering such logs if need be.
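
A capture run along those lines could look like this (a sketch, reusing
the invocation quoted below with --progress dropped and verbosity raised):

   # same command as quoted below, minus --progress, at info verbosity;
   # "..." stands for the remaining options exactly as in the full command
   duplicity incremental --verbosity info ... 2>&1 \
       | tee /duplicitypart/tmp/commonprefix-partname/console.log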

After the message, there were some small increases in the GB counter in the progress bar.

ls report after killing duplicity:

-rw-r--r-- 1 root root 21336776 Mar 13 18:33 /duplicitypart/tmp/commonprefix-partname/duplicity-g3cvly63-tempdir/duplicity_temp.1/commonprefix-partname_duplicity-inc.20230212T043018Z.to.20230311T203857Z.vol1501.difftar.gpg.vol000+200.par2

sorry, that does not tell us anything.

It tells me that the file was way below the 2G supposedly causing trouble
in that bug report.
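
To rule out oversized volumes across the whole temp dir in one go, a
quick scan could be (a sketch; the 1900M threshold is arbitrary, just
below the suspected 2G limit):

   # list any volume file approaching 2 GB in the duplicity temp dir
   find /duplicitypart/tmp/commonprefix-partname -name '*.difftar.gpg*' \
       -size +1900M -ls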

Currently invoking duplicity 1.2.1 (patched) with command line

duplicity incremental --name commonprefix-partname \
   --archive-dir /duplicitypart/archive/commonprefix-partname \
   --asynchronous-upload \
   --file-prefix commonprefix-partname_ \
   --tempdir /duplicitypart/tmp/commonprefix-partname \
   --verbosity notice \
   --progress \
   --log-file /duplicitypart/tmp/commonprefix-partname/log_20230311T20_38_52.log \
   --gpg-options '--homedir /configdir/.gnupg --compress-algo=bzip2' \
   --encrypt-secret-keyring /configdir/.gnupg/secret.gpg \
   --encrypt-key 1234567890ABCDEF1234567890ABCDEF12345678 \
   --sign-key 1234567890ABCDEF1234567890ABCDEF12345678 \
   --hidden-encrypt-key 1234567890ABCDEF1234567890ABCDEF12345678 \
   --exclude-other-filesystems \
   --full-if-older-than 3M \
   --s3-use-multiprocessing \
   --numeric-owner \
   /partname \
   par2+boto3+s3://bucketname/partname

probably not error relevant, but some notes as per man page http://duplicity.us/stable/duplicity.1.html

1. you mention AWS Stockholm, so you probably need `--s3-endpoint-url` with boto3. see http://duplicity.us/stable/duplicity.1.html#a-note-on-amazon-s3
2. `--s3-use-multiprocessing` does nothing on boto3, multichunk is activated by default
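
for item 1 that would be something like (untested sketch; endpoint and
region guessed from the eu-north-1 connection mentioned below, see the
man page section linked above for the exact options):

   --s3-endpoint-url https://s3.eu-north-1.amazonaws.com \
   --s3-region-name eu-north-1 \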

Unfortunately, whoever wrote the manpage and the bug report comment
trail was really bad at telling the difference between boto2 and boto3.

Since all the previous elements were already uploaded, I strongly
suspect that boto3 identifies the correct endpoint URL using the
appropriate AWS APIs, as this already goes beyond the outdated
assumption that AWS has only two locations worldwide.

netstat during another running backup shows that there is indeed a
connection made to s3-r-w.eu-north-1.amazonaws.com in the proper
AWS region.
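
For reference, the check was essentially this (a representative sketch
of the idea, not the exact invocation; without -n, netstat resolves the
peer hostnames):

   # show established TCP connections, with resolved peer names
   netstat -t | grep -i amazonaws

and the bucket's home region can be cross-checked with awscli, if
installed:

   aws s3api get-bucket-location --bucket bucketname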

OS: Debian GNU/Linux 11.7 (bullseye) with Python 3.9.2, python-boto3 version 1.13.14-1

I suspect a relationship with issue #254, and as you see, I have incorporated some of the workarounds into the command line.

as it dies with the par2 file, i doubt that the problem is the size of your signatures.
The behavior is indistinguishable from that bug, thanks to the complete
lack of useful error and progress messages, which is why I was checking
the file size manually.

What are the appropriate troubleshooting steps?

as said, a proper log file for a start. you can send it personally, if you don't wanna share it with the list.

Enjoy

Jakob
--
Jakob Bohm, CIO, Partner, WiseMo A/S.  https://www.wisemo.com
Transformervej 29, 2860 Søborg, Denmark.  Direct +45 31 13 16 10
This public discussion message is non-binding and may contain errors.
WiseMo - Remote Service Management for PCs, Phones and Embedded



