From: tink
Subject: Re: [Duplicity-talk] Performance issues w/ S3
Date: Wed, 24 Jul 2024 10:46:43 +1200
Hi Tink,
On 23.07.24 at 04:13, tink via Duplicity-talk wrote:
> On Mon, 22 Jul 2024 at 21:57, Thomas Laubrock via Duplicity-talk <duplicity-talk@nongnu.org> wrote:
>> On 21.07.24 at 23:21, tink via Duplicity-talk wrote:
>>> And without compression and encryption the local version takes 55 seconds.
>> The compression and encryption are done by feeding the data through the GnuPG binary installed on your system.
>> I suggest the following tests:
>> - use `gpg` to encrypt a file manually and check the timing.
> Encrypting a 10GB file (using compression) with GPG took 6m34s (again reading from rust, writing to SSD). gpg with no compression takes 37s.

OK, there is not much duplicity can do here, as it relies on gpg. Maybe you have some unusual gpg setting on your system. Working with a gpg key, as opposed to a symmetric passphrase, may also be an advantage.
IMHO gpg itself has no option to run on multiple cores.
Because of the architecture of duplicity volumes must be created in sequence.
Long story short: if you are able to speed up gpg, duplicity will gain the same advantage.
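To compare gpg timings directly, something like the following sketch can be used (paths and the passphrase are placeholders; `--pinentry-mode loopback` assumes gpg 2.x):

```shell
# Create a test file; scale the size up to match your real volumes.
dd if=/dev/urandom of=/tmp/gpgtest bs=1M count=64

# Symmetric encryption with gpg's default compression
time gpg --batch --yes --pinentry-mode loopback --passphrase test \
    -c -o /tmp/gpgtest.gpg /tmp/gpgtest

# Same input, with compression disabled
time gpg --batch --yes --pinentry-mode loopback --passphrase test \
    -c --compress-algo none -o /tmp/gpgtest-nc.gpg /tmp/gpgtest
```

If the uncompressed run is much faster for your (already compressed?) data, duplicity can pass that setting through with `--gpg-options '--compress-algo none'`.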
Upload to AWS does not seem to be your bottleneck, but if you like, try `--concurrency=X`. In my environment, volume creation was fast enough to feed 2-3 parallel uploads. Keep in mind that parallel uploads will quickly saturate other limits, such as network bandwidth or server-side limits.
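As a sketch, a concurrent-upload invocation might look like this (the bucket name and source path are hypothetical, and AWS credentials are assumed to be available in the environment):

```shell
# Build volumes sequentially, but keep up to 3 uploads in flight at once.
duplicity --concurrency=3 \
    /data boto3+s3://my-bucket/duplicity-backups
```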