[Duplicity-talk] [patch] massive performance fix for large volume sizes
From: Peter Schuller
Subject: [Duplicity-talk] [patch] massive performance fix for large volume sizes
Date: Thu, 6 Sep 2007 00:03:11 +0200
User-agent: Mutt/1.5.16 (2007-06-09)
So I noticed that backup speed went *WAY* downhill with large volume
sizes. It turns out a large volume size translates directly into
larger read() calls on the underlying file objects, and 100 MB read()
calls are not handled efficiently by Python.
Performance metrics on one of my test cases:
volsize 5   w/o patch: ~ 18 seconds CPU
volsize 5   w/  patch: ~ 13 seconds CPU
volsize 100 w/o patch: ~156 seconds CPU
volsize 100 w/  patch: ~ 13 seconds CPU
The patch is trivial and is attached. I just went with a 64 kb size
for each read() because it is a generally sound value; one could
benchmark to determine an optimal value, but for now 64 kb will at
least give sane performance ;)
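For anyone curious, the idea is simply to cap each read() at a fixed
block size and loop, instead of issuing one huge read() per volume. A
minimal sketch of the approach (the helper name read_capped is mine,
not from the patch; 64 kb matches the value chosen above):

```python
import io

BLOCKSIZE = 64 * 1024  # 64 kb cap per read(), as in the patch

def read_capped(fileobj, length):
    """Read up to `length` bytes from fileobj, in BLOCKSIZE chunks.

    Many small reads avoid the cost Python incurs when a single
    read() has to assemble one enormous buffer (e.g. 100 MB).
    """
    chunks = []
    remaining = length
    while remaining > 0:
        data = fileobj.read(min(BLOCKSIZE, remaining))
        if not data:  # EOF reached before `length` bytes
            break
        chunks.append(data)
        remaining -= len(data)
    return b"".join(chunks)
```

The result is byte-for-byte identical to a single large read(); only
the size of each underlying read() call changes.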
--
/ Peter Schuller
PGP userID: 0xE9758B7D or 'Peter Schuller <address@hidden>'
Key retrieval: Send an E-Mail to address@hidden
E-Mail: address@hidden Web: http://www.scode.org
duplicity-read-buf-cap.diff
Description: Text Data