From: roland
Subject: [rdiff-backup-users] rdiff-backup reliability and lzop compression
Date: Fri, 27 Jan 2006 03:01:53 +0100
Hello!

I'm new to rdiff-backup and want to mention two things.

The following isn't meant as criticism, nor as "I know better...". rdiff-backup just looks rather promising, and I don't know of any other application (are there any?) that can do incremental backups with a binary diff - so it is very efficient at saving space and a really neat utility! Cool stuff - thanks for making it!
First, I'd be happy to know how "reliable" rdiff-backup is in general.

Is my data really _safe_?
Is this ready for the enterprise?
Any "real world" experiences? ("...I'm backing up 2 TB daily and never had a problem...")
I'm somewhat in doubt, because I tried rdiff-backup some time ago and had some problems (I can't describe them exactly anymore - it's been a while), and now, trying it again, I'm running into problems with large files.

I posted a bug report for this at http://savannah.nongnu.org/bugs/?func=detailitem&item_id=15539
Now the more interesting part:

Regarding disk space, one idea comes to mind: what about storing _all_ of the backup data compressed and adding a layer of "realtime compression/decompression"? That is, instead of only gzipping the data in the rdiff-backup-data subdir, why not compress the data in the destination_directory, too?

I'm thinking of LZO, which is a sort of realtime compression library. There is also a Python binding for it!
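Just to illustrate the idea, here is a minimal sketch (not actual rdiff-backup code) of what such a compress-on-write / decompress-on-read layer for the destination directory could look like. It assumes the python-lzo bindings, which as far as I know expose lzo.compress() and lzo.decompress(); the block format and the function names are made up for this example.

import struct
import lzo

BLOCK = 256 * 1024  # compress in fixed-size blocks so large files stream through

def copy_compressed(src_path, dst_path):
    """Write src_path to dst_path as a stream of LZO-compressed blocks."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        while True:
            block = src.read(BLOCK)
            if not block:
                break
            comp = lzo.compress(block)
            # store the compressed length first so the reader knows how much to read
            dst.write(struct.pack(">I", len(comp)))
            dst.write(comp)

def read_decompressed(dst_path):
    """Yield the original data back, block by block."""
    with open(dst_path, "rb") as dst:
        while True:
            header = dst.read(4)
            if not header:
                break
            (size,) = struct.unpack(">I", header)
            yield lzo.decompress(dst.read(size))

A real implementation would of course also have to deal with metadata, hard links and so on; the point is only that the read side can stay transparent to the rest of the program.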
Look at this little example:

  time cp test.dat test2.dat

  real    2m39.438s
  user    0m0.163s
  sys     0m28.336s

  time lzop -c test.dat >test.dat.lzo

  real    2m39.438s -> sorry, correct value:
  real    2m27.205s
  user    1m5.725s
  sys     0m20.681s

You can see that copying the data is slower than writing it out in compressed form (OK, compressing needs some more CPU...).
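If anybody wants to reproduce the comparison, something like the following rough sketch should do; it just times a plain copy against piping the file through lzop (it assumes the lzop binary is installed and that a large test.dat exists).

import shutil
import subprocess
import time

SRC = "test.dat"  # some large test file

def timed(label, func):
    start = time.time()
    func()
    print("%-15s %.1f s" % (label, time.time() - start))

# plain copy, like "cp test.dat test2.dat"
timed("plain copy", lambda: shutil.copyfile(SRC, "test2.dat"))

# compressed copy, like "lzop -c test.dat >test.dat.lzo"
def lzop_copy():
    with open(SRC + ".lzo", "wb") as out:
        subprocess.check_call(["lzop", "-c", SRC], stdout=out)

timed("lzop -c", lzop_copy)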
Adding LZO support could probably save both space and time during backups.

I'm not sure, but maybe someone finds this interesting and would like to spend some thoughts on it?
Regards,
roland
(sysadmin)