From: 4qgo8vv02
Subject: Re: [Duplicity-talk] restoring into sparsebundle fails with "No space left on device"
Date: Thu, 3 Jan 2008 18:40:08 -0800
On Jan 3, 2008 12:42 PM, Darik Horn wrote:
> > Is it possible that duplicity is creating 100GB of temporary
> > files during the restore?
>
> Yes, I've seen this behavior on Linux systems too. It is not an
> incompatibility with sparse bundles.
Thanks! I'm glad to know this.
> Duplicity removes its temporary files but the file handles are not
> actually closed until the Python process exits, which means that you
> need twice the backup size available in /tmp to do the restore.
Is this something that can/should be fixed? I'm willing to write up a
patch if you can point me in the right direction.
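For reference, the effect Darik describes is a general Unix behavior that can be reproduced with a minimal Python sketch (the 1024-byte file below is just an illustration, not duplicity's actual temp-file handling):

```python
import os
import tempfile

# Mimic the behavior described above: the temp file is unlinked, but
# the open handle keeps its blocks allocated on disk until the handle
# is closed (or the process exits).
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 1024)

os.remove(path)                      # directory entry is gone...
assert not os.path.exists(path)      # ...so the file looks deleted,
size_still_held = os.fstat(fd).st_size
print(size_still_held)               # ...but its data still occupies disk
os.close(fd)                         # only now is the space reclaimed
```

So a fix would presumably just mean closing each temp-file handle as soon as duplicity is done with it, rather than relying on process exit.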
OK, this isn't cool. I've been restoring for 6 hours now and
duplicity has recreated only 1.3G. It seems to be stuck while trying
to restore a 14G file. I can hear my drive grinding, and see log
reports of GPG decrypting data, but I don't see incremental progress
on the 14G file anywhere (please tell me it's not all in virtual
memory!), and I am watching the free space on my drive slowly go to
zero.
> The /tmp directory may seem empty, but check the output of `lsof /tmp`
> while duplicity is running.
I tried 'lsof /tmp/duplicity-ATV1_9-tempdir/', but it didn't produce
much of interest:

bash-3.2# lsof /tmp/duplicity-ATV1_9-tempdir/
COMMAND   PID USER   FD   TYPE DEVICE SIZE/OFF    NODE NAME
bash    55340 root  cwd    DIR   14,2      850 1475866 /private/tmp/duplicity-ATV1_9-tempdir
lsof    55600 root  cwd    DIR   14,2      850 1475866 /private/tmp/duplicity-ATV1_9-tempdir
lsof    55601 root  cwd    DIR   14,2      850 1475866 /private/tmp/duplicity-ATV1_9-tempdir
I've successfully restored all my data (about 12G) with duplicity
before, back when the backup did not include the 14G file. I suspect
the large file is what's gumming up the works. My hard disk is only
150G, so I don't have much more than 100G to spare for temporary files
(and I don't think I should need that much).
I need to get my data back ASAP, so I'm going to try a bunch of
restores by hand using --file-to-restore in order to restore as much
as I can while excluding the 14G file.
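In case it helps anyone else, here's roughly what I plan to run. The paths and the backup URL are placeholders for my real ones, and the loop only echoes each command so I can review the list before actually running anything:

```shell
# Hypothetical top-level paths; substitute the real ones.
SOURCE_URL="scp://user@host/backups"
TARGET="/Volumes/restore"

for path in Documents Music Photos; do
    echo duplicity restore --file-to-restore "$path" \
        "$SOURCE_URL" "$TARGET/$path"
done
```

Dropping the `echo` would perform the restores one path at a time, skipping whatever isn't listed (in my case, the 14G file).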
Has anyone else had problems restoring large files? Is this a failure
case for duplicity? I love duplicity and want it to be my rock-solid
backup solution. I speak Python and am willing to write code to solve
this problem if someone can confirm there is a bug and help me to
understand what's going wrong.
Thanks,
David