From: Sabuj Pattanayek
Subject: [rdiff-backup-users] Re: parallelized rdiff-backup running on multiple hosts connected to shared source and destination filesystems and reporting scripts
Date: Thu, 26 Jun 2008 00:20:14 -0500
Hi,
> directories to backup. Each process will open a file called
> rdiff-backup-data/prdiff-backup.running (under various destination
> dirs), then lock the file using advisory locking (to make sure it
> doesn't collide with other rdiff-backup jobs trying to backup to the
> same destination directory), and store in it a string that has the
rdiff-backup throws a fatal error:
$ rdiff-backup src dest2
Fatal Error: It appears that a previous rdiff-backup session with process
id 6775 is still running. If two different rdiff-backup processes write
the same repository simultaneously, data corruption will probably
result. To proceed with regress anyway, rerun rdiff-backup with the
--force option.
if one tries to run multiple rdiff-backup processes simultaneously
against the same destination directory on the same host, but not when
running from multiple hosts into the same destination directory
simultaneously. Perhaps in a future release a check for this could be
implemented using advisory file locking, but for now I'm going to have
to keep this functionality in the wrapper.
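
For reference, a minimal sketch of the kind of per-destination locking
the wrapper does. The lock file name and location and the `hostname -s`
string come from the quoted text above; everything else (using Python's
fcntl.lockf, skipping the destination when the lock is held, the
rdiff-backup invocation) is just one possible way to do it. It assumes
rdiff-backup-data already exists in the destination and that the shared
filesystem propagates POSIX fcntl locks between hosts (flock()-style
locks generally don't travel over NFS, which is why lockf is used here):

#!/usr/bin/env python
# Sketch only: take an advisory lock on
# dest/rdiff-backup-data/prdiff-backup.running so wrapper instances
# on other hosts skip a destination that is already being backed up.
import fcntl, os, socket, subprocess, sys

src, dest = sys.argv[1], sys.argv[2]
lockpath = os.path.join(dest, 'rdiff-backup-data', 'prdiff-backup.running')

lockfile = open(lockpath, 'a+')
try:
    # non-blocking exclusive advisory (POSIX fcntl) lock
    fcntl.lockf(lockfile, fcntl.LOCK_EX | fcntl.LOCK_NB)
except IOError:
    lockfile.seek(0)
    sys.stderr.write('%s already being backed up by %s, skipping\n'
                     % (dest, lockfile.read().strip()))
    sys.exit(0)

# record which host is currently backing up into this directory
# (short hostname, roughly what `hostname -s` prints)
lockfile.seek(0)
lockfile.truncate()
lockfile.write(socket.gethostname().split('.')[0] + '\n')
lockfile.flush()

subprocess.call(['rdiff-backup', src, dest])

fcntl.lockf(lockfile, fcntl.LOCK_UN)
lockfile.close()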
> output of `hostname -s` indicating which host is currently backing up
> into that directory. Since some backup jobs won't even finish in one
Thanks,
Sabuj Pattanayek