
Re: [rdiff-backup-users] rdiff-backup and rotated external drives


From: Thomas Harold
Subject: Re: [rdiff-backup-users] rdiff-backup and rotated external drives
Date: Wed, 25 Sep 2013 22:30:27 -0400
User-agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:17.0) Gecko/20130801 Thunderbird/17.0.8



On 9/3/2013 9:52 AM, Dan Joyce wrote:
Looking at rdiff-backup as an Ubuntu Server solution. It looks great,
but I can't find any documentation on using multiple external hard
drives and how it will handle them being rotated off-site. I'd like to
have at least two externals that will rotate weekly. Does rdiff work
with this setup?  If so, can I keep the two drives in sync? In other
words, if Drive1 is attached this week and backs up incrementally daily
then Friday I rotate in Drive2 through next Friday, will Drive2 somehow
include all the incremental changes--and therefore deleted/changed
files--that happened in the week it was off-site? Or, will I only have
every-other week's changes on one drive?



The solution we use at the office is:

1) Linux server with a big enough drive to hold all of the rdiff-backups for the entire organization. Use of LVM2 is a must.

2) Lots of LVs (each backup "client" gets its own LV to write its rdiff-backups to). There might be multiple rdiff-backup directories inside each LV.

So we mount our "svn" rdiff-backups LV (the LV is called "rdiffs-svn") under /backup/rdiffs/svn. There will be multiple directories under /backup/rdiffs/svn, one for each SVN repository.
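As a rough sketch of what that looks like (the volume group name "vg_backup", the size, and the filesystem type are made up for illustration):

    # assumes a volume group named "vg_backup" already exists
    lvcreate --size 200G --name rdiffs-svn vg_backup
    mkfs.ext4 /dev/vg_backup/rdiffs-svn
    mkdir -p /backup/rdiffs/svn
    mount /dev/vg_backup/rdiffs-svn /backup/rdiffs/svn
    # plus a matching /etc/fstab entry so it comes back after a reboot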

3) We have configured "autofs" to automatically mount any backup drive that is attached under /mnt/offsite/LABEL. Our drive labels run from "OFFSITE01" through "OFFSITE99". We use the UUID of the partition on the USB drive to identify the drive to "autofs". It mounts the file system on first access, then unmounts it after 5 minutes of inactivity, which makes it safer to pull the drive without having to manually unmount the file system.
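Roughly, the autofs pieces might look like this (the map file name, UUIDs, and filesystem type below are placeholders; substitute whatever your drives actually use):

    # /etc/auto.master -- hand the /mnt/offsite tree to autofs,
    # unmounting after 300 seconds of inactivity
    /mnt/offsite  /etc/auto.offsite  --timeout=300

    # /etc/auto.offsite -- one line per rotated drive, keyed on the
    # partition UUID via the /dev/disk/by-uuid symlinks
    OFFSITE01  -fstype=ext4  :/dev/disk/by-uuid/1111-2222-3333-4444
    OFFSITE02  -fstype=ext4  :/dev/disk/by-uuid/5555-6666-7777-8888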

4) Individual backup clients write their rdiff-backup data across the LAN to the backup server, to their dedicated directory under the /backup/rdiffs tree on the backup server.
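On the client side this can be a single cron job; something along these lines (hostname and paths are hypothetical, and rdiff-backup's host::path syntax goes over SSH):

    # nightly cron job on the "svn" client
    rdiff-backup /var/lib/svn/repo1 backupserver::/backup/rdiffs/svn/repo1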

5) We have a script that runs once per day on the server which searches under /backup/rdiffs for rdiff-backup directories and queues them up to be rsync'd to the USB drive.

This is where LVM comes into play. First, the script checks that no files within /backup/rdiffs/clientdirectory have changed in the last 300 seconds; a recent change would indicate that the client is currently backing up to that rdiff-backup directory, and syncing a backup that is still in progress would copy an inconsistent state. Then it creates a read-only snapshot of the LV for that client directory and uses that read-only snapshot to do the rsync. After the rsync finishes, we drop the snapshot.
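A stripped-down sketch of that per-LV step, with error handling omitted and VG/LV names, snapshot size, and the destination drive all placeholders (the real script would first work out which OFFSITE drive is attached):

    #!/bin/sh
    # skip this LV if anything under it changed in the last 5 minutes
    if [ -n "$(find /backup/rdiffs/svn -mmin -5 -print -quit)" ]; then
        exit 0
    fi

    # read-only snapshot of the LV, mounted somewhere temporary
    lvcreate --snapshot --permission r --size 5G \
        --name rdiffs-svn-snap /dev/vg_backup/rdiffs-svn
    mkdir -p /mnt/snap
    mount -o ro /dev/vg_backup/rdiffs-svn-snap /mnt/snap

    # copy the frozen view to the currently attached offsite drive
    rsync -a --delete /mnt/snap/ /mnt/offsite/OFFSITE01/rdiffs/svn/

    # tear it all down again
    umount /mnt/snap
    lvremove -f /dev/vg_backup/rdiffs-svn-snap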

6) Clients run rdiff-backup, then immediately run rdiff-backup again with --remove-older-than to trim out any incrementals older than some defined time period. How long you keep incrementals around depends on your needs; we might only keep 14 days (14D) for an "every 3 hours" backup, or 26 or 53 weeks (26W or 53W) for a daily backup.
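For example (paths hypothetical, retention values just illustrative):

    # back up, then drop increments older than 26 weeks
    rdiff-backup /var/lib/svn/repo1 backupserver::/backup/rdiffs/svn/repo1
    rdiff-backup --remove-older-than 26W --force \
        backupserver::/backup/rdiffs/svn/repo1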

Notes:

- The "sync to USB" script has to be smart enough to search for the "lost+found" directory by iterating all of the directories under "/mnt/offsite". This forces autofs to load the file system if that drive is attached. It also means that we can easily add new USB drives to the mix down the road. They just need an unique mount point under /mnt/offsite and to be added to the autofs configuration file.

- You can use LUKS to encrypt the partition on the USB drive. This is /slightly/ more complex in that there is no way for cryptsetup to automatically notice that a new drive was attached (/etc/crypttab only applies to drives attached at server start), but a periodic cron job can check for LUKS volumes that need to be opened.
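One way that could look, with device names, mapping names, and the key file path all made up for illustration:

    # one-time setup of a new offsite drive
    cryptsetup luksFormat /dev/sdX1
    cryptsetup luksOpen /dev/sdX1 offsite01
    mkfs.ext4 -L OFFSITE01 /dev/mapper/offsite01

    # periodic cron job: open any attached LUKS partition that
    # isn't already mapped
    blkid -t TYPE=crypto_LUKS -o device | while read dev; do
        name="luks-$(blkid -s UUID -o value "$dev")"
        [ -e "/dev/mapper/$name" ] && continue
        cryptsetup luksOpen --key-file /root/offsite.key "$dev" "$name"
    done

The autofs map entries would then point at the opened /dev/mapper devices rather than the raw partitions.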

- In addition to pushing to USB, we also use the same backup server and rdiff-backup directories to push our backups to an offsite location over SSH.
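Whether the offsite leg uses rsync or rdiff-backup on the far end is up to you; conceptually it is the same snapshot-then-copy idea as the USB sync, e.g. (host and paths hypothetical):

    # mirror the rdiff-backup trees to an offsite box over SSH
    rsync -a --delete -e ssh /backup/rdiffs/ offsite.example.com:/backup/rdiffs/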

PS: I can go into more detail on any of this if needed.


