Friday, September 19, 2008

Simple Backups for your Mac

You are probably well aware of the need for offsite backups; as a technology professional, this is one of the first arrangements I look into for any permanent storage of business information. When I started two years ago at an internal "startup" within a large company, one of the first things we did was set up an SVN repository and work out an arrangement with an offsite data storage provider. However, the cobbler's children have no shoes: I've never set up a proper backup scheme for my own data at home, and it's about time to take care of business.

Fortunately, now that we've moved to using Macs at home, and with the advent of cheap UN*X hosting providers, there's no excuse to keep putting this off. The scheme here is pretty simple: get a hosted Linux server from a provider where the storage is backed up and security updates for the OS are handled for you. Then set up a simple combination of the UN*X utilities rsync, ssh, cron, and bash scripts to get secure nightly backups going. Just to make it more fun, I'm going to challenge myself to have this all working in under an hour! I'll keep notes as I go on how long it takes, not counting the writeup before or afterward.

I decided to register a domain name with a hosting provider, since it was included. My basic requirements were:

  • SSH access
  • rsync installed
  • enough storage for my data

Be sure to acquire the following information from your hosting provider:

  • username/password with SSH access (preferably root, if you want to use the server for other purposes, but this is not necessary)
  • IP address
  • SSH host key of the server

I ended up registering a new domain with DreamHost at $9.95/month. As it happened, DreamHost was running a promotion with unlimited disk space and bandwidth for the lifetime of my account. Score! I did have to email tech support to get the ssh host key. If you find yourself in a similar position, you can ask for the output of:

$ ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key.pub
2048 0e:c2:f6:f4:d9:86:9d:4b:c4:3d:77:e7:a4:bb:59:14

Ok, great! Now you have a destination for your offsite storage. The next step is to make sure we can securely log in over the network (we'll use ssh for this). On the Mac you want to back up, open up a Terminal window, and ssh into your server using your username and the server's hostname (shown throughout as jonm@backuphost; substitute your own), as in the following. N.B. Do not finish connecting if the ssh server host key you got from your hosting provider does not match the key you see when you try this!

macbook:~ jonm$ ssh jonm@backuphost
The authenticity of host 'backuphost' can't be established.
RSA key fingerprint is 0e:c2:f6:f4:d9:86:9d:4b:c4:3d:77:e7:a4:bb:59:14.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'backuphost' (RSA) to the list of known hosts.
jonm@backuphost's password:

Ok, so far so good. Now we need to make sure we can do it without needing a password; this is where user ssh keys come into play. First, let's create an ssh key to use for backups. We'll want to do this as the root user on our Mac, so that when we run the backup script out of cron, we won't run into permissions problems. You can use the "sudo" command to become root on your Mac:

macbook:~ jonm$ sudo su -

WARNING: Improper use of the sudo command could lead to data loss
or the deletion of important system files. Please double-check your
typing when using sudo. Type "man sudo" for more information.

To proceed, enter your password, or type Ctrl-C to abort.

Password: <enter jonm's password on my mac>
macbook:~ root#

Now we need to create an SSH public/private key pair; this is a similar concept to PGP email encryption/signing; you can read a really interesting description of the chronology behind public key cryptography in the book Crypto by Steven Levy. We'll keep the private key locally on our Mac, and take a copy of the public key and copy it securely up to our backup server; then ssh will use the private key when we connect, allowing the backup server to verify using the public key that we are who we say we are, without having to send a password. Nice.

Specifically, we will want to do the following (still as root):

macbook:~ root# mkdir .ssh
macbook:~ root# chmod 700 .ssh
macbook:~ root# ls -ld .ssh
drwx------  2 root  wheel  68 Sep 19 22:01 .ssh
macbook:~ root# ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/var/root/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /var/root/.ssh/id_dsa.
Your public key has been saved in /var/root/.ssh/id_dsa.pub.
The key fingerprint is:
fd:47:1d:a6:ac:d0:7d:fb:a5:17:cf:e2:8a:93:a5:30 root@jon-moores-macbook.local
macbook:~ root#

Use an empty passphrase (i.e. just hit return when prompted for the passphrase), as this will allow the ssh program to load the key without interaction from you. Also note, however, that anyone who gets root access to your Mac will be able to ssh into your backup server at will. Given that our backup server contains a copy of what this would-be hacker would be able to see on the actual Mac anyway, I don't really see this being a big risk....
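If the passphrase-less key still makes you nervous, ssh lets you restrict what a key is allowed to do. On the backup server, you could prefix that key's line in ~/.ssh/authorized_keys with options like these (a sketch; the key material itself is abbreviated here):

```
no-port-forwarding,no-agent-forwarding,no-X11-forwarding,no-pty ssh-dss AAAAB3...= root@macbook
```

With those options in place, the key can still drive rsync (which doesn't need a terminal), but it can't be used to forward ports or open an interactive session.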

Now, we need to copy the public key over to the backup server:

macbook:~ root# scp .ssh/id_dsa.pub jonm@backuphost:
jonm@backuphost's password:
id_dsa.pub                                    100%  619     0.6KB/s   00:00
macbook:~ root#

You'll have to verify the server SSH key one more time, because now you are connecting as root rather than from your normal user account. Next, ssh into the backup host and tell it to accept a login from this key pair:

[backup]$ mkdir -p .ssh
[backup]$ chmod 700 .ssh
[backup]$ cat id_dsa.pub >> ~/.ssh/authorized_keys
[backup]$ chmod 600 ~/.ssh/authorized_keys
[backup]$ exit

Now, we should be able to log in without a password from our Mac:

macbook:~ root# ssh jonm@backuphost

Sweet. Now we create a directory where our mirrored filesystems will live:

[backup]$ mkdir mac-backups
[backup]$ chmod 700 mac-backups
[backup]$ exit
macbook:~ root#

The tool we'll use for the mirroring is rsync, which can be invoked to run securely over ssh. This makes a nice backup utility for regular use, as the rsync protocol is pretty smart about finding just the small subset of data that has changed since the last sync; after the first big sync, for most personal file use, there won't be much work to do each night.

For now, let's set up a test directory on our local Mac.

macbook:~ root# mkdir /tmp/back-me-up
macbook:~ root# echo "data" > /tmp/back-me-up/afile.txt

Now, to make the magic happen, we do this:

macbook:~ root# rsync -avz -e ssh /tmp/back-me-up jonm@backuphost:mac-backups
building file list ... done

sent 116 bytes  received 40 bytes  62.40 bytes/sec
total size is 5  speedup is 0.03
macbook:~ root#

Now we can keep a window open on our backups host, and we should see everything show up there:

[backup]$ ls -lR mac-backups
mac-backups:
total 4
drwxr-xr-x 2 jonm pg1807352 4096 2008-09-19 19:11 back-me-up/

mac-backups/back-me-up:
total 4
-rw-r--r-- 1 jonm pg1807352 5 2008-09-19 19:11 afile.txt

Just for fun, run the same rsync command again and see that nothing happens if there have been no changes (or rather, that only a very small amount of data gets exchanged to verify there are none).

Let's just make sure changes show up:

macbook:~ root# echo "changed-data" > /tmp/back-me-up/afile.txt
macbook:~ root# rsync -avz -e ssh /tmp/back-me-up jonm@backuphost:mac-backups

(other window)

[backup]$ cat mac-backups/back-me-up/afile.txt
changed-data

Ok, looking good. The next step is to identify all the directories you want to back up; let's keep a list of them, one per line, in a config file on our Mac (finish the input with Ctrl-D; the directories shown here are just examples, so use your own):

macbook:~ root# mkdir -p /usr/local/etc
macbook:~ root# cat - > /usr/local/etc/backups.conf
/Users/jonm
/Users/Shared
^D
macbook:~ root#

Note that it is important *not* to have trailing slashes on these directory names, as this changes rsync's behavior slightly in a way that you will probably find annoying (it won't copy the directory name over, just the contents).

Ok, now the next step is to set up a script that syncs each of the directories (again, jonm@backuphost stands in for your own username and server):

macbook:~ root# mkdir -p /usr/local/bin
macbook:~ root# touch /usr/local/bin/do-backups
macbook:~ root# chmod 700 /usr/local/bin/do-backups
macbook:~ root# cat - > /usr/local/bin/do-backups
#!/bin/bash
for dir in `cat /usr/local/etc/backups.conf`; do
  rsync -avz -e ssh $dir jonm@backuphost:mac-backups
done
^D
macbook:~ root#

Now we run it once by hand to make sure it works:

macbook:~ root# /usr/local/bin/do-backups

Finally, we install this in root's crontab as follows:

macbook:~ root# crontab -l > /tmp/root.cron
macbook:~ root# cat - >> /tmp/root.cron
# take a backup every day at 3am
0 3 * * * /usr/local/bin/do-backups >/dev/null
^D
macbook:~ root# crontab /tmp/root.cron
macbook:~ root# rm /tmp/root.cron

Nice and simple. Now the backups are off and running every night without your intervention.

If you ever need to restore from the backup, you can always reverse the rsync process like this:

macbook:~ root# rsync -avz -e ssh jonm@backuphost:mac-backups/back-me-up /tmp

for each of the directories you have backed up over there.

Enjoy, and sleep well tonight....

P.S. Total elapsed time for the exercise was 2 hours from the time I placed the hosting order to the time the crontab was installed, but I took a one hour break in the middle for dessert and bedtime with the kids. So I'll claim this really did only take one hour of "CPU time" for me.


Dan said...

That's pretty good. On linux (and I think they'd both work on a mac) I actually use two backup solutions, which may seem like (and be!) overkill, but they're automated so cause me no hassle. The first is to a local usb disk, inspired by jwz. This one is good b/c if my disk crashes, I can just plug in the USB drive and boot from my last backup image.

The second is using duplicity. It uses rsync under the hood, I think. The nice thing is that it automatically handles incremental vs. full backups and, further, uses gpg to encrypt the remote images.

I only backup critical data I'd be sad to lose with duplicity because it has to transfer over the network. I keep a month's worth of backups (full once per week). It has saved me a couple of times.

The USB drive backup is a full backup of my root & boot partitions every night and has no history. I've only needed it once and boy was I stoked that I had it when I did!

To extend your rsync strategy, I think you could use --link-dest to do a sort of pseudo-incremental backup without using much more diskspace on the remote side, no?

Jon Moore said...


Sounds like a great solution. I went with simple rsync mirroring for two main reasons: (a) don't think I need the incremental backups; rsync already optimizes to just send diffs, and I'm ok with only having the latest version; and (b) I didn't feel like writing out the script logic to manage the incrementals. :)