
DIY your Dropbox-like backup with ZFS, sshfs and rsync

After some tests, I chose Ubuntu 12.04.3 as the ZFS platform and Elementary OS as the netbook OS. ZFS already supports replication between different systems, but with less than 4 GB of RAM on the netbook I don't think running ZFS there is a good idea.


  • Incremental, snapshot-like backup with history, efficient and fast -> rsync + ZFS
  • Client: a normal Linux desktop with ext4 or another *nix-like filesystem
  • Userland managed: no root password required to configure the client
  • No cron dependency -> manual synchronization (via a user click) or automatic (eventually with inotify on the client side)
  • Secure communication: so no NFS or SMB/CIFS, but sshfs

Here are the commands I use; just take them as a reference and adapt them to your needs. I assume you've already set up the ZFS driver and a pool (here are some more steps if you want ZFS on the root partition).

Server setup

One dataset per user, with snapshots visible in the .zfs directory:

sudo zfs create rpool/testuser
sudo zfs set snapdir=visible rpool

We need to detect when the remote user wants a snapshot: once the rsync is complete, the client creates an empty "snapshot.lock" file; the server takes the snapshot and then removes the file.

sudo apt-get install inotify-tools

And cut and paste this script wherever you like.

__ zfsmonitor.sh __
#!/bin/bash
# We have the user dataset directories under /rpool
# (strip the leading and trailing slash to get the dataset names)
datasets=$(ls -d -1 /rpool/*/ | sed 's:^.\(.*\).$:\1:')

monitor() {
  local dataset=$1
  echo "Watching directory: $dataset for new files"
  inotifywait -m -q -e create "/$dataset" --format "%f" |
    while read filename; do
      # remove the trigger file
      if [ "$filename" == "snapshot.lock" ]; then
        rm "/$dataset/$filename"
        snapname="$dataset@$(date "+%F-%T-%a")"
        echo "snapshot $snapname"
        zfs snapshot "$snapname" # Create the snapshot
        # Remove old snapshots, keeping the 7 most recent
        zfs list -rt snapshot -H -S creation -o name "$dataset" | tail -n +8 | xargs -r -n 1 zfs destroy
      fi
    done
}

# initialization of the listeners
for dataset in $datasets; do
  monitor "$dataset" & # BUG: doesn't close on EXIT!
done
wait

This script works, but it leaks on restart because "inotifywait" keeps running, so you need to "kill $(pgrep inotifywait)" before running it again. Fixes are welcome in the comments.
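One possible fix, sketched here as an assumption rather than a tested solution: run that manual cleanup automatically from an EXIT trap. The `sleep` below stands in for the real inotifywait pipeline, and the dataset names are made up, so the idea can be tried on its own:

```shell
#!/bin/bash
# Sketch of a fix: clean up the background listeners from an EXIT trap,
# automating the manual "kill $(pgrep inotifywait)" step.
cleanup() {
  kill $(jobs -p) 2>/dev/null       # stop the backgrounded monitor loops
  pkill -x inotifywait 2>/dev/null  # and any watcher they left behind
  true                              # don't turn a clean exit into an error
}
trap cleanup EXIT INT TERM

monitor() {
  # stand-in for: inotifywait -m ... | while read filename; do ... done
  exec sleep 30
}

for dataset in rpool/alice rpool/bob; do  # hypothetical datasets
  monitor "$dataset" &
done

echo "listeners: $(jobs -p | wc -l)"
```

With the trap in place, stopping the script with Ctrl-C or a plain `kill` also stops its listeners, so restarting it no longer leaks processes.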

Start zfsmonitor.sh and move on to the client.

Client setup

Install sshfs if not present:

sudo apt-get install sshfs

Then create a directory where the remote resource will be mounted:

mkdir ~/Remote

And mount the remote directory inside it:

sshfs remoteuser@yourserver.local:/rpool/testuser ~/Remote

Type the password or use an ssh key; you should see the mount listed in the output of the "mount" command.
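If you want the mount to survive reboots, an fstab entry along these lines should work (the paths, user names and key file are assumptions to adapt to your setup):

```
# /etc/fstab — sshfs mount for the backup share (illustrative values)
remoteuser@yourserver.local:/rpool/testuser  /home/testuser/Remote  fuse.sshfs  noauto,user,_netdev,IdentityFile=/home/testuser/.ssh/id_rsa,reconnect  0  0
```

The `noauto,user` options let the desktop user mount it on demand without root.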

__ zfs-click-to-backup.sh __

#!/bin/bash
REMOTEHOME=~/Remote  # the sshfs mount point, or ~/Remote/$(whoami) if you mounted the whole pool
rsync -rvt --stats --delete --exclude ".zfs" ~/Documenti "$REMOTEHOME"
touch "$REMOTEHOME/snapshot.lock" # rsync is done; the server-side snapshot script will remove this file
notify-send 'backup done' # the popup

This script can be wired to a launcher icon or triggered manually.
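For instance, a minimal .desktop launcher could look like this (the Exec path is an assumption — point it at wherever you saved zfs-click-to-backup.sh):

```
[Desktop Entry]
Type=Application
Name=Backup to server
Comment=rsync the documents folder and trigger a ZFS snapshot
Exec=/home/testuser/bin/zfs-click-to-backup.sh
Icon=document-save
Terminal=false
```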

That's all. If you set everything up correctly, you should start to see the snapshots in ~/Remote/.zfs/snapshot .
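Restoring is then just a copy out of a snapshot directory. A small sketch — the snapshot name and file are made up, and a throwaway tree stands in for the live sshfs mount so the commands can be tried anywhere:

```shell
#!/bin/bash
# Simulate the mounted layout: ~/Remote/.zfs/snapshot/<name>/Documenti/...
remote=$(mktemp -d)   # stand-in for ~/Remote
snap="$remote/.zfs/snapshot/2014-01-05-12:00:00-Sun"
mkdir -p "$snap/Documenti"
echo "old version" > "$snap/Documenti/notes.txt"

# Browse the available snapshots...
ls "$remote/.zfs/snapshot/"
# ...and copy an old file back out of one of them
cp "$snap/Documenti/notes.txt" "$remote/notes-restored.txt"
```

Because snapshots are read-only and exposed as plain directories, no extra tooling is needed on the client to recover old versions.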