A common question from customers about Ceph is how to back it up.
To me this feels somewhat redundant, but I built this lab because my clients always have complex scenarios to deploy... and who knows, having a backup can save you a lot of headaches.

Looking into the Ceph documentation, I found: INCREMENTAL SNAPSHOTS WITH RBD.
It looks good, but it's not exactly what I want.
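For reference, that incremental approach is built around RBD snapshots and the rbd export-diff / import-diff commands. A rough sketch (the snapshot names and paths below are just placeholders, and on the restore side the destination image has to exist before any diff is replayed):

# First pass: snapshot the image and export everything up to that snapshot
rbd snap create rbd/linuxshare@snap1
rbd export-diff rbd/linuxshare@snap1 /home/cephadm/backup/linuxshare-snap1.diff

# Later passes: export only the changes between two snapshots
rbd snap create rbd/linuxshare@snap2
rbd export-diff --from-snap snap1 rbd/linuxshare@snap2 /home/cephadm/backup/linuxshare-snap1-snap2.diff

# Restore: replay the diffs, in order, against an existing image of the same size
rbd import-diff /home/cephadm/backup/linuxshare-snap1.diff rbd/linuxshare
rbd import-diff /home/cephadm/backup/linuxshare-snap1-snap2.diff rbd/linuxshare

It works, but it means managing snapshots and a chain of diff files, which is more than I need for this lab.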

The rbd(8) manpage says:

export (image-spec | snap-spec) [dest-path]
Exports image to dest path (use - for stdout).

import [--image-format format-id] [--object-size B/K/M] [--stripe-unit size-in-B/K/M --stripe-count num] [--image-feature feature-name]... [--image-shared] src-path [image-spec]
Creates a new image and imports its data from path (use - for stdin)...

Could it be that simple? Let's see.

In this lab we have my SES cluster (sesadm) and a Linux client (sesclient) that uses an image (rbd/linuxshare) mounted on /home/backups through an iSCSI gateway (iscsigw)

+---------------+             +--------------------+            +---------------------+
|               |             |                    |            |                     |
| SES3 Cluster  | +---------> | iSCSI Gateway      | +--------> |  iSCSI Linux Client |
|               |             | rbd / linuxshare   |            |  /home/backups      |
|               |             |                    |            |                     |
+---------------+             +--------------------+            +---------------------+
    sesadm                          iscsigw                           sesclient

Let's check some data on sesclient

sesclient:~ # df -H
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/vda2                                       51G  1.9G   48G   4% /
devtmpfs                                       2.0G     0  2.0G   0% /dev
tmpfs                                          2.0G     0  2.0G   0% /dev/shm
tmpfs                                          2.0G   19M  2.0G   1% /run
tmpfs                                          2.0G     0  2.0G   0% /sys/fs/cgroup
/dev/mapper/3600140503a0ce22122637ad974976cef  5.2G  4.2G  696M  86% /home/backups
sesclient:~ # md5sum /home/backups/*.iso
c5d2148c2b66ac3ca211484cf2167fab  /home/backups/SLES-11-SP4-DVD-x86_64-GM-DVD1.iso
36ee3ef3ab4173c8459d77c6f781b2c1  /home/backups/SUSE-Enterprise-Storage-3-DVD-x86_64-GM-DVD1.iso
sesclient:~ #

Now let's play and make the export

sesadm:~ # rbd export -p rbd linuxshare /home/cephadm/backup/linuxshare-backup
Exporting image: 100% complete...done.
sesadm:~ #

ERASE the POOL!... Yes, the entire pool

sesadm:~ # ceph osd pool delete rbd rbd --yes-i-really-really-mean-it
pool 'rbd' removed
sesadm:~ #

Force the chaos on sesclient

sesclient:~ # umount /home/backups/
sesclient:~ # mount /dev/mapper/3600140503a0ce22122637ad974976cef /home/backups/
mount: /dev/mapper/3600140503a0ce22122637ad974976cef: can't read superblock
sesclient:~ #

And restore...

sesadm:~ # ceph osd pool create rbd 128
pool 'rbd' created
sesadm:~ #
sesadm:~ # rbd import --dest-pool rbd /home/cephadm/backup/linuxshare-backup linuxshare
Importing image: 100% complete...done.
sesadm:~ #

On sesclient we need to rediscover and log in to the target (iscsigw) again; afterwards we can see that the device-mapper ID has changed.
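The exact steps depend on how your initiator is set up; with a plain open-iscsi client it is roughly the following (output omitted, and iscsigw here stands for the gateway's portal address):

# Rediscover the targets exposed by the gateway and log back in
iscsiadm -m discovery -t sendtargets -p iscsigw
iscsiadm -m node --login

# Reload the multipath maps so the new device shows up
multipath -r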

sesclient:~ # multipath -ll
36001405c79056639a623ff2b2fc85f34 dm-0 SUSE,RBD
size=5.0G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
`-+- policy='service-time 0' prio=50 status=active
  `- 2:0:0:0 sda 8:0   active ready running
sesclient:~ #

Regardless, let's mount and check the files

sesclient:~ # mount /dev/mapper/36001405c79056639a623ff2b2fc85f34 /home/backups/
sesclient:~ # ls /home/backups/
SLES-11-SP4-DVD-x86_64-GM-DVD1.iso  SUSE-Enterprise-Storage-3-DVD-x86_64-GM-DVD1.iso  lost+found
sesclient:~ #

But are they the same files? The same checksums?

Yeah! Same checksums. We can say it works.
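If you want to check it yourself, it's the same command we ran at the beginning; the output should match the two checksums captured before the pool was deleted:

# Run again on sesclient and compare with the values from the start
md5sum /home/backups/*.iso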

And yes, I made an ugly and dirty script for this:

#!/bin/bash
# wvera@suse.com

# Directory where the exported images are written to / read from
BackupPath=/home/cephadm/backups

usage() {
    echo "Ceph Export / Import Block Storage"
    echo "$0 <export|import>"
}

# Walk every pool and dump each RBD image into $BackupPath/<pool>/<image>
export() {
    for pool in $(rados lspools); do
        for image in $(rbd ls -p "$pool"); do
            PoolExportPath=${BackupPath}/${pool}
            mkdir -p "$PoolExportPath"
            rbd export -p "$pool" "$image" "${PoolExportPath}/${image}"
        done
    done
}

# Recreate every pool found under $BackupPath and import its images back
import() {
    if [ ! -d "$BackupPath" ]; then
        echo "You don't have backups in $BackupPath or the variable is empty"
        exit 1
    else
        for pool in $(ls "$BackupPath"); do
            ceph osd pool create "$pool" 128
            for image in $(ls "${BackupPath}/${pool}"); do
                echo "${BackupPath}/${pool}/${image}"
                rbd import --dest-pool "$pool" "${BackupPath}/${pool}/${image}" "$image"
            done
        done
    fi
}

ask() {
    case "$DOIT" in
        "export")
            export
            ;;
        "import")
            import
            ;;
        *)
            usage
            ;;
    esac
}

if [ "$#" -ne "1" ]; then usage; else DOIT=$1; ask; fi

This script has two options (quick usage example below):

  • export: sweeps all the pools of your Ceph cluster and exports the image(s) into the path defined by the $BackupPath variable
  • import: assumes that the pools do not exist and creates them, then imports the images according to the tree under $BackupPath
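
For example, assuming the script is saved on the admin node as ceph-backup.sh (the filename is just my choice here):

chmod +x ceph-backup.sh

# Dump every image of every pool under $BackupPath
./ceph-backup.sh export

# Recreate the pools and bring all the images back
./ceph-backup.sh import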

Of course this is very, very, very rustic stuff. I uploaded it to my GitHub with a TODO list; any ideas are welcome!

https://github.com/bvera/ceph-backup

Remember, this was only tested in my personal lab; please test and re-test before using it in a production environment.

Happy backup!

Billy Vera © 2017. All Rights Reserved.

The opinions expressed in this blog are purely my own and are not in any way endorsed by my employer.
