SUSE Enterprise Storage (SES), based on Ceph, is very easy to deploy; the tricky part, IMHO, is the hardware sizing. SUSE has excellent documentation about the System Requirements for SES 2:

Minimal recommendations per storage node

  • 2 GB of RAM per Object Storage Device (OSD).
  • 1.5 GHz of a CPU core per OSD.
  • Separate 10 GbE networks (public/client and back-end).
  • OSD disks in JBOD configurations, or local RAID.
  • OSD disks should be exclusively used by SUSE Enterprise Storage.
  • Dedicated disk/SSD for the operating system, preferably in a RAID1 configuration.
  • Additional 4 GB of RAM if cache tiering is used.
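
As a quick illustration, those per-OSD figures map directly to a per-node budget. Here is a minimal sketch of that math, assuming a hypothetical node with 10 OSDs and cache tiering enabled (both values are just examples for illustration, not part of the SUSE documentation):

#!/bin/bash
# Illustrative only: per-node RAM/CPU budget from the per-OSD recommendations above.
OSD_COUNT=10          # assumed: number of OSDs in this storage node
CACHE_TIERING=yes     # assumed: set to "no" if cache tiering is not used

RAM_GB=$(( OSD_COUNT * 2 ))                               # 2 GB of RAM per OSD
[ "$CACHE_TIERING" = "yes" ] && RAM_GB=$(( RAM_GB + 4 ))  # +4 GB if cache tiering is used
CPU_GHZ=$(echo "$OSD_COUNT * 1.5" | bc)                   # 1.5 GHz of a CPU core per OSD

echo "A node with $OSD_COUNT OSDs needs about ${RAM_GB} GB of RAM and ${CPU_GHZ} GHz of CPU"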

Minimal recommendations per monitor node

  • 3 SUSE Enterprise Storage monitor nodes recommended.
  • 2 GB of RAM per monitor.
  • SSD or fast hard disk in a RAID1 configuration.
  • On installations with fewer than seven nodes, these can be hosted on the system disk of the OSD nodes.
  • Nodes should be bare metal, not virtualized, for performance reasons.
  • Mixing OSDs or monitor nodes with the actual workload is not supported.
  • Configurations may vary from, and frequently exceed, these recommendations depending on individual sizing and performance needs.
  • Bonded network interfaces for redundancy.
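
The "fewer than seven nodes" rule and the per-monitor RAM can be sketched the same way; the cluster size below is just an assumed example:

#!/bin/bash
# Illustrative only: where the 3 recommended monitors can live, and their RAM budget.
TOTAL_NODES=5                      # assumed: total nodes in the cluster
MON_COUNT=3                        # 3 monitor nodes recommended
MON_RAM_GB=$(( MON_COUNT * 2 ))    # 2 GB of RAM per monitor

if [ "$TOTAL_NODES" -lt 7 ]; then
  echo "$MON_COUNT monitors can be hosted on the system disks of the OSD nodes (${MON_RAM_GB} GB RAM total)"
else
  echo "Use $MON_COUNT dedicated monitor nodes (${MON_RAM_GB} GB RAM total)"
fi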

But... how many OSD nodes do you need to serve the storage you require?
There are a lot of factors to consider when you design your Ceph cluster: IOPS, tiers, datacenters, what you are optimizing for, access methods, budget! and many others.

However, some quick and dirty math to get a ballpark number is:

TB needed / (OSDs per server * TB per OSD * Block Size) * number of replicas = servers (nodes) needed.

Example:
I have a server (node) with 10 OSDs, each one with 20 TB, and I want to know how many servers like this I need to serve 1 petabyte with 3 replicas.

(Block Size here acts as a usable-capacity factor: space kept free for failure/recovery so the cluster does not lose performance while it rebalances. We can play with this number; it depends on the purpose of your cluster.)

1000 / (10 * 20 * .85) * 3 = 17.64 ≈ 18

We need about 18 servers like this to deliver 1 petabyte of storage with 3 replicas, for this example.
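
If you want to sanity-check that arithmetic on the command line first, a quick bc one-liner (assuming bc is available, which the script below also needs) gives the raw figure before rounding:

# Raw math: 1000 TB / (10 OSDs * 20 TB * .85) * 3 replicas
echo "scale=2; 1000/(10*20*.85)*3" | bc
# -> 17.64, which rounds to 18 nodes

Rounding up rather than down is the safe direction when sizing, so 17.64 becomes 18 nodes.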

A script? Of course!

Billy@hackintosh:~/lab$ ./OSDServersCalc.sh

SES: OSD Nodes calculator  
Usage: ./OSDServersCalc.sh

<OSDs on server>  
<TB of OSDs>  
<Block Size>  
<How many TB needs to serve?>  
<How many replicas?>

Billy@hackintosh:~/lab$ ./OSDServersCalc.sh 10 20 .85 1000 3  
You need approximately: 18 nodes, with 10 OSD, 20 TB each one  
To serve: 1000 TB with 3 replicas  
Billy@hackintosh:~/lab$  
The script itself:

#!/bin/bash
# wvera@suse.com
# SES: OSD nodes calculator

Usage() {
  echo -e "\nSES: OSD Nodes calculator"
  echo -e "Usage: $0\n\n<OSDs on server>\n<TB of OSDs>\n<Block Size>\n<How many TB needs to serve?>\n<How many replicas?>\n"
  exit 0
}

# Require exactly five arguments, otherwise print the usage and exit.
[ $# -eq 5 ] || Usage

OSDtotal="$1"   # OSDs per server
TBxosd="$2"     # TB per OSD
BS="$3"         # Block Size factor (e.g. .85)
ServeTB="$4"    # TB to serve
Reptotal="$5"   # number of replicas

# TB needed / (OSDs per server * TB per OSD * Block Size) * replicas,
# rounded to the nearest whole node.
OSDNodes=$(printf "%.0f\n" \
  "$(echo "scale = 2; $ServeTB/($OSDtotal*$TBxosd*$BS)*$Reptotal" | bc)")

echo -e "You need approximately: $OSDNodes nodes, with $OSDtotal OSD, $TBxosd TB each one\nTo serve: $ServeTB TB with $Reptotal replicas"

https://github.com/bvera/ses-osd-calc

Also available "online" (another lab!):
http://billy.sh/ses-osd-calc/

This is just an exercise; please take the time to plan your Ceph cluster carefully and in detail.

The opinions expressed in this blog are purely my own and are not in any way endorsed by my employer.