[Pacemaker] recent experience w/ cluster filesystems

Miles Fidelman mfidelman at meetinghouse.net
Tue Jan 1 15:56:54 EST 2013


Hi Folks,

I've seen some presentations recently - particularly Florian Haas's talk 
from LinuxCon EU 2012 - about mixing Pacemaker with GlusterFS and Ceph 
in high-availability environments, which leads me to wonder whether 
either of them is getting mature enough for production use, and whether 
anybody here has experience they can share.

I've been running a 2-node high-availability cluster for years, with a 
basic Pacemaker/Corosync setup and DRBD for storage mirroring.  All our 
actual services run in a couple of Xen virtual machines operating in a 
primary->secondary failover model.
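
For reference, the Pacemaker side of that is the usual DRBD master/slave 
plus Xen resource pattern - roughly along these lines (resource and file 
names here are illustrative, not our exact configuration):

  primitive p_drbd_r0 ocf:linbit:drbd \
          params drbd_resource="r0" \
          op monitor interval="29s" role="Master" \
          op monitor interval="31s" role="Slave"
  ms ms_drbd_r0 p_drbd_r0 \
          meta master-max="1" clone-max="2" notify="true"
  primitive p_xen_vm1 ocf:heartbeat:Xen \
          params xmfile="/etc/xen/vm1.cfg" \
          op monitor interval="30s"
  colocation c_vm1_on_drbd inf: p_xen_vm1 ms_drbd_r0:Master
  order o_drbd_before_vm1 inf: ms_drbd_r0:promote p_xen_vm1:start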

I also have two additional servers that I'd like to incorporate into the 
cluster, but so far the issue always comes back to how to generalize the 
underlying storage - using something like GlusterFS, Ceph, or Sheepdog 
as a self-configuring storage cloud, with automatic replication across 
the servers.
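
On the GlusterFS side, what I'm picturing is something like the 
following (hostnames, brick paths, and the replica layout are just my 
guess at how it might look - I haven't tested any of this):

  # from one node, peer the other three
  gluster peer probe node2
  gluster peer probe node3
  gluster peer probe node4

  # distributed-replicated volume across the four servers (2x2)
  gluster volume create vmstore replica 2 \
      node1:/export/brick node2:/export/brick \
      node3:/export/brick node4:/export/brick
  gluster volume start vmstore

  # mounted on each Dom0, so the Xen image files sit on the replicated volume
  mount -t glusterfs node1:/vmstore /var/lib/xen/images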

I'm faced with a few key constraints:
- intermixed compute and storage resources (small rack, 4 servers, each 
with multi-core processors and 4 drives)
- Xen (Sheepdog seems like the ideal solution, but it's Qemu/KVM-only, 
and it's not clear it's production-ready)
- Debian (well, that can be changed for Dom0, but it would be a minor pain)

So... I'm wondering if anybody has recent experience to share from 
building a small HA cluster using GlusterFS or Ceph, instead of DRBD, to 
provide an HA storage pool?

Thanks very much,

Miles Fidelman


-- 
In theory, there is no difference between theory and practice.
In practice, there is.   .... Yogi Berra




