[Pacemaker] 2-node cluster with shared storage: what is current solution

Саша Александров shurrman at gmail.com
Thu Mar 20 11:34:05 EDT 2014


I removed all cluster-related stuff and installed from
However, stonith-ng uses fence_* agents here, so I cannot put the following into crmsh:

primitive stonith_sbd stonith:external/sbd
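For context, on distributions that do ship sbd (e.g. SLES), the usual setup looks roughly like the sketch below. The device path is a placeholder, not taken from this thread, and the crm snippet repeats the primitive above:

```shell
# Initialize sbd metadata on a small shared LUN (placeholder device id):
sbd -d /dev/disk/by-id/<shared-lun> create

# Verify the slot table was written:
sbd -d /dev/disk/by-id/<shared-lun> list

# Point the sbd daemon at the device (SUSE path; may differ elsewhere):
#   /etc/sysconfig/sbd:  SBD_DEVICE="/dev/disk/by-id/<shared-lun>"

# Then define the fencing resource in crmsh:
crm configure primitive stonith_sbd stonith:external/sbd
```

On CentOS/RHEL, where the external/sbd agent is not packaged, this will fail unless sbd is built from source, which is exactly the problem described above.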


2014-03-19 20:14 GMT+04:00 Lars Marowsky-Bree <lmb at suse.com>:

> On 2014-03-19T19:20:35, Саша Александров <shurrman at gmail.com> wrote:
> > Now, we got shared storage over multipath FC there, so we need to move
> from
> > drbd to shared storage. And I got totally confused now - I can not find a
> > guide on how to set things up. I see two options:
> > - use gfs2
> > - use ext4 with sbd
> If you don't need concurrent access from both nodes to the same file
> system, using ext4/XFS in a fail-over configuration is to be preferred
> over the complexity of a cluster file system like GFS2/OCFS2.
> RHT has chosen to not ship sbd, unfortunately, so you can't use this
> very reliable fencing mechanism on CentOS/RHEL. Or you'd have to build
> it yourself. Assuming you have hardware fencing right now, you can
> continue to use that too.
> Regards,
>     Lars
> --
> Architect Storage/HA
> SUSE LINUX Products GmbH, GF: Jeff Hawn, Jennifer Guild, Felix
> Imendörffer, HRB 21284 (AG Nürnberg)
> "Experience is the name everyone gives to their mistakes." -- Oscar Wilde
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
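Lars's suggestion of a plain fail-over filesystem (instead of GFS2/OCFS2) can be sketched in crmsh roughly as follows; the device, mount point, and resource names are illustrative only:

```shell
# Hypothetical fail-over ext4 mount managed by Pacemaker; only one node
# mounts it at a time, so no cluster filesystem is needed.
crm configure primitive fs_data ocf:heartbeat:Filesystem \
    params device="/dev/disk/by-id/<shared-lun>-part1" \
           directory="/srv/data" fstype="ext4" \
    op monitor interval=20s
```

Fencing (sbd or a hardware agent) is still required so a node can never mount the filesystem while the other node might still be writing to it.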