[Pacemaker] problems with cman + corosync + pacemaker in debian

diego fanesi diego.fanesi at gmail.com
Mon Feb 20 16:16:19 EST 2012


I'm currently studying this technology; this is only a test setup. I'm trying
to understand all the possible configurations, and at the moment I'm having
trouble understanding the differences among the file systems. Right now I'm
trying OCFS2 and it seems to work well, but what is the best alternative?

My big question is: when should I use one rather than another? I know
GlusterFS and, if I'm right, it doesn't need DRBD, but it is slower than
GFS2 and OCFS2. If I understood correctly, you get the best performance with
OCFS2, so that is what I'm trying to use. Which one copes best if a
split-brain happens?
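
From what I have read so far, the dual-primary DRBD resource that OCFS2 or
GFS2 needs is usually combined with explicit split-brain recovery policies in
drbd.conf. Something along these lines (the resource name follows your gfs2
example below; the policies and handler paths are only my guesses, not a
tested configuration):

resource gfs2 {
  net {
    # both nodes must be Primary for a cluster filesystem
    allow-two-primaries;
    # automatic split-brain recovery policies
    after-sb-0pri discard-zero-changes;
    after-sb-1pri discard-secondary;
    after-sb-2pri disconnect;
  }
  disk {
    fencing resource-and-stonith;
  }
  handlers {
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
  # device, disk and address statements omitted
}
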
And about MySQL: there are several ways to build a two-node active/active
setup. You can use MySQL master/master NDB replication, or put the data
directory on a DRBD-backed OCFS2 partition with the "external locking" option
enabled. What is the fastest and the safest way, and which one handles a
split-brain best? In my opinion the best is DRBD; maybe you will lose
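
If I went the shared-datadir route, I imagine the Pacemaker side would be a
simple clone on top of the cluster filesystem, roughly like this (the binary,
config and datadir paths and the cl_fs_ocfs2 clone name are just placeholders,
and whether two mysqld instances can really share one data directory safely is
exactly what I am unsure about):

primitive p_mysql ocf:heartbeat:mysql \
 params binary="/usr/sbin/mysqld" config="/etc/mysql/my.cnf" \
        datadir="/mnt/ocfs2/mysql" \
 op start interval="0" timeout="120" \
 op stop interval="0" timeout="120" \
 op monitor interval="20" timeout="30"
clone cl_mysql p_mysql \
       meta interleave="true"
colocation c_mysql_on_fs inf: cl_mysql cl_fs_ocfs2
order o_fs_before_mysql inf: cl_fs_ocfs2:start cl_mysql:start
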

As you can see, at the moment I just need to gain hands-on experience to
understand these concepts.
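
One more thing, just to check that I have understood your example: I guess the
Filesystem resources you mention would be cloned on top of cl_gfs2, something
like the following (the device path, mount point and timeouts are only my
guesses):

primitive p_fs_gfs2 ocf:heartbeat:Filesystem \
 params device="/dev/drbd/by-res/gfs2" directory="/mnt/gfs2" \
        fstype="gfs2" \
 op start interval="0" timeout="60" \
 op stop interval="0" timeout="60" \
 op monitor interval="20" timeout="40"
clone cl_fs_gfs2 p_fs_gfs2 \
       meta interleave="true"
colocation c_fs_on_gfs2 inf: cl_fs_gfs2 cl_gfs2
order o_gfs2_before_fs inf: cl_gfs2:start cl_fs_gfs2:start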

Thank you very much for your help.


On 19 Feb 2012 21:52, "Florian Haas" <florian at hastexo.com> wrote:

> On 02/18/12 10:59, diego fanesi wrote:
> > are you saying I can install drbd + gfs2 + pacemaker without using cman?
> > It seems that gfs2 depends on cman...
>
> Only on RHEL/CentOS/Fedora. Not on Debian.
>
> > I want to realize an active/active cluster and I'm following the document
> > "Cluster from Scratch" that you can find on this website.
> >
> > I don't know if there are other ways to achieve it.
>
> Here's a reference config; we use this in classes we teach (where we run
> the Pacemaker stack on Debian because that's the only distro that
> supports all of Pacemaker, OCFS2, GFS2, GlusterFS and Ceph). This makes
> no claim to being perfect, but it works rather well.
>
> primitive p_dlm_controld ocf:pacemaker:controld \
>  params daemon="dlm_controld.pcmk" \
>  op start interval="0" timeout="90" \
>  op stop interval="0" timeout="100" \
>  op monitor interval="10"
> primitive p_gfs_controld ocf:pacemaker:controld \
>  params daemon="gfs_controld.pcmk" \
>  op start interval="0" timeout="90" \
>  op stop interval="0" timeout="100" \
>  op monitor interval="10"
> group g_gfs2 p_dlm_controld p_gfs_controld
> clone cl_gfs2 g_gfs2 \
>        meta interleave="true"
>
> Here's the corresponding DRBD/Pacemaker configuration.
>
> primitive p_drbd_gfs2 ocf:linbit:drbd \
>  params drbd_resource="gfs2" \
>  op monitor interval="10" role="Master" \
>  op monitor interval="30" role="Slave"
> ms ms_drbd_gfs2 p_drbd_gfs2 \
>  meta notify="true" master-max="2" \
>  interleave="true"
> colocation c_gfs2_on_drbd inf: cl_gfs2 ms_drbd_gfs2:Master
> order o_drbd_before_gfs2 inf: ms_drbd_gfs2:promote cl_gfs2:start
>
> Of course, you'll have to add proper fencing, and there are several DRBD
> configuration options that you must remember to set. And, obviously, you
> need the actual Filesystem resources to manage the GFS2 filesystems themselves.
>
> That being said, it's entirely possible that a GlusterFS-based solution
> would solve your issue as well, and be easier to set up. Or even
> something NFS-based, backed by a single-Primary DRBD config for HA. You
> didn't give many details of your setup, however, so it's impossible to
> tell for certain.
>
> Hope this helps.
> Florian
>
> --
> Need help with High Availability?
> http://www.hastexo.com/now
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>