[Pacemaker] Using Pacemaker/Corosync to manage 2 node SHARED-DISK Cluster

mark - pacemaker list m+pacemaker at nerdish.us
Fri Apr 8 19:48:57 EDT 2011


Hi Phil,

On Fri, Apr 8, 2011 at 11:13 AM, Phil Hunt <Phil.Hunt at orionhealth.com> wrote:
>
> Hi
>
> I have been playing with DRBD; that's cool.
>
> But I have 2 RHEL Linux VMs.  They each have a boot device (20 GB) and a shared iSCSI 200 GB volume.
>
> I've played with ucarp and have the commands to activate/mount and dismount the shared disk using vgchange/mount/umount, etc.
>
> But I decided to use pacemaker/heartbeat, since it is more robust.
> Got corosync running and the VIP running.
>
> But I do not see how to make the Pacemaker configuration mount or dismount a STANDARD EXT3
> shared disk.  I've seen tons of tutorials for DRBD and clustered filesystems, but none showing a simple mount of a disk as a resource on the node becoming master, or how to dismount it.
>
> Does pacemaker run a script if requested?  Or is the mount/dismount all hard-coded?
>
> I know I'm missing something simple here.  I built the following, but the colocation and order commands error out with a syntax error:
>
> node prodmessage1v
> node prodmessage2v
> primitive p_fs_data ocf:heartbeat:Filesystem \
>        params device="/dev/mapper/vg2-dbdata" directory="/data" fstype="ext3"
> primitive p_ip ocf:heartbeat:IPaddr2 \
>        params ip="10.64.114.80" cidr_netmask="32" \
>        op monitor interval="30s"
> primitive p_ping ocf:pacemaker:ping \
>        params name="p_ping" host_list="10.64.114.47 10.64.114.48 10.64.114.4" \
>        op monitor interval="15s" timeout="30s"
> primitive p_rhap lsb:rhapsody \
>        op monitor interval="60s" timeout="120s"
> group g_cluster_services p_ip p_fs_data p_rhap
> clone c_ping p_ping \
>        meta globally-unique="false"
> location loc_ping g_cluster_services \
>        rule $id="loc_ping-rule" -inf: not_defined p_ping or p_ping lte 0
> colocation colo_mnt_on_master inf: g_cluster_services
> order ord_mount_after_drbd inf: g_cluster_services:start
> property $id="cib-bootstrap-options" \
>        dc-version="1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3" \
>        cluster-infrastructure="openais" \
>        expected-quorum-votes="2" \
>        stonith-enabled="false" \
>        no-quorum-policy="ignore"
> rsc_defaults $id="rsc-options" \
>        resource-stickiness="100"
>


If I'm understanding your goal here, it's to have an iSCSI disk with
an LVM volume (or volumes) formatted with ext3 that moves around with
your virtual IP and the services it handles.  I'm doing the same thing
with three MySQL instances, and I'll throw my config on here, but I
think all you're missing is ocf:heartbeat:LVM (you only want vg2 active
on whichever node is going to mount the filesystem) and probably
ocf:heartbeat:iscsi.  I suppose you could let all nodes attach to the
iSCSI disk but only activate the volume group on one; I just like the
simplicity of one node, one LUN.
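
Very roughly, and completely untested, that might look something like
this in your config (the p_iscsi_data and p_lvm_data names are just
placeholders, you'd fill in your own portal and target IQN, and I've
put the iSCSI and LVM resources ahead of the filesystem in the group so
they start first and stop last):

primitive p_iscsi_data ocf:heartbeat:iscsi \
        params portal="<portal-ip>:3260" target="<target-iqn>" \
        op monitor interval="120s" timeout="30s"
primitive p_lvm_data ocf:heartbeat:LVM \
        params volgrpname="vg2" exclusive="yes" \
        op monitor interval="10" timeout="30"
group g_cluster_services p_iscsi_data p_lvm_data p_fs_data p_ip p_rhap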

I'm on the tail-end of beating on this configuration in a test lab
before we install with the real hardware, but it is proving to be very
robust and reliable so far:

============
Last updated: Fri Apr  8 18:33:43 2011
Stack: Heartbeat
Current DC: cn3.testlab.local (860664d4-6731-4af0-b596-fbeacd5ec300) -
partition with quorum
Version: 1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3
3 Nodes configured, unknown expected votes
4 Resources configured.
============
Online: [ cn2.testlab.local cn3.testlab.local cn1.testlab.local ]
 Resource Group: MySQL-history
     iscsi_mysql_history (ocf::heartbeat:iscsi): Started cn1.testlab.local
     volgrp_mysql_history (ocf::heartbeat:LVM): Started cn1.testlab.local
     fs_mysql_history (ocf::heartbeat:Filesystem): Started cn1.testlab.local
     ip_mysql_history (ocf::heartbeat:IPaddr2): Started cn1.testlab.local
     mysql_history (ocf::heartbeat:mysql): Started cn1.testlab.local
 Resource Group: MySQL-hsa
     iscsi_mysql_hsa (ocf::heartbeat:iscsi): Started cn2.testlab.local
     volgrp_mysql_hsa (ocf::heartbeat:LVM): Started cn2.testlab.local
     fs_mysql_hsa (ocf::heartbeat:Filesystem): Started cn2.testlab.local
     ip_mysql_hsa (ocf::heartbeat:IPaddr2): Started cn2.testlab.local
     mysql_hsa (ocf::heartbeat:mysql): Started cn2.testlab.local
 Resource Group: MySQL-livedata
     iscsi_mysql_livedata (ocf::heartbeat:iscsi): Started cn3.testlab.local
     volgrp_mysql_livedata (ocf::heartbeat:LVM): Started cn3.testlab.local
     fs_mysql_livedata (ocf::heartbeat:Filesystem): Started cn3.testlab.local
     ip_mysql_livedata (ocf::heartbeat:IPaddr2): Started cn3.testlab.local
     mysql_livedata (ocf::heartbeat:mysql): Started cn3.testlab.local
 stonith_sbd (stonith:external/sbd): Started cn3.testlab.local

You'll see in my config (stuck at the bottom because of length) that I
don't have any colocations; groups have seemed quite sufficient to
make sure everything that needs to be together stays together.  If
you're handing iSCSI over to Pacemaker too, you'll want to make certain
that you disable "iscsi" but leave "iscsid" enabled at boot.  If you
leave the iscsi init script enabled, it logs in to everything it knows
of from past sessions, which then prevents Pacemaker from starting the
iscsi resources, since they've already been started outside of its
control.
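
On RHEL that should just be something like this (assuming the stock
init scripts):

        chkconfig iscsi off
        chkconfig iscsid on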

My cluster is three nodes, and I do use location constraints to
influence the preferred primary and secondary hosts for each MySQL
instance, but the lumping together of disk/VG/FS/IP/service is handled
by groups, not colocations.
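
In your two-node case the same idea would just be a single preference
for whichever node you normally want to run things, something like this
(the 500 score is arbitrary; keep it smaller than your
resource-stickiness if you don't want resources failing back
automatically after a recovery):

location loc_prefer_node1 g_cluster_services \
        rule $id="loc_prefer_node1-rule" 500: #uname eq prodmessage1v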

Anyhow, hope all that helps you out somehow. If you notice something
crazy broken in my configuration that I've missed, let me know, eh?
: )

Regards,
Mark


node $id="0dab93e7-6abf-4cdf-959a-b7afb217cda0" cn2.testlab.local
node $id="814b426f-ab10-445c-9158-a1765d82395e" cn1.testlab.local
node $id="860664d4-6731-4af0-b596-fbeacd5ec300" cn3.testlab.local
primitive fs_mysql_history ocf:heartbeat:Filesystem \
	params device="/dev/vgHISTORY/mysql" directory="/mysql-HISTORY" fstype="ext3" \
	op monitor interval="20" timeout="40" on-fail="restart" depth="0" \
	op start interval="0" timeout="60" \
	op stop interval="0" timeout="60"
primitive fs_mysql_hsa ocf:heartbeat:Filesystem \
	params device="/dev/vgHSA/mysql" directory="/mysql-HSA" fstype="ext3" \
	op monitor interval="20" timeout="40" on-fail="restart" depth="0" \
	op start interval="0" timeout="60" \
	op stop interval="0" timeout="60"
primitive fs_mysql_livedata ocf:heartbeat:Filesystem \
	params device="/dev/vgLIVEDATA/mysql" directory="/mysql-LIVEDATA"
fstype="ext3" \
	op monitor interval="20" timeout="40" on-fail="restart" depth="0" \
	op start interval="0" timeout="60" \
	op stop interval="0" timeout="60"
primitive ip_mysql_history ocf:heartbeat:IPaddr2 \
	params ip="192.168.233.61" \
	op monitor interval="10s" timeout="20s"
primitive ip_mysql_hsa ocf:heartbeat:IPaddr2 \
	params ip="192.168.233.62" \
	op monitor interval="10s" timeout="20s"
primitive ip_mysql_livedata ocf:heartbeat:IPaddr2 \
	params ip="192.168.233.63" \
	op monitor interval="10s" timeout="20s"
primitive iscsi_mysql_history ocf:heartbeat:iscsi \
	params portal="192.168.233.25:3260"
target="iqn.2006-01.com.openfiler:tsn.historyd5650a6f7f57" \
	op start interval="0" timeout="120s" \
	op stop interval="0" timeout="120s" \
	op monitor interval="120s" timeout="30s"
primitive iscsi_mysql_hsa ocf:heartbeat:iscsi \
	params portal="192.168.233.25:3260"
target="iqn.2006-01.com.openfiler:tsn.hsa128f974bd2df" \
	op start interval="0" timeout="120s" \
	op stop interval="0" timeout="120s" \
	op monitor interval="120s" timeout="30s"
primitive iscsi_mysql_livedata ocf:heartbeat:iscsi \
	params portal="192.168.233.25:3260"
target="iqn.2006-01.com.openfiler:tsn.live3cf6e76c7e32" \
	op start interval="0" timeout="120s" \
	op stop interval="0" timeout="120s" \
	op monitor interval="120s" timeout="30s"
primitive mysql_history ocf:heartbeat:mysql \
	meta migration-threshold="3" failure-timeout="30s" is-managed="true" \
	params binary="/usr/bin/mysqld_safe" config="/mysql-HISTORY/my.cnf"
datadir="/mysql-HISTORY/DATA" pid="/mysql-HISTORY/history.pid"
log="/mysql-HISTORY/LOG/history.log"
socket="/mysql-HISTORY/mysql.sock" \
	op start interval="0" timeout="120" \
	op stop interval="0" timeout="120" \
	op monitor interval="30s" timeout="30s" on-fail="restart"
primitive mysql_hsa ocf:heartbeat:mysql \
	meta migration-threshold="3" failure-timeout="30s" is-managed="true" \
	params binary="/usr/bin/mysqld_safe" config="/mysql-HSA/my.cnf"
datadir="/mysql-HSA/DATA" pid="/mysql-HSA/hsa.pid"
log="/mysql-HSA/LOG/hsa.log" socket="/mysql-HSA/mysql.sock" \
	op start interval="0" timeout="120" \
	op stop interval="0" timeout="120" \
	op monitor interval="30s" timeout="30s" on-fail="restart"
primitive mysql_livedata ocf:heartbeat:mysql \
	meta migration-threshold="3" failure-timeout="30s" is-managed="true" \
	params binary="/usr/bin/mysqld_safe" config="/mysql-LIVEDATA/my.cnf"
datadir="/mysql-LIVEDATA/DATA" pid="/mysql-LIVEDATA/livedata.pid"
log="/mysql-LIVEDATA/LOG/livedata.log"
socket="/mysql-LIVEDATA/mysql.sock" \
	op start interval="0" timeout="120" \
	op stop interval="0" timeout="120" \
	op monitor interval="30s" timeout="30s" on-fail="restart"
primitive stonith_sbd stonith:external/sbd \
	params sbd_device="/dev/disk/by-path/ip-192.168.233.25:3260-iscsi-iqn.2006-01.com.openfiler:tsn.SBDbf8eb04afc20-lun-0" \
	op start interval="0" timeout="60s" \
	op stop interval="0" timeout="60s"
primitive volgrp_mysql_history ocf:heartbeat:LVM \
	params volgrpname="vgHISTORY" exclusive="yes" \
	op monitor interval="10" timeout="30" on-fail="restart" depth="0" \
	op start interval="0" timeout="30" \
	op stop interval="0" timeout="30"
primitive volgrp_mysql_hsa ocf:heartbeat:LVM \
	params volgrpname="vgHSA" exclusive="yes" \
	op monitor interval="10" timeout="30" on-fail="restart" depth="0" \
	op start interval="0" timeout="30" \
	op stop interval="0" timeout="30" \
	meta is-managed="true"
primitive volgrp_mysql_livedata ocf:heartbeat:LVM \
	params volgrpname="vgLIVEDATA" exclusive="yes" \
	op monitor interval="10" timeout="30" on-fail="restart" depth="0" \
	op start interval="0" timeout="30" \
	op stop interval="0" timeout="30"
group MySQL-history iscsi_mysql_history volgrp_mysql_history fs_mysql_history ip_mysql_history mysql_history \
	meta target-role="Started"
group MySQL-hsa iscsi_mysql_hsa volgrp_mysql_hsa fs_mysql_hsa ip_mysql_hsa mysql_hsa \
	meta target-role="Started"
group MySQL-livedata iscsi_mysql_livedata volgrp_mysql_livedata fs_mysql_livedata ip_mysql_livedata mysql_livedata \
	meta target-role="Started"
location Primary-history-host MySQL-history \
	rule $id="Primary-history-host-rule" 500: #uname eq cn1.testlab.local
location Primary-hsa-host MySQL-hsa \
	rule $id="Primary-hsa-host-rule" 500: #uname eq cn2.testlab.local
location Primary-livedata-host MySQL-livedata \
	rule $id="Primary-livedata-host-rule" 500: #uname eq cn3.testlab.local
location Secondary-history-host MySQL-history \
	rule $id="Secondary-history-host-rule" 250: #uname eq cn2.testlab.local
location Secondary-hsa-host MySQL-hsa \
	rule $id="Secondary-hsa-host-rule" 250: #uname eq cn3.testlab.local
location Secondary-livedata-host MySQL-livedata \
	rule $id="Secondary-livedata-host-rule" 250: #uname eq cn2.testlab.local
property $id="cib-bootstrap-options" \
	dc-version="1.0.10-da7075976b5ff0bee71074385f8fd02f296ec8a3" \
	cluster-infrastructure="Heartbeat" \
	stonith-enabled="true" \
	last-lrm-refresh="1301348147" \
	stonith-timeout="30s"
rsc_defaults $id="rsc-options" \
	resource-stickiness="1000"



