[Pacemaker] Resource Failover in 2 Node Cluster

Darren.Mansell at opengi.co.uk Darren.Mansell at opengi.co.uk
Wed Aug 19 10:33:51 UTC 2009


I've now re-installed the SLES 11 HAE DRBD kernel module and user-space
tools and set the cluster to use the heartbeat RA (sketched below), and
it now fails over as expected. Does the Linbit-provided RA work
differently? Is the following log excerpt anything to do with it?
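
For reference, the heartbeat-RA version of the DRBD primitive I'm
running now is essentially the sketch below; it keeps the same
drbd_resource and monitor operations as the Linbit version, and the
rest of the config is unchanged:

primitive DRBD-Disk ocf:heartbeat:drbd \
        params drbd_resource="gihub_disk" \
        op monitor interval="59s" role="Master" timeout="30s" \
        op monitor interval="60s" role="Slave" timeout="30s"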

 

Aug 19 11:08:37 gihub2 pengine: [4837]: notice: clone_print:  Master/Slave Set: MS-DRBD-Disk
Aug 19 11:08:37 gihub2 crmd: [4838]: info: unpack_graph: Unpacked transition 126: 29 actions in 29 synapses
Aug 19 11:08:37 gihub2 pengine: [4837]: notice: print_list:      Masters: [ gihub1 ]
Aug 19 11:08:37 gihub2 crmd: [4838]: info: do_te_invoke: Processing graph 126 (ref=pe_calc-dc-1250676517-500) derived from /var/lib/pengine/pe-warn-1356.bz2
Aug 19 11:08:37 gihub2 pengine: [4837]: notice: print_list:      Slaves: [ gihub2 ]
Aug 19 11:08:37 gihub2 crmd: [4838]: info: te_pseudo_action: Pseudo action 36 fired and confirmed
Aug 19 11:08:37 gihub2 pengine: [4837]: notice: group_print: Resource Group: Resource-Group
Aug 19 11:08:37 gihub2 crmd: [4838]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
Aug 19 11:08:37 gihub2 pengine: [4837]: notice: native_print:     FileSystem    (ocf::heartbeat:Filesystem):    Started gihub1
Aug 19 11:08:37 gihub2 crmd: [4838]: info: te_rsc_command: Initiating action 43: stop Virtual-IP_stop_0 on gihub1
Aug 19 11:08:37 gihub2 pengine: [4837]: notice: native_print:     ProFTPD       (lsb:proftpd):  Started gihub1
Aug 19 11:08:37 gihub2 crmd: [4838]: info: te_rsc_command: Initiating action 62: notify DRBD-Disk:0_pre_notify_demote_0 on gihub2 (local)
Aug 19 11:08:37 gihub2 pengine: [4837]: notice: native_print:     Tomcat        (lsb:tomcat):   Started gihub1
Aug 19 11:08:37 gihub2 crmd: [4838]: info: do_lrm_rsc_op: Performing key=62:126:0:76a53bb6-ce93-4f38-81b5-f3af04223710 op=DRBD-Disk:0_notify_0 )
Aug 19 11:08:37 gihub2 pengine: [4837]: notice: native_print:     Virtual-IP    (ocf::heartbeat:IPaddr2):       Started gihub1
Aug 19 11:08:37 gihub2 crmd: [4838]: info: te_rsc_command: Initiating action 65: notify DRBD-Disk:1_pre_notify_demote_0 on gihub1
Aug 19 11:08:37 gihub2 pengine: [4837]: WARN: native_color: Resource DRBD-Disk:1 cannot run anywhere
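
If anyone wants to replay the transition that produced the "cannot run
anywhere" warning, the pengine input file is named in the log above.
Something like the following should print the allocation scores
(decompressing first, in case ptest doesn't read the .bz2 directly):

bzcat /var/lib/pengine/pe-warn-1356.bz2 > /tmp/pe-warn-1356.xml
ptest -x /tmp/pe-warn-1356.xml -s -VV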

 

From: Darren.Mansell at opengi.co.uk
Sent: 19 August 2009 10:20
To: pacemaker at oss.clusterlabs.org
Subject: [Pacemaker] Resource Failover in 2 Node Cluster

 

Hello everyone. I'm a little confused about how this setup should work.
This is my config:

 

node gihub1
node gihub2
primitive stonith-SSH stonith:ssh \
        params hostlist="gihub1 gihub2"
primitive DRBD-Disk ocf:linbit:drbd \
        params drbd_resource="gihub_disk" \
        op monitor interval="59s" role="Master" timeout="30s" \
        op monitor interval="60s" role="Slave" timeout="30s"
primitive FileSystem ocf:heartbeat:Filesystem \
        params fstype="ext3" directory="/www" device="/dev/drbd0" \
        op monitor interval="30s" timeout="15s" \
        meta migration-threshold="10"
primitive ProFTPD lsb:proftpd \
        op monitor interval="20s" timeout="10s" \
        meta migration-threshold="10"
primitive Tomcat lsb:tomcat \
        op monitor interval="20s" timeout="10s" \
        meta migration-threshold="10"
primitive Virtual-IP ocf:heartbeat:IPaddr2 \
        params ip="2.21.4.45" broadcast="2.255.255.255" nic="eth0" cidr_netmask="8" \
        op monitor interval="30s" timeout="15s" \
        meta migration-threshold="10"
group Resource-Group FileSystem ProFTPD Tomcat Virtual-IP
ms MS-DRBD-Disk DRBD-Disk \
        meta clone-max="2" notify="true" globally-unique="false"
clone STONITH-clone stonith-SSH
location DRBD-Master-Prefers-GIHub1 MS-DRBD-Disk \
        rule $id="drbd_loc_rule" $role="master" 100: #uname eq gihub1
colocation Resource-Group-With-DRBD-Master inf: Resource-Group MS-DRBD-Disk:Master
order Start-DRBD-Before-Filesystem inf: MS-DRBD-Disk:promote FileSystem:start
property $id="cib-bootstrap-options" \
        dc-version="1.0.3-0080ec086ae9c20ad5c4c3562000c0ad68374f0a" \
        expected-quorum-votes="2" \
        no-quorum-policy="ignore" \
        start-failure-is-fatal="false" \
        stonith-action="poweroff" \
        last-lrm-refresh="1250615730" \
        stonith-enabled="false"
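
Note that the config above was captured with stonith-enabled="false",
even though STONITH-clone is defined; when I tested fencing I turned it
back on with the usual property command:

crm configure property stonith-enabled="true"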

I had assumed (and I'm sure it worked like this before) that if I reboot
gihub1, all of the resources should start on gihub2 instead. I have
tried with stonith-enabled=true, which doesn't seem to change anything.
Can anyone see from my config or the attached messages log what is
going on? I've compiled DRBD 8.3.2 and I'm using the new Linbit DRBD
RA. I'll try using the heartbeat RA in the meantime.
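
A less disruptive way to test the same failover than a full reboot is
to put the node into standby; these are standard crm shell commands,
nothing specific to this config:

crm node standby gihub1
(watch the resources move in crm_mon, then bring the node back with)
crm node online gihub1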

 

Many thanks

Darren Mansell
