Sir,

I have set up a two-node cluster with Heartbeat 2.99 and Pacemaker 1.0.5. I am using Ubuntu 9.10; both packages are installed from the Ubuntu Karmic repository.

My packages are:

heartbeat 2.99.2+sles11r9-5ubuntu1
heartbeat-common 2.99.2+sles11r9-5ubuntu1
heartbeat-common-dev 2.99.2+sles11r9-5ubuntu1
heartbeat-dev 2.99.2+sles11r9-5ubuntu1
libheartbeat2 2.99.2+sles11r9-5ubuntu1
libheartbeat2-dev 2.99.2+sles11r9-5ubuntu1
pacemaker-heartbeat 1.0.5+hg20090813-0ubuntu4
pacemaker-heartbeat-dev 1.0.5+hg20090813-0ubuntu4

My ha.cf file and crm configuration are attached to this mail.
I am building a PostgreSQL database cluster with Slony replication. eth1 is my heartbeat link; a crossover cable connects the two servers on eth1. eth0 is my external network, where the cluster IP gets assigned.
server1 --> hostname node1
    eth1: 192.168.10.129
    eth0: 192.168.1.1

server2 --> hostname node2
    eth1: 192.168.10.130
    eth0: 192.168.1.2

Now when I pull out the eth1 cable, I need a failover to the other node. For that I have configured pingd as follows, but it is not working. My resources do not start at all when I give the rule as
rule -inf: not_defined pingd or pingd lte 0

I tried changing the -inf: to inf:; then the resources started, but resource failover does not take place when I pull out the eth1 cable.
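For comparison, here is a minimal sketch of the pingd clone plus a -inf: location rule of the kind described above. The multiplier and dampen values are illustrative assumptions, and note that the pingd agent's parameter is usually spelled host_list; it is worth verifying the exact parameter names with `crm ra info ocf:pacemaker:pingd`:

```
primitive pingd ocf:pacemaker:pingd \
        params name="pingd" host_list="192.168.10.1 192.168.10.75" multiplier="100" dampen="5s" \
        op monitor interval="15s" timeout="5s"
clone pingclone pingd \
        meta globally-unique="false"
location vir-ip-with-pingd vir-ip \
        rule $id="vir-ip-with-pingd-rule" -inf: not_defined pingd or pingd lte 0
```

With -inf: the rule only bans vir-ip from nodes where the pingd attribute is missing or zero, so the resource should still start wherever connectivity is intact. If the pingd attribute is never published at all (for example, because the agent ignores an unrecognised parameter name), then -inf: bans every node, while a plain inf: on the same expression prefers every node equally, which could explain why the resources start but never move.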
Please check my configuration and kindly point out what I am missing. Please note that I am using a default resource-stickiness of INFINITY, which is compulsory for Slony replication.

My ha.cf file
------------------------------------------------------------------

autojoin none
keepalive 2
deadtime 15
warntime 10
initdead 64
bcast eth1
auto_failback off
node node1
node node2
crm respawn
use_logd yes
____________________________________________

My crm configuration

node $id="3952b93e-786c-47d4-8c2f-a882e3d3d105" node2 \
        attributes standby="off"
node $id="ac87f697-5b44-4720-a8af-12a6f2295930" node1 \
        attributes standby="off"
primitive pgsql lsb:postgresql-8.4 \
        meta target-role="Started" resource-stickiness="inherited" \
        op monitor interval="15s" timeout="25s" on-fail="standby"
primitive pingd ocf:pacemaker:pingd \
        params name="pingd" hostlist="192.168.10.1 192.168.10.75" \
        op monitor interval="15s" timeout="5s"
primitive slony-fail lsb:slony_failover \
        meta target-role="Started"
primitive slony-fail2 lsb:slony_failover2 \
        meta target-role="Started"
primitive vir-ip ocf:heartbeat:IPaddr2 \
        params ip="192.168.10.10" nic="eth0" cidr_netmask="24" broadcast="192.168.10.255" \
        op monitor interval="15s" timeout="25s" on-fail="standby" \
        meta target-role="Started"
clone pgclone pgsql \
        meta notify="true" globally-unique="false" interleave="true" target-role="Started"
clone pingclone pingd \
        meta globally-unique="false" clone-max="2" clone-node-max="1"
location vir-ip-with-pingd vir-ip \
        rule $id="vir-ip-with-pingd-rule" inf: not_defined pingd or pingd lte 0
colocation ip-with-slony inf: slony-fail vir-ip
colocation ip-with-slony2 inf: slony-fail2 vir-ip
order ip-b4-slony2 inf: vir-ip slony-fail2
order slony-b4-ip inf: vir-ip slony-fail
property $id="cib-bootstrap-options" \
        dc-version="1.0.5-3840e6b5a305ccb803d29b468556739e75532d56" \
        cluster-infrastructure="Heartbeat" \
        no-quorum-policy="ignore" \
        stonith-enabled="false" \
        last-lrm-refresh="1266851027"
rsc_defaults $id="rsc-options" \
        resource-stickiness="INFINITY"

_____________________________________

My crm status:
__________________________

crm(live)# status
============
Last updated: Mon Feb 22 23:15:56 2010
Stack: Heartbeat
Current DC: node2 (3952b93e-786c-47d4-8c2f-a882e3d3d105) - partition with quorum
Version: 1.0.5-3840e6b5a305ccb803d29b468556739e75532d56
2 Nodes configured, unknown expected votes
5 Resources configured.
============

Online: [ node2 node1 ]

Clone Set: pgclone
        Started: [ node1 node2 ]
Clone Set: pingclone
        Started: [ node2 node1 ]
============================

Please help me out.

--
Regards,

Jayakrishnan. L

Visit: www.jayakrishnan.bravehost.com