Simon,

I'm new to this, so if this doesn't help, don't despair - the more experienced members will be along shortly. :-)

Could it be that you need "stickiness"? I think that's the term for the concept you are describing.

Also, if that's a two-node cluster, have you defined the cluster property no-quorum-policy="ignore"?
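In case it helps, this is roughly how I would set both from the crm shell - untested against your configuration, and the stickiness value of 100 is only an example you would want to tune against your location scores:

# two-node cluster: keep managing resources even when quorum is lost
crm configure property no-quorum-policy="ignore"

# default stickiness for all resources; note that if this ends up higher than
# your location scores (1000 in your config), resources will stay where they
# are after a failover instead of moving back
crm configure rsc_defaults resource-stickiness="100"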
Best regards,
Mike

On Mon, Dec 13, 2010 at 7:27 AM, Simon Jansen <simon.jansen1@googlemail.com> wrote:
Hi,

I'm trying to set up location constraints for my cluster, but I can't get them to work the way I want.
The constraints should implement the following behaviour:

- Normal operation
  msDRBD0 and resIP0 start on node1, msDRBD1 and resIP1 start on node2.

- Loss of network connection
  When the connection to the public network is lost (tested with a pingd resource), the corresponding DRBD resource should fail over to the other node. When the connection comes back, the cluster should return to normal operation.
The pingd resource is configured as follows:

primitive resPing ocf:pacemaker:pingd \
    params host_list="node1 node2 standard-gateway" dampen="5s" interval="2" multiplier="10000" \
    op start timeout="90" op stop timeout="120"

where node1, node2 and standard-gateway stand for the corresponding IP addresses.

The location constraints that bind the DRBD partitions to their nodes are defined as follows:
location locDRBD0Node1 msDRBD0 \
    rule $id="locDRBD0Node1-rule" $role="Master" 1000: #uname eq node1
location locDRBD1Node2 msDRBD1 \
    rule $id="locDRBD1Node2-rule" $role="Master" 1000: #uname eq node2
location locIP0Node1 resIP0 \
    rule $id="locIP0Node1-rule" 1000: #uname eq node1
location locIP1Node2 resIP1 \
    rule $id="locIP1Node2-rule" 1000: #uname eq node2

I tried the following location constraint:
location locDRBD0Ping msDRBD0 \
    rule $role="Master" -inf: not_defined pingd or pingd lte 10000

But with this constraint the cluster does not go into normal operation again.

Maybe someone has implemented a similar configuration.
Thanks for any answers.

--

Regards,

Simon Jansen

---------------------------
Simon Jansen
64291 Darmstadt
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
--
Mike Diehn
Senior Systems Administrator
ANSYS, Inc - Lebanon, NH Office
mike.diehn@ansys.com, (603) 727-5492