<div dir="ltr">Just to tie this off.<div><br></div><div style>The cluster has been stable since reinstalling VMware Tools on both nodes, so the issue appears to have had nothing to do with Corosync or Pacemaker.</div><div style><br></div><div style>Regards,</div>
<div style>Darren</div></div><div class="gmail_extra"><br><br><div class="gmail_quote">On 7 February 2013 11:03, Darren Mansell <span dir="ltr"><<a href="mailto:darren.mansell@gmail.com" target="_blank">darren.mansell@gmail.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="ltr">Hi all.<div><br></div><div>I've installed a two-node Corosync/Pacemaker cluster in a VMware ESX environment. The install uses Debian Squeeze (6.0) with packages from squeeze-backports.</div>
<div><br>
</div><div>These are package versions in use:</div><div><br></div><div>corosync 1.4.2-1~bpo60+1<br></div><div>pacemaker 1.1.7-1~bpo60+1<br></div>
<div>( plus required packages and libraries )</div><div>( I had to use backports to get the failure-timeout feature )</div><div><br></div><div>I use these two nodes to run ldirectord and a VIP to load-balance an MS Exchange cluster, and for the most part it works very well. However, about twice a day quorum is lost, the cluster goes split-brain, and then it recovers after about 30 seconds.</div>
<div><br></div><div>I've already had to disable STONITH because of this issue, as it was causing long shoot-outs and slow recovery. Now, with failure-timeouts and no STONITH, the cluster recovers fairly quickly.</div>
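<div><br></div><div>( For anyone following along: a rough sketch of how those settings can be applied with the crm shell. The values match the config below; the "resource meta" form is crmsh syntax, so check it against your version before relying on it: )</div><div><br></div><div>crm configure property stonith-enabled=false</div><div>crm resource meta ldirectord set failure-timeout 120</div><div>crm resource meta ldirectord set migration-threshold 2</div>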
<div><br></div><div>I've attached a hb_report from both nodes and put the cluster config below. Any ideas or thoughts would be most welcome.</div><div><br></div><div>Many thanks.</div><div>
Darren</div><div><br></div><div>crm configure show:</div><div><div>node exlb01</div><div>node exlb02</div><div>primitive VIP1 ocf:heartbeat:IPaddr2 \</div><div> params lvs_support="true" ip="10.8.35.55" cidr_netmask="24" broadcast="10.8.35.255" \</div>
<div> op monitor interval="60" timeout="60" \</div><div> meta migration-threshold="2" failure-timeout="120"</div><div>primitive ldirectord ocf:heartbeat:ldirectord \</div>
<div> params configfile="/etc/ha.d/ldirectord.cf" \</div><div> op monitor interval="60" timeout="60" \</div><div> meta migration-threshold="2" target-role="Started" failure-timeout="120"</div>
<div>group lb VIP1 ldirectord \</div><div> meta target-role="Started"</div><div>location l-lb-100 lb 100: exlb01</div><div>property $id="cib-bootstrap-options" \</div><div> dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \</div>
<div> cluster-infrastructure="openais" \</div><div> expected-quorum-votes="2" \</div><div> no-quorum-policy="ignore" \</div><div> stonith-enabled="false" \</div>
<div> last-lrm-refresh="1355878292" \</div><div> cluster-recheck-interval="60s"</div><div><br></div><div>crm status:</div><div><div>============</div><div>Last updated: Thu Feb 7 11:01:06 2013</div>
<div>Last change: Wed Dec 19 01:32:40 2012</div><div>Stack: openais</div><div>Current DC: exlb02 - partition with quorum</div><div>Version: 1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff</div><div>2 Nodes configured, 2 expected votes</div>
<div>2 Resources configured.</div><div>============</div><div><br></div><div>Online: [ exlb02 exlb01 ]</div><div><br></div><div> Resource Group: lb</div><div> VIP1 (ocf::heartbeat:IPaddr2): Started exlb01</div>
<div> ldirectord (ocf::heartbeat:ldirectord): Started exlb01</div></div></div></div>
</blockquote></div><br></div>