<div dir="ltr"><div><div><div><div><div><div><div><div><div><div>Hi all,<br><br></div>Can someone please help me with the setup below?<br><br></div>I have a 2-node setup with Heartbeat + Pacemaker.<br></div>My app is already running on both nodes before Heartbeat and Pacemaker are started.<br>
<br>Later, I configured the CRM as follows:<br><br># crm configure primitive havip ocf:IPaddr2 params ip=192.168.101.205 cidr_netmask=32 nic=eth1 op monitor interval=30s<br></div># crm configure primitive oc_proxyapp lsb:proxyapp meta allow-migrate="true" migration-threshold="3" failure-timeout="30s" op monitor interval="5s"<br>
<br># crm configure colocation oc-havip INFINITY: havip oc_proxyapp<br><br></div>My intention is to monitor the already-running instances, with the VIP attached to only one node. If the app fails three times on the current node, the VIP should automatically move to the other node, but the app itself should not be restarted.<br>
<br></div>With the above config, the app gets stopped on the second node and restarted on the first node.<br></div>In the logs, I could see: WARN: native_create_actions: See <a href="http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active">http://clusterlabs.org/wiki/FAQ#Resource_is_Too_Active</a> for more information.<br>
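<br>For reference, I checked the resource state and failure count with the commands below (node1 here stands in for one of my two nodes):<br><br># crm_mon -1<br># crm resource failcount oc_proxyapp show node1<br>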
<br></div>I also tried is-managed="false", but in that case, as I understand it, the app would never be restarted at all.<br><br></div><div>So please let me know: how can I monitor an already-running instance on both nodes with a migration threshold?<br>
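<br>In case it clarifies what I am after, I imagine a clone-based variant like the one below might be needed, since the app runs on both nodes (untested sketch reusing the resource names above; cl_proxyapp is just a name I made up):<br><br># crm configure clone cl_proxyapp oc_proxyapp<br># crm configure colocation oc-havip INFINITY: havip cl_proxyapp<br><br>But I am not sure this is correct, or whether the clone would leave the already-running instances untouched.<br>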
</div><div><br></div>Thanks<br></div>Eswar<br><div><div><div><div><div><br></div></div></div></div></div></div>