<div id="yiv8274253526"><div id="yMail_cursorElementTracker_1628509152520">I've setup something similar with VIP that is everywhere using the globally-unique=true (where cluster controls which node to be passive and which active). This allows that the VIP is everywhere but only 1 node answers the requests , while the WEB server was running everywhere with config and data on a shared FS.</div><div id="yMail_cursorElementTracker_1628509217505"><br></div><div id="yMail_cursorElementTracker_1628509217680">Sadly, I can't find my notes right now.</div><div id="yMail_cursorElementTracker_1628509228432"><br></div><div id="yMail_cursorElementTracker_1628509228612">Best Regards,</div><div id="yMail_cursorElementTracker_1628509232346">Strahil Nikolov<br clear="none"> <br clear="none"> <blockquote style="margin:0 0 20px 0;"> <div class="yiv8274253526yqt7340411764" id="yiv8274253526yqt44772"><div style="font-family:Roboto, sans-serif;color:#6D00F6;"> <div>On Mon, Aug 9, 2021 at 13:43, Andreas Janning</div><div><andreas.janning@qaware.de> wrote:</div> </div> <div style="padding:10px 0 0 20px;margin:10px 0 0 0;border-left:1px solid #6D00F6;"> <div id="yiv8274253526"><div dir="ltr"><div>Hi all,</div><div><br clear="none"></div><div>we recently experienced an outage in our pacemaker cluster and I would like to understand how we can configure the cluster to avoid this problem in the future.</div><div><br clear="none"></div><div>First our basic setup:</div><div>- CentOS7</div><div>- Pacemaker 1.1.23</div><div>- Corosync 2.4.5</div><div>- Resource-Agents 4.1.1</div><div><br clear="none"></div><div>Our cluster is composed of multiple active/passive nodes. Each software component runs on two nodes simultaneously and all traffic is routed to the active node via Virtual IP.</div><div>If the active node fails, the passive node grabs the Virtual IP and immediately takes over all work of the failed node. Since the software is already up and running on the passive node, there should be virtually no downtime.</div><div>We have tried achieved this in pacemaker by configuring clone-sets for each software component.</div><div><br clear="none"></div><div>Now the problem:</div><div>When a software component fails on the active node, the Virtual-IP is correctly grabbed by the passive node. BUT the software component is also immediately restarted on the passive Node.</div><div>That unfortunately defeats the purpose of the whole setup, since we now have a downtime until the software component is restarted on the passive node and the restart might even fail and lead to a complete outage.</div><div>After some investigating I now understand that the cloned resource is restarted on all nodes after a monitoring failure because the default "on-fail" of "monitor" is restart. But that is not what I want.</div><div><br clear="none"></div><div>I have created a minimal setup that reproduces the problem:</div><div><br clear="none"></div><blockquote class="yiv8274253526gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex;"><div><configuration><br clear="none"> <crm_config><br clear="none"> <cluster_property_set id="cib-bootstrap-options"><br clear="none"> <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/><br clear="none"> <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.23-1.el7_9.1-9acf116022"/><br clear="none"> <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/><br clear="none"> <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="pacemaker-test"/><br clear="none"> <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/><br clear="none"> <nvpair id="cib-bootstrap-options-symmetric-cluster" name="symmetric-cluster" value="false"/><br clear="none"> </cluster_property_set><br clear="none"> </crm_config><br clear="none"> <nodes><br clear="none"> <node id="1" uname="active-node"/><br clear="none"> <node id="2" uname="passive-node"/><br clear="none"> </nodes><br clear="none"> <resources><br clear="none"> <primitive class="ocf" id="vip" provider="heartbeat" type="IPaddr2"><br clear="none"> <instance_attributes id="vip-instance_attributes"><br clear="none"> <nvpair id="vip-instance_attributes-ip" name="ip" value="{{infrastructure.virtual_ip}}"/><br clear="none"> </instance_attributes><br clear="none"> <operations><br clear="none"> <op id="psa-vip-monitor-interval-10s" interval="10s" name="monitor" timeout="20s"/><br clear="none"> <op id="psa-vip-start-interval-0s" interval="0s" name="start" timeout="20s"/><br clear="none"> <op id="psa-vip-stop-interval-0s" interval="0s" name="stop" timeout="20s"/><br clear="none"> </operations><br clear="none"> </primitive><br clear="none"> <clone id="apache-clone"><br clear="none"> <primitive class="ocf" id="apache" provider="heartbeat" type="apache"><br clear="none"> <instance_attributes id="apache-instance_attributes"><br clear="none"> <nvpair id="apache-instance_attributes-port" name="port" value="80"/><br clear="none"> <nvpair id="apache-instance_attributes-statusurl" name="statusurl" value="<a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="http://localhost/server-status">http://localhost/server-status</a>"/><br clear="none"> </instance_attributes><br clear="none"> <operations><br clear="none"> <op id="apache-monitor-interval-10s" interval="10s" name="monitor" timeout="20s"/><br clear="none"> <op id="apache-start-interval-0s" interval="0s" name="start" timeout="40s"/><br clear="none"> <op id="apache-stop-interval-0s" interval="0s" name="stop" timeout="60s"/><br clear="none"> </operations><br clear="none"> </primitive><br clear="none"> <meta_attributes id="apache-meta_attributes"><br clear="none"> <nvpair id="apache-clone-meta_attributes-clone-max" name="clone-max" value="2"/><br clear="none"> <nvpair id="apache-clone-meta_attributes-clone-node-max" name="clone-node-max" value="1"/><br clear="none"> </meta_attributes><br clear="none"> </clone><br clear="none"> </resources><br clear="none"> <constraints><br clear="none"> <rsc_location id="location-apache-clone-active-node-100" node="active-node" rsc="apache-clone" score="100" resource-discovery="exclusive"/><br clear="none"> <rsc_location id="location-apache-clone-passive-node-0" node="passive-node" rsc="apache-clone" score="0" resource-discovery="exclusive"/><br clear="none"> <rsc_location id="location-vip-clone-active-node-100" node="active-node" rsc="vip" score="100" resource-discovery="exclusive"/><br clear="none"> <rsc_location id="location-vip-clone-passive-node-0" node="passive-node" rsc="vip" score="0" resource-discovery="exclusive"/><br clear="none"> <rsc_colocation id="colocation-vip-apache-clone-INFINITY" rsc="vip" score="INFINITY" with-rsc="apache-clone"/><br clear="none"> </constraints><br clear="none"> <rsc_defaults><br clear="none"> <meta_attributes id="rsc_defaults-options"><br clear="none"> <nvpair id="rsc_defaults-options-resource-stickiness" name="resource-stickiness" value="50"/><br clear="none"> </meta_attributes><br clear="none"> </rsc_defaults><br clear="none"></configuration><br clear="none"></div></blockquote><div><br clear="none"></div><div><br clear="none"></div><div>When this configuration is started, httpd will be running on active-node and passive-node. The VIP runs only on active-node.</div><div>When crashing the httpd on active-node (with killall httpd), passive-node immediately grabs the VIP and restarts its own httpd.</div><div><br clear="none"></div><div>How can I change this configuration so that when the resource fails on active-node:</div><div>- passive-node immediately grabs the VIP (as it does now).<br clear="none"></div><div>- active-node tries to restart the failed resource, giving up after x attempts.</div><div>- passive-node does NOT restart the resource.</div><div><br clear="none"></div><div>Regards</div><div><br clear="none"></div><div>Andreas Janning<br clear="none"></div><br clear="none"><div><div><br clear="none"></div><div><br clear="none">-- <br clear="none"><div class="yiv8274253526gmail_signature" dir="ltr"><div dir="ltr"><div>
<hr align="center" style="min-height:1px;background-color:#ccc;border:none;" width="100%" size="1">
<div style="font-size:8pt;font-family:sans-serif;">
<p style="margin:0pt 1pt 0pt;">
<b>Beste Arbeitgeber ITK 2021 - 1. Platz für QAware</b><br clear="none">
ausgezeichnet von
<a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://www.qaware.de/news/platz-1-bei-beste-arbeitgeber-in-der-itk-2021/">Great Place to Work</a>
</p>
<hr align="center" style="min-height:1px;background-color:#ccc;border:none;" width="100%" size="1">
<p style="margin:0pt 1pt 8pt;">
Andreas Janning<br clear="none">
Expert Software Engineer<br clear="none">
</p>
<p style="margin:0pt 1pt 0pt;">
QAware GmbH<br clear="none">
Aschauer Straße 32<br clear="none">
81549 München, Germany<br clear="none">
Mobil <span>+49 160 1492426</span><br clear="none">
<a rel="nofollow noopener noreferrer" shape="rect" ymailto="mailto:andreas.janning@qaware.de" target="_blank" href="mailto:andreas.janning@qaware.de">andreas.janning@qaware.de</a><br clear="none">
<a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://www.qaware.de">www.qaware.de</a><br clear="none">
</p>
</div>
<hr align="center" style="min-height:1px;background-color:#ccc;border:none;" width="100%" size="1">
<div style="font-size:7pt;font-family:sans-serif;">
<p style="margin:0pt 1pt 14pt;">
Geschäftsführer: Christian Kamm, Johannes Weigend, Dr. Josef Adersberger<br clear="none">
Registergericht: München<br clear="none">
Handelsregisternummer: HRB 163761<br clear="none">
</p>
</div>
</div></div></div></div></div></div>
</div>_______________________________________________<br clear="none">Manage your subscription:<br clear="none"><a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://lists.clusterlabs.org/mailman/listinfo/users">https://lists.clusterlabs.org/mailman/listinfo/users</a><br clear="none"><br clear="none">ClusterLabs home: <a rel="nofollow noopener noreferrer" shape="rect" target="_blank" href="https://www.clusterlabs.org/">https://www.clusterlabs.org/</a><br clear="none"> </div></div> </blockquote></div></div>