<div dir="ltr"><div><div>Hi,<br><br></div>I have run into a strange situation and would like to ask whether it is a bug, a misconfiguration, or intended behavior.<br><br></div><div>A disconnected node does not detect that it is lost and takes no action to stop its resources, even though the resource agents report errors when monitored; only the number of processes (from some hung resource agents) keeps growing.<br>
<br></div><div>It seems that Pacemaker ignores timeouts when trying to update the CIB.<br><br></div><div>The situation is caused by corosync not detecting the loss of quorum, because the firewall blocks the loopback interface (lo). As far as I checked, this prevents corosync from detecting problems with the cluster, and once lo access is restored everything should recover; but shouldn't Pacemaker detect that the CIB service is unreachable and do something about it? Maybe there is a configuration parameter to control this?<br>
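<br>For completeness, restoring lo access in the ruleset below would amount to inserting the usual loopback-accept rules ahead of the DROP rules (a sketch of the standard rules, not taken from my actual configuration):<br>-A INPUT -i lo -j ACCEPT<br>-A OUTPUT -o lo -j ACCEPT<br>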
</div><div><br></div><div>Technical details:<br></div><div><br></div>1)<br>1.1) machine: Amazon Linux: Linux ... 3.10.35-43.137.amzn1.x86_64 #1 SMP Wed Apr 2 09:36:59 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux<br clear="all">
<div><div><div>1.2) Pacemaker: Pacemaker 1.1.9-1512.el6<br></div><div>1.3) corosync: Corosync Cluster Engine, version '2.3.2'<br><br></div><div><br></div><div>2) Net: basic: ethx, lo<br><br></div><div>3) iptables:<br>
*filter<br>:INPUT ACCEPT [0:0]<br>:FORWARD ACCEPT [0:0]<br>:OUTPUT ACCEPT [0:0]<br>-A INPUT -p tcp -m tcp -s <my_machine> --dport 22 -j ACCEPT<br>-A INPUT -j DROP<br>-A OUTPUT -p tcp -m tcp -d <my_machine> --sport 22 -j ACCEPT<br>
-A OUTPUT -j DROP<br>COMMIT<br><br></div><div>4) crm config:<br><crm_config><br> <cluster_property_set id="cib-bootstrap-options"><br> <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/><br>
<nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="stop"/><br> <nvpair id="cib-bootstrap-options-stop-orphan-resources" name="stop-orphan-resources" value="true"/><br>
<nvpair id="cib-bootstrap-options-start-failure-is-fatal" name="start-failure-is-fatal" value="true"/><br> <nvpair id="cib-bootstrap-options-expected-quorum-votes" name="expected-quorum-votes" value="3"/><br>
<nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.9-1512.el6-2a917dd"/><br> <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/><br>
</cluster_property_set><br></crm_config><br><br><br></div><div>5) Example resource config:<br> <primitive class="ocf" id="dbx_ready_nodes" provider="dbxcl" type="ready.ocf.sh"><br>
<instance_attributes id="dbx_ready_nodes-instance_attributes"><br> <nvpair id="dbx_ready_nodes-instance_attributes-dbxclrole" name="dbxclrole" value="''"/><br>
</instance_attributes><br> <operations><br> <op id="dbx_ready_nodes-start-timeout-1min-on-fail-stop" interval="0s" name="start" on-fail="stop" timeout="1min"/><br>
<op id="dbx_ready_nodes-stop-timeout-8min" interval="0s" name="stop" timeout="8min"/><br> <op id="dbx_ready_nodes-monitor-interval-83s" interval="83s" name="monitor" on-fail="stop" timeout="60s"/><br>
<op id="dbx_ready_nodes-validate-all-interval-29s" interval="29s" name="validate-all" on-fail="stop" timeout="60s"/><br> </operations><br> </primitive><br>
<br><br></div><div>6) Logs:<br>Below, the monitor action of the resource "dbx_ready_nodes" returns an error, but nothing happens; the resource is never asked to stop (even though it should be, given the on-fail="stop" setting above).<br><br>
May 02 20:04:13 [16191] ip-10-116-169-85 lrmd: debug: operation_finished: dbx_ready_nodes_monitor_83000:8669 - exited with rc=1<br>May 02 20:04:13 [16191] ip-10-116-169-85 lrmd: debug: log_finished: finished - rsc:dbx_ready_nodes action:monitor call_id:142 pid:8669 exit-code:1 exec-time:0ms queue-time:0ms<br>
May 02 20:04:13 [16154] ip-10-116-169-85 corosync debug [TOTEM ] sendmsg(mcast) failed (non-critical): Operation not permitted (1)<br>May 02 20:04:13 [16154] ip-10-116-169-85 corosync debug [TOTEM ] sendmsg(mcast) failed (non-critical): Operation not permitted (1)<br>
May 02 20:04:13 [16154] ip-10-116-169-85 corosync debug [TOTEM ] sendmsg(mcast) failed (non-critical): Operation not permitted (1)<br>May 02 20:04:13 [16154] ip-10-116-169-85 corosync debug [TOTEM ] sendmsg(mcast) failed (non-critical): Operation not permitted (1)<br>
May 02 20:04:13 [16154] ip-10-116-169-85 corosync debug [TOTEM ] sendmsg(mcast) failed (non-critical): Operation not permitted (1)<br>May 02 20:04:13 [16154] ip-10-116-169-85 corosync debug [TOTEM ] sendmsg(mcast) failed (non-critical): Operation not permitted (1)<br>
May 02 20:04:13 [16154] ip-10-116-169-85 corosync warning [MAIN ] Totem is unable to form a cluster because of an operating system or network fault. The most common cause of this message is that the local firewall is configured improperly.<br>
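<br>For reference, even in this state the local membership/quorum view can be queried directly on the node (assuming the standard corosync 2.x / Pacemaker tooling is installed):<br>corosync-quorumtool -s<br>crm_mon -1<br>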
<br></div><div><br></div><div>Thanks in advance<br><br></div><div>-- <br><div dir="ltr"><div>Best Regards,<br><br>Radoslaw Garbacz<br></div>XtremeData Incorporation<br></div>
</div></div></div></div>