<div dir="ltr"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex">1. install the resource related packages on node3 even though you never want<br>them to run there. This will allow the resource-agents to verify the resource<br>is in fact inactive.</blockquote><div><br></div><div>Thanks, your advise helped: I installed all the services at node3 as well (including DRBD, but without it configs) and stopped+disabled them. Then I added the following line to my configuration:</div><div><br></div><div>location loc_drbd drbd rule -inf: #uname eq node3<br></div><div><br></div><div>So node3 is never a target for DRBD, and this helped: "crm nodr standby node1" doesn't tries to use node3 anymore.</div><div><br></div><div>But I have another (related) issue. If some node (e.g. node1) becomes isolated from other 2 nodes, how to force it to shutdown its services? I cannot use IPMB-based fencing/stonith, because there are no reliable connections between nodes at all (the nodes are in geo-distributed datacenters), and IPMI call to shutdown a node from another node is impossible.</div><div><br></div><div>E.g. initially I have the following:</div><div><br></div><div><b># crm status</b></div><div><div>Online: [ node1 node2 node3 ]</div><div>Master/Slave Set: ms_drbd [drbd]<br></div><div> Masters: [ node2 ]</div><div> Slaves: [ node1 ]</div><div>Resource Group: server</div><div> fs (ocf::heartbeat:Filesystem): Started node2</div><div> postgresql (lsb:postgresql): Started node2</div><div> bind9 (lsb:bind9): Started node2</div><div> nginx (lsb:nginx): Started node2</div></div><div><br></div><div>Then I turn on firewall on node2 to isolate it from the outside internet:</div><div><br></div><div><div><b>root@node2:~# iptables -A INPUT -p tcp --dport 22 -j ACCEPT</b></div><div><b>root@node2:~# </b><b>iptables -A OUTPUT -p tcp --sport 22 -j ACCEPT</b></div><div><b>root@node2:~# </b><b>iptables -A INPUT -i lo -j ACCEPT</b></div><div><b>root@node2:~# </b><b>iptables -A OUTPUT -o lo -j ACCEPT</b></div><div><b>root@node2:~# </b><b>iptables -P INPUT DROP; iptables -P OUTPUT DROP</b></div></div><div><br></div><div>Then I see that, although node2 clearly knows it's isolated (it doesn't see other 2 nodes and does not have quorum), it does not stop its services:</div><div><br></div><div><div><b>root@node2:~# crm status</b></div><div>Online: [ node2 ]<br></div><div>OFFLINE: [ node1 node3 ]</div><div>Master/Slave Set: ms_drbd [drbd]<br></div><div> Masters: [ node2 ]</div><div> Stopped: [ node1 node3 ]</div><div>Resource Group: server</div><div> fs<span class="" style="white-space:pre"> </span>(ocf::heartbeat:Filesystem):<span class="" style="white-space:pre"> </span>Started node2</div><div> postgresql<span class="" style="white-space:pre"> </span>(lsb:postgresql):<span class="" style="white-space:pre"> </span>Started node2</div><div> bind9<span class="" style="white-space:pre"> </span>(lsb:bind9):<span class="" style="white-space:pre"> </span>Started node2</div><div> nginx<span class="" style="white-space:pre"> </span>(lsb:nginx):<span class="" style="white-space:pre"> </span>Started node2</div></div><div><br></div><div>So is there a way to say pacemaker to shutdown nodes' services when they become isolated?</div><div><br></div><div><br></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Jan 12, 2015 at 8:25 PM, David Vossel <span dir="ltr"><<a href="mailto:dvossel@redhat.com" 
target="_blank">dvossel@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left-width:1px;border-left-color:rgb(204,204,204);border-left-style:solid;padding-left:1ex"><div class=""><div class="h5"><br>

----- Original Message -----
> Hello.
>
> I have a 3-node cluster managed by corosync+pacemaker+crm. Node1 and Node2 are
> a DRBD master-slave pair, and they also have a number of other services installed
> (postgresql, nginx, ...). Node3 is just a corosync node (for quorum); no
> DRBD/postgresql/... is installed on it, only corosync+pacemaker.
>
> But when I add resources to the cluster, some of them are somehow moved to
> node3 and fail there. Note that I have a "colocation" directive to
> place these resources on the DRBD master only and a "location" constraint with
> -inf for node3, but this does not help - why? How can I make pacemaker not run
> anything on node3?
>
> All the resources are added in a single transaction: "cat config.txt | crm -w
> -f- configure", where config.txt contains the directives and a "commit"
> statement at the end.
>
> Below are the "crm status" (error messages) and "crm configure show" outputs.
>
>
> root@node3:~# crm status
> Current DC: node2 (1017525950) - partition with quorum
> 3 Nodes configured
> 6 Resources configured
> Online: [ node1 node2 node3 ]
> Master/Slave Set: ms_drbd [drbd]
> Masters: [ node1 ]
> Slaves: [ node2 ]
> Resource Group: server
> fs (ocf::heartbeat:Filesystem): Started node1
> postgresql (lsb:postgresql): Started node3 FAILED
> bind9 (lsb:bind9): Started node3 FAILED
> nginx (lsb:nginx): Started node3 (unmanaged) FAILED
> Failed actions:
> drbd_monitor_0 (node=node3, call=744, rc=5, status=complete,
> last-rc-change=Mon Jan 12 11:16:43 2015, queued=2ms, exec=0ms): not
> installed
> postgresql_monitor_0 (node=node3, call=753, rc=1, status=complete,
> last-rc-change=Mon Jan 12 11:16:43 2015, queued=8ms, exec=0ms): unknown
> error
> bind9_monitor_0 (node=node3, call=757, rc=1, status=complete,
> last-rc-change=Mon Jan 12 11:16:43 2015, queued=11ms, exec=0ms): unknown
> error
> nginx_stop_0 (node=node3, call=767, rc=5, status=complete, last-rc-change=Mon
> Jan 12 11:16:44 2015, queued=1ms, exec=0ms): not installed

Here's what is going on. Even when you say "never run this resource on node3",
pacemaker is still going to probe for the resource on node3, just to verify
the resource isn't running.

The failures you are seeing ("monitor_0 failed") indicate that pacemaker was
unable to verify whether the resources are running on node3, because the related
packages for the resources are not installed there. Given pacemaker's default
behavior, I'd expect this.

You have two options.

1. install the resource related packages on node3 even though you never want
them to run there. This will allow the resource-agents to verify the resource
is in fact inactive.

2. If you are using the current master branch of pacemaker, there's a new
location constraint option called 'resource-discovery=always|never|exclusive'.
If you add the 'resource-discovery=never' option to your location constraint
that attempts to keep resources from node3, you'll avoid having pacemaker
perform the 'monitor_0' actions on node3 as well.
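
For illustration, an untested sketch of what that could look like with the loc_server constraint from the config quoted below (the crm shell syntax only works if your crmsh build already understands the resource-discovery keyword, and the option itself needs a pacemaker build that includes it):

location loc_server server resource-discovery=never rule -inf: #uname eq node3

If the shell does not accept it, the same setting should be expressible directly on the rsc_location element in the CIB XML, for example:

<rsc_location id="loc_server" rsc="server" node="node3" score="-INFINITY" resource-discovery="never"/>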

-- Vossel

>
> root@node3:~# crm configure show | cat
> node $id="1017525950" node2
> node $id="13071578" node3
> node $id="1760315215" node1
> primitive drbd ocf:linbit:drbd \
> params drbd_resource="vlv" \
> op start interval="0" timeout="240" \
> op stop interval="0" timeout="120"
> primitive fs ocf:heartbeat:Filesystem \
> params device="/dev/drbd0" directory="/var/lib/vlv.drbd/root"
> options="noatime,nodiratime" fstype="xfs" \
> op start interval="0" timeout="300" \
> op stop interval="0" timeout="300"
> primitive postgresql lsb:postgresql \
> op monitor interval="10" timeout="60" \
> op start interval="0" timeout="60" \
> op stop interval="0" timeout="60"
> primitive bind9 lsb:bind9 \
> op monitor interval="10" timeout="60" \
> op start interval="0" timeout="60" \
> op stop interval="0" timeout="60"
> primitive nginx lsb:nginx \
> op monitor interval="10" timeout="60" \
> op start interval="0" timeout="60" \
> op stop interval="0" timeout="60"
> group server fs postgresql bind9 nginx
> ms ms_drbd drbd meta master-max="1" master-node-max="1" clone-max="2"
> clone-node-max="1" notify="true"
> location loc_server server rule $id="loc_server-rule" -inf: #uname eq node3
> colocation col_server inf: server ms_drbd:Master
> order ord_server inf: ms_drbd:promote server:start
> property $id="cib-bootstrap-options" \
> stonith-enabled="false" \
> last-lrm-refresh="1421079189" \
> maintenance-mode="false"
>

_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org