<html><head><meta http-equiv="Content-Type" content="text/html; charset=utf-8"></head><body style="word-wrap: break-word; -webkit-nbsp-mode: space; line-break: after-white-space;" class=""><div class="">While I am by no means a CRM/Pacemaker expert, I only see resource primitives and order constraints in your configuration. Wouldn’t you need location and/or colocation constraints, as well as stickiness settings, to prevent this from happening? What I think might be happening is that the cluster sees the new node, tries to move the resources there (but doesn’t find it a suitable target), and then moves them back where they came from, fast enough that you only see it as a restart.</div><div class=""><br class=""></div><div class="">If you run crm_resource -P, it should also restart all resources, but place them in their preferred spots. If they end up in the same place, you probably didn’t put any weighting in the config, or you have stickiness set to INFINITY.</div><br class=""><div class="">
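As a concrete sketch of what I mean (the exact pcs syntax varies between versions, and the score values and node names below are arbitrary placeholders, not recommendations):

```shell
# Give resources a default stickiness so a running clone instance
# prefers to stay where it is instead of being reshuffled when a
# node rejoins the cluster.
pcs resource defaults resource-stickiness=100

# Optionally express explicit node preferences with location scores;
# "node3" and "node4" here stand in for your actual cluster nodes.
pcs constraint location Mount1-clone prefers node3=200 node4=200
```

With a positive stickiness, moving a resource only pays off if the target node's score exceeds the current node's score plus the stickiness, so a rejoining node no longer triggers a move-and-move-back.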
Kind regards,<br class=""><br class="">John Keates<br class=""></div><div class=""><br class="webkit-block-placeholder"></div><div><blockquote type="cite" class=""><div class="">On 26 Aug 2017, at 14:23, Octavian Ciobanu <<a href="mailto:coctavian1979@gmail.com" class="">coctavian1979@gmail.com</a>> wrote:</div><br class="Apple-interchange-newline"><div class=""><div dir="ltr" class=""><div class=""><div class=""><div class=""><div class=""><div class=""><div class=""><div class=""><div class="">Hello all,</div><div class=""><br class=""></div>While playing with the cluster configuration I noticed a strange behavior. If I stop/standby cluster services on one node and reboot it, when it rejoins the cluster all the resources that were started and working on the active nodes get stopped and restarted.<br class=""></div><br class=""></div>My testing configuration is based on 4 nodes. One node is a storage node that makes 3 iSCSI targets available for the other nodes to use (it is not configured to join the cluster), and three nodes that are configured in a cluster using the following commands.<br class=""><br class=""></div>pcs resource create DLM ocf:pacemaker:controld op monitor interval="60" on-fail="fence" clone meta clone-max="3" clone-node-max="1" interleave="true" ordered="true"<br class="">pcs resource create iSCSI1 ocf:heartbeat:iscsi portal="<a href="http://10.0.0.1:3260/" class="">10.0.0.1:3260</a>" target="<a href="http://iqn.2017-08.example.com" class="">iqn.2017-08.example.com</a>:tgt1" op start interval="0" timeout="20" op stop interval="0" timeout="20" op monitor interval="120" timeout="30" clone meta clone-max="3" clone-node-max="1"<br class="">pcs resource create iSCSI2 ocf:heartbeat:iscsi portal="<a href="http://10.0.0.1:3260/" class="">10.0.0.1:3260</a>" target="<a href="http://iqn.2017-08.example.com" class="">iqn.2017-08.example.com</a>:tgt2" op start interval="0" timeout="20" op stop interval="0" timeout="20" op monitor interval="120" timeout="30" clone meta 
clone-max="3" clone-node-max="1"<br class="">pcs resource create iSCSI3 ocf:heartbeat:iscsi portal="<a href="http://10.0.0.1:3260/" class="">10.0.0.1:3260</a>"
target="<a href="http://iqn.2017-08.example.com" class="">iqn.2017-08.example.com</a>:tgt3" op start interval="0" timeout="20"
op stop interval="0" timeout="20" op monitor interval="120"
timeout="30" clone meta clone-max="3" clone-node-max="1"<br class="">pcs resource create Mount1 ocf:heartbeat:Filesystem device="/dev/disk/by-label/MyCluster:Data1" directory="/mnt/data1" fstype="gfs2" options="noatime,nodiratime,rw" op monitor interval="90" on-fail="fence" clone meta clone-max="3" clone-node-max="1" interleave="true"<br class="">pcs resource create Mount2 ocf:heartbeat:Filesystem
device="/dev/disk/by-label/MyCluster:Data2" directory="/mnt/data2"
fstype="gfs2" options="noatime,nodiratime,rw" op monitor interval="90"
on-fail="fence" clone meta clone-max="3" clone-node-max="1"
interleave="true"<br class="">pcs resource create Mount3 ocf:heartbeat:Filesystem
device="/dev/disk/by-label/MyCluster:Data3" directory="/mnt/data3"
fstype="gfs2" options="noatime,nodiratime,rw" op monitor interval="90"
on-fail="fence" clone meta clone-max="3" clone-node-max="1"
interleave="true"<br class="">pcs constraint order DLM-clone then iSCSI1-clone<br class="">pcs constraint order DLM-clone then iSCSI2-clone<br class="">pcs constraint order DLM-clone then iSCSI3-clone<br class="">pcs constraint order iSCSI1-clone then Mount1-clone<br class="">pcs constraint order iSCSI2-clone then Mount2-clone<br class="">pcs constraint order iSCSI3-clone then Mount3-clone<br class=""><br class=""></div>If I issue the command "pcs cluster standby node1" or "pcs cluster stop" on node 1 and then reboot the node, when the node comes back online ("pcs cluster unstandby" if it was put in standby mode) all the "MountX" resources get stopped on nodes 3 and 4 and started again.<br class=""><br class=""></div>Can anyone help me figure out what the mistake in my configuration is? I would like to keep the started resources running on the active nodes (avoiding the stop and start of resources).<br class=""><br class=""></div>Thank you in advance<br class=""></div>Octavian Ciobanu<br class=""></div>
_______________________________________________<br class="">Users mailing list: <a href="mailto:Users@clusterlabs.org" class="">Users@clusterlabs.org</a><br class=""><a href="http://lists.clusterlabs.org/mailman/listinfo/users" class="">http://lists.clusterlabs.org/mailman/listinfo/users</a><br class=""><br class="">Project Home: http://www.clusterlabs.org<br class="">Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf<br class="">Bugs: http://bugs.clusterlabs.org<br class=""></div></blockquote></div><br class=""></body></html>