<div dir="ltr"><div><div><div><div><div><div><div><div>Hello all,</div><div><br></div>While playing with cluster configuration I noticed a strange behavior. If I stop/standby cluster services on one node and reboot it, when it joins the cluster all the resources that were started and working on active nodes get stopped and restarted.<br></div><br></div>My testing configuration is based on 4 nodes. One node is a storage node that makes 3 iSCSI targets available for the other nodes to use,it is not configured to join cluster, and three nodes that are configured in a cluster using the following commands.<br><br></div>pcs resource create DLM ocf:pacemaker:controld op monitor interval="60" on-fail="fence" clone meta clone-max="3" clone-node-max="1" interleave="true" ordered="true"<br>pcs resource create iSCSI1 ocf:heartbeat:iscsi portal="<a href="http://10.0.0.1:3260">10.0.0.1:3260</a>" target="iqn.2017-08.example.com:tgt1" op start interval="0" timeout="20" op stop interval="0" timeout="20" op monitor interval="120" timeout="30" clone meta clone-max="3" clone-node-max="1"<br>pcs resource create iSCSI2 ocf:heartbeat:iscsi portal="<a href="http://10.0.0.1:3260">10.0.0.1:3260</a>" target="iqn.2017-08.example.com:tgt2" op start interval="0" timeout="20" op stop interval="0" timeout="20" op monitor interval="120" timeout="30" clone meta clone-max="3" clone-node-max="1"<br>pcs resource create iSCSI3 ocf:heartbeat:iscsi portal="<a href="http://10.0.0.1:3260">10.0.0.1:3260</a>"
target="iqn.2017-08.example.com:tgt3" op start interval="0" timeout="20"
op stop interval="0" timeout="20" op monitor interval="120"
timeout="30" clone meta clone-max="3" clone-node-max="1"<br>pcs resource create Mount1 ocf:heartbeat:Filesystem device="/dev/disk/by-label/MyCluster:Data1" directory="/mnt/data1" fstype="gfs2" options="noatime,nodiratime,rw" op monitor interval="90" on-fail="fence" clone meta clone-max="3" clone-node-max="1" interleave="true"<br>pcs resource create Mount2 ocf:heartbeat:Filesystem
device="/dev/disk/by-label/MyCluster:Data2" directory="/mnt/data2"
fstype="gfs2" options="noatime,nodiratime,rw" op monitor interval="90"
on-fail="fence" clone meta clone-max="3" clone-node-max="1"
interleave="true"<br>pcs resource create Mount3 ocf:heartbeat:Filesystem
device="/dev/disk/by-label/MyCluster:Data3" directory="/mnt/data3"
fstype="gfs2" options="noatime,nodiratime,rw" op monitor interval="90"
on-fail="fence" clone meta clone-max="3" clone-node-max="1"
interleave="true"<br>pcs constraint order DLM-clone then iSCSI1-clone<br>pcs constraint order DLM-clone then iSCSI2-clone<br>pcs constraint order DLM-clone then iSCSI3-clone<br>pcs constraint order iSCSI1-clone then Mount1-clone<br>pcs constraint order iSCSI2-clone then Mount2-clone<br>pcs constraint order iSCSI3-clone then Mount3-clone<br><br></div>If I issue the command "pcs cluster standby node1" or "pcs cluster stop" on node 1 and after that I reboot the node. When the node gets back online (unstandby if it was put in standby mode) all the "MountX" resources get stopped on node 3 and 4 and started again.<br><br></div>Can anyone help me figure out where and what is the mistake in my configuration as I would like to keep the started resources on active nodes (avoid stop and start of resources)?<br><br></div>Thank you in advance<br></div>Octavian Ciobanu<br></div>