<div dir="ltr"><div><div><div><div>Hello,<br></div><div>I have a CentOS 6.5 based cluster with <br>pacemaker-1.1.10-14.el6_5.3.x86_64<br>cman-3.0.12.1-59.el6_5.2.x86_64<br><br></div><div>and configured pacemaker with cman integration.<br>
</div><div>The nodes are two blades inside an Intel enclosure.<br><br></div>At the moment my configuration has this in cluster.conf<br><br>  <fencedevices><br>    <fencedevice name="pcmk" agent="fence_pcmk"/><br>
  </fencedevices><br><br>and this when I run "pcs cluster edit":<br><br>      <primitive id="Fencing" class="stonith" type="fence_intelmodular"><br>        <instance_attributes id="Fencing-params"><br>
          <nvpair id="Fencing-passwd-script" name="passwd_script" value="/usr/local/bin/fence_pwd.sh"/><br>          <nvpair id="Fencing-login" name="login" value="snmpv3user"/><br>
          <nvpair id="Fencing-ipaddr" name="ipaddr" value="192.168.150.150"/><br>          <nvpair id="Fencing-debug" name="power_wait" value="15"/><br>
          <nvpair id="Fencing-snmp_version" name="snmp_version" value="3"/><br>          <nvpair id="Fencing-snmp_auth_prot" name="snmp_auth_prot" value="SHA"/><br>
          <nvpair id="Fencing-snmp_sec_level" name="snmp_sec_level" value="authNoPriv"/><br>          <nvpair id="Fencing-pcmk_host_list" name="pcmk_host_list" value="srvmgmt01.localdomain.local,srvmgmt02.localdomain.local"/><br>
          <nvpair id="Fencing-pcmk_host_map" name="pcmk_host_map" value="srvmgmt01.localdomain.local:5;srvmgmt02.localdomain.local:6"/><br>        </instance_attributes><br>        <operations><br>
          <op id="Fencing-monitor-10m" interval="10m" name="monitor" timeout="300s"/><br>        </operations><br>      </primitive><br><br><br></div>If I want to set a fencing delay on one of the two nodes, so that it is privileged in case of split brain, where is the right place to put it, and how?<br>
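For example, if fence_intelmodular accepts the generic fence-agents "delay" option ("wait X seconds before fencing is started" -- I have not verified that my version exposes it), would it be enough to add one more nvpair to the existing instance_attributes, like this?<br><br>

```
          <!-- untested: delays fencing actions issued through this device -->
          <nvpair id="Fencing-delay" name="delay" value="15"/>
```

But since this single device fences both nodes, I suppose the delay would then apply to both of them and be useless for breaking the tie.<br>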
</div>Or do I have to decouple the fencing definition into two separate stonith resources?<br><br></div>BTW: fencing in general seems OK, but when I run crm_mon -1 on the two nodes the output looks confusing to me (see below); is this expected?<br><br>
[root@srvmgmt01 ~]# crm_mon -1<br>Last updated: Sat Jun 21 10:24:25 2014<br>Last change: Thu Jun 12 00:09:21 2014 via crmd on srvmgmt01.localdomain.local<br>Stack: cman<br>Current DC: srvmgmt02.localdomain.local - partition with quorum<br>
Version: 1.1.10-14.el6_5.3-368c726<br>2 Nodes configured<br>4 Resources configured<br><br><br>Online: [ srvmgmt01.localdomain.local srvmgmt02.localdomain.local ]<br><br> Master/Slave Set: ms_drbd_kvm-ovirtmgr [p_drbd_kvm-ovirtmgr]<br>
     Masters: [ srvmgmt01.localdomain.local ]<br>     Slaves: [ srvmgmt02.localdomain.local ]<br> p_kvm-ovirtmgr    (ocf::heartbeat:VirtualDomain):    Started srvmgmt01.localdomain.local <br> Fencing    (stonith:fence_intelmodular):    Started srvmgmt02.localdomain.local <br>
<br>[root@srvmgmt02 ~]# crm_mon -1<br>Last updated: Sat Jun 21 10:24:19 2014<br>Last change: Thu Jun 12 00:09:21 2014 via crmd on srvmgmt01.localdomain.local<br>Stack: cman<br>Current DC: srvmgmt02.localdomain.local - partition with quorum<br>
Version: 1.1.10-14.el6_5.3-368c726<br>2 Nodes configured<br>4 Resources configured<br><br><br>Online: [ srvmgmt01.localdomain.local srvmgmt02.localdomain.local ]<br><br> Master/Slave Set: ms_drbd_kvm-ovirtmgr [p_drbd_kvm-ovirtmgr]<br>
     Masters: [ srvmgmt01.localdomain.local ]<br>     Slaves: [ srvmgmt02.localdomain.local ]<br> p_kvm-ovirtmgr    (ocf::heartbeat:VirtualDomain):    Started srvmgmt01.localdomain.local <br> Fencing    (stonith:fence_intelmodular):    Started srvmgmt02.localdomain.local <br>
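P.S. in case decoupling really is the way to go, this is what I would try (completely untested; the ids are just my invention): two stonith primitives, each able to fence a single blade, with the delay only on the device that fences the node I want to survive the split brain:<br><br>

```
      <primitive id="Fencing-srvmgmt01" class="stonith" type="fence_intelmodular">
        <instance_attributes id="Fencing-srvmgmt01-params">
          <nvpair id="Fencing-srvmgmt01-passwd-script" name="passwd_script" value="/usr/local/bin/fence_pwd.sh"/>
          <nvpair id="Fencing-srvmgmt01-login" name="login" value="snmpv3user"/>
          <nvpair id="Fencing-srvmgmt01-ipaddr" name="ipaddr" value="192.168.150.150"/>
          <nvpair id="Fencing-srvmgmt01-power_wait" name="power_wait" value="15"/>
          <nvpair id="Fencing-srvmgmt01-snmp_version" name="snmp_version" value="3"/>
          <nvpair id="Fencing-srvmgmt01-snmp_auth_prot" name="snmp_auth_prot" value="SHA"/>
          <nvpair id="Fencing-srvmgmt01-snmp_sec_level" name="snmp_sec_level" value="authNoPriv"/>
          <!-- this device only knows how to fence srvmgmt01 (blade slot 5) -->
          <nvpair id="Fencing-srvmgmt01-pcmk_host_list" name="pcmk_host_list" value="srvmgmt01.localdomain.local"/>
          <nvpair id="Fencing-srvmgmt01-pcmk_host_map" name="pcmk_host_map" value="srvmgmt01.localdomain.local:5"/>
          <!-- untested: delay fencing of srvmgmt01 so it wins the shoot-out -->
          <nvpair id="Fencing-srvmgmt01-delay" name="delay" value="15"/>
        </instance_attributes>
        <operations>
          <op id="Fencing-srvmgmt01-monitor-10m" interval="10m" name="monitor" timeout="300s"/>
        </operations>
      </primitive>
      <primitive id="Fencing-srvmgmt02" class="stonith" type="fence_intelmodular">
        <instance_attributes id="Fencing-srvmgmt02-params">
          <nvpair id="Fencing-srvmgmt02-passwd-script" name="passwd_script" value="/usr/local/bin/fence_pwd.sh"/>
          <nvpair id="Fencing-srvmgmt02-login" name="login" value="snmpv3user"/>
          <nvpair id="Fencing-srvmgmt02-ipaddr" name="ipaddr" value="192.168.150.150"/>
          <nvpair id="Fencing-srvmgmt02-power_wait" name="power_wait" value="15"/>
          <nvpair id="Fencing-srvmgmt02-snmp_version" name="snmp_version" value="3"/>
          <nvpair id="Fencing-srvmgmt02-snmp_auth_prot" name="snmp_auth_prot" value="SHA"/>
          <nvpair id="Fencing-srvmgmt02-snmp_sec_level" name="snmp_sec_level" value="authNoPriv"/>
          <!-- this device only knows how to fence srvmgmt02 (blade slot 6); no delay -->
          <nvpair id="Fencing-srvmgmt02-pcmk_host_list" name="pcmk_host_list" value="srvmgmt02.localdomain.local"/>
          <nvpair id="Fencing-srvmgmt02-pcmk_host_map" name="pcmk_host_map" value="srvmgmt02.localdomain.local:6"/>
        </instance_attributes>
        <operations>
          <op id="Fencing-srvmgmt02-monitor-10m" interval="10m" name="monitor" timeout="300s"/>
        </operations>
      </primitive>
```

Plus, I suppose, two location constraints so that each stonith resource preferably runs on the node it does not fence.<br>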
<br><div><div>Thanks in advance,<br>Gianluca<br></div></div></div>