[ClusterLabs] Cold start of one node only

Ken Gaillot kgaillot at redhat.com
Tue Sep 13 17:14:19 EDT 2016


On 09/13/2016 03:27 PM, Gienek Nowacki wrote:
> Hi,
> 
> I'm still testing (before putting it into production) a solution with
> pacemaker+corosync+drbd+dlm+gfs2 on CentOS 7 with a dual-primary config.
> 
> I have two nodes, wirt1v and wirt2v - each node contains an LVM partition
> with DRBD (/dev/drbd2) and a filesystem mounted as /virtfs2. The /virtfs2
> filesystem holds the virtual machine images.
> 
> My problem is this: I can't start the cluster and the resources on one
> node only (cold start) when the second node is completely powered off.

"two_node: 1" implies "wait_for_all: 1" in corosync.conf; see the
votequorum(5) man page for details.
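
For reference, this is the quorum section from the corosync.conf you
posted; with two_node set, votequorum behaves as if the commented line
below were present (a sketch of the effective settings, not something
you need to add):

    quorum {
        provider: corosync_votequorum
        two_node: 1
        # implied by two_node, per votequorum(5):
        # wait_for_all: 1
    }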

This is a safeguard against the situation where the other node is up,
but not reachable from the newly starting node.

You can get around this by setting "wait_for_all: 0" and relying on
pacemaker's fencing to resolve that situation. But if so, be careful
about starting pacemaker when the nodes can't see each other, because
each will try to fence the other.
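
A minimal sketch of that change, assuming you keep the rest of your
quorum section as posted:

    quorum {
        provider: corosync_votequorum
        two_node: 1
        wait_for_all: 0
    }

The value is read when corosync starts, so restarting corosync on each
node is the safest way to apply it.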

Example: wirt1v's main LAN network port gets fried in an electrical
surge, but its iDRAC network port is still operational. wirt2v may
successfully fence wirt1v and take over all resources, but if wirt1v is
then rebooted and starts pacemaker without wait_for_all, it will fence wirt2v.
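
If you want to confirm which safeguards are active on a running node,
corosync-quorumtool shows them; on corosync 2.x the votequorum summary
includes a flags line roughly like the following (exact output varies
by version):

    # corosync-quorumtool -s
    ...
    Flags:            2Node Quorate WaitForAll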

> Is this at all possible in such a configuration - is it possible to start
> one node only?
> 
> Could you help me, please?
> 
> The configs and log (captured during the cold start) are attached.
> 
> Thanks in advance,
> Gienek Nowacki
> 
> ==============================================================
> 
> #---------------------------------
> ### result:  cat /etc/redhat-release  ###
> 
> CentOS Linux release 7.2.1511 (Core)
> 
> #---------------------------------
> ### result:  uname -a  ###
> 
> Linux wirt1v.example.com 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18
> 19:05:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
> 
> #---------------------------------
> ### result:  cat /etc/hosts  ###
> 
> 127.0.0.1   localhost localhost.localdomain localhost4
> localhost4.localdomain4
> 172.31.0.23     wirt1.example.com wirt1
> 172.31.0.24     wirt2.example.com wirt2
> 1.1.1.1         wirt1v.example.com wirt1v
> 1.1.1.2         wirt2v.example.com wirt2v
> 
> #---------------------------------
> ### result:  cat /etc/drbd.conf  ###
> 
> include "drbd.d/global_common.conf";
> include "drbd.d/*.res";
> 
> #---------------------------------
> ### result:  cat /etc/drbd.d/global_common.conf  ###
> 
> common {
>         protocol C;
>         syncer {
>                 verify-alg sha1;
>         }
>         startup {
>                 become-primary-on both;
>                 wfc-timeout 30;
>                 outdated-wfc-timeout 20;
>                 degr-wfc-timeout 30;
>         }
>         disk {
>                 fencing resource-and-stonith;
>         }
>         handlers {
>                 fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
>                 after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
>                 split-brain        "/usr/lib/drbd/notify-split-brain.sh linuxadmin at example.com";
>                 pri-lost-after-sb  "/usr/lib/drbd/notify-split-brain.sh linuxadmin at example.com";
>                 out-of-sync        "/usr/lib/drbd/notify-out-of-sync.sh linuxadmin at example.com";
>                 local-io-error     "/usr/lib/drbd/notify-io-error.sh    linuxadmin at example.com";
>         }
>         net {
>                 allow-two-primaries;
>                 after-sb-0pri discard-zero-changes;
>                 after-sb-1pri discard-secondary;
>                 after-sb-2pri disconnect;
>         }
> }
> 
> #---------------------------------
> ### result:  cat /etc/drbd.d/drbd2.res  ###
> 
> resource drbd2 {
>         meta-disk internal;
>         device /dev/drbd2;
>         on wirt1v.example.com {
>                 disk /dev/vg02/drbd2;
>                 address 1.1.1.1:7782;
>         }
>         on wirt2v.example.com {
>                 disk /dev/vg02/drbd2;
>                 address 1.1.1.2:7782;
>         }
> }
> 
> #---------------------------------
> ### result:  cat /etc/corosync/corosync.conf  ###
> 
> totem {
>     version: 2
>     secauth: off
>     cluster_name: klasterek
>     transport: udpu
> }
> nodelist {
>     node {
>         ring0_addr: wirt1v
>         nodeid: 1
>     }
>     node {
>         ring0_addr: wirt2v
>         nodeid: 2
>     }
> }
> quorum {
>     provider: corosync_votequorum
>     two_node: 1
> }
> logging {
>     to_logfile: yes
>     logfile: /var/log/cluster/corosync.log
>     to_syslog: yes
> }
> 
> #---------------------------------
> ### result:  mount | grep virtfs2  ###
> 
> /dev/drbd2 on /virtfs2 type gfs2 (rw,relatime,seclabel)
> 
> #---------------------------------
> ### result:  pcs status  ###
> 
> Cluster name: klasterek
> Last updated: Tue Sep 13 20:01:40 2016          Last change: Tue Sep 13
> 18:31:33 2016 by root via crm_resource on wirt1v
> Stack: corosync
> Current DC: wirt1v (version 1.1.13-10.el7_2.4-44eb2dd) - partition with
> quorum
> 2 nodes and 8 resources configured
> Online: [ wirt1v wirt2v ]
> Full list of resources:
>  Master/Slave Set: Drbd2-clone [Drbd2]
>      Masters: [ wirt1v wirt2v ]
>  Clone Set: Virtfs2-clone [Virtfs2]
>      Started: [ wirt1v wirt2v ]
>  Clone Set: dlm-clone [dlm]
>      Started: [ wirt1v wirt2v ]
>  fencing-idrac1 (stonith:fence_idrac):  Started wirt1v
>  fencing-idrac2 (stonith:fence_idrac):  Started wirt2v
> PCSD Status:
>   wirt1v: Online
>   wirt2v: Online
> Daemon Status:
>   corosync: active/disabled
>   pacemaker: active/disabled
>   pcsd: active/enabled
> 
> #---------------------------------
> ### result:  pcs property  ###
> 
> Cluster Properties:
>  cluster-infrastructure: corosync
>  cluster-name: klasterek
>  dc-version: 1.1.13-10.el7_2.4-44eb2dd
>  have-watchdog: false
>  no-quorum-policy: ignore
>  stonith-enabled: true
>  symmetric-cluster: true
> 
> #---------------------------------
> ### result:  pcs cluster cib  ###
> 
> <cib crm_feature_set="3.0.10" validate-with="pacemaker-2.3" epoch="69"
> num_updates="38" admin_epoch="0" cib-last-written="Tue Sep 13 18:31:33
> 2016" update-origin="wirt1v" update-client="crm_resource"
> update-user="root" have-quorum="1" dc-uuid="1">
>   <configuration>
>     <crm_config>
>       <cluster_property_set id="cib-bootstrap-options">
>         <nvpair id="cib-bootstrap-options-have-watchdog"
> name="have-watchdog" value="false"/>
>         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version"
> value="1.1.13-10.el7_2.4-44eb2dd"/>
>         <nvpair id="cib-bootstrap-options-cluster-infrastructure"
> name="cluster-infrastructure" value="corosync"/>
>         <nvpair id="cib-bootstrap-options-cluster-name"
> name="cluster-name" value="klasterek"/>
>         <nvpair id="cib-bootstrap-options-no-quorum-policy"
> name="no-quorum-policy" value="ignore"/>
>         <nvpair id="cib-bootstrap-options-symmetric-cluster"
> name="symmetric-cluster" value="true"/>
>         <nvpair id="cib-bootstrap-options-stonith-enabled"
> name="stonith-enabled" value="true"/>
>       </cluster_property_set>
>     </crm_config>
>     <nodes>
>       <node id="1" uname="wirt1v"/>
>       <node id="2" uname="wirt2v"/>
>     </nodes>
>     <resources>
>       <master id="Drbd2-clone">
>         <primitive class="ocf" id="Drbd2" provider="linbit" type="drbd">
>           <instance_attributes id="Drbd2-instance_attributes">
>             <nvpair id="Drbd2-instance_attributes-drbd_resource"
> name="drbd_resource" value="drbd2"/>
>           </instance_attributes>
>           <operations>
>             <op id="Drbd2-start-interval-0s" interval="0s" name="start"
> timeout="240"/>
>             <op id="Drbd2-promote-interval-0s" interval="0s"
> name="promote" timeout="90"/>
>             <op id="Drbd2-demote-interval-0s" interval="0s"
> name="demote" timeout="90"/>
>             <op id="Drbd2-stop-interval-0s" interval="0s" name="stop"
> timeout="100"/>
>             <op id="Drbd2-monitor-interval-60s" interval="60s"
> name="monitor"/>
>           </operations>
>         </primitive>
>         <meta_attributes id="Drbd2-clone-meta_attributes">
>           <nvpair id="Drbd2-clone-meta_attributes-master-max"
> name="master-max" value="2"/>
>           <nvpair id="Drbd2-clone-meta_attributes-master-node-max"
> name="master-node-max" value="1"/>
>           <nvpair id="Drbd2-clone-meta_attributes-clone-max"
> name="clone-max" value="2"/>
>           <nvpair id="Drbd2-clone-meta_attributes-clone-node-max"
> name="clone-node-max" value="1"/>
>           <nvpair id="Drbd2-clone-meta_attributes-notify" name="notify"
> value="true"/>
>           <nvpair id="Drbd2-clone-meta_attributes-globally-unique"
> name="globally-unique" value="false"/>
>           <nvpair id="Drbd2-clone-meta_attributes-interleave"
> name="interleave" value="true"/>
>           <nvpair id="Drbd2-clone-meta_attributes-ordered"
> name="ordered" value="true"/>
>         </meta_attributes>
>       </master>
> 
>       <clone id="Virtfs2-clone">
>         <primitive class="ocf" id="Virtfs2" provider="heartbeat"
> type="Filesystem">
>           <instance_attributes id="Virtfs2-instance_attributes">
>             <nvpair id="Virtfs2-instance_attributes-device"
> name="device" value="/dev/drbd2"/>
>             <nvpair id="Virtfs2-instance_attributes-directory"
> name="directory" value="/virtfs2"/>
>             <nvpair id="Virtfs2-instance_attributes-fstype"
> name="fstype" value="gfs2"/>
>           </instance_attributes>
>           <operations>
>             <op id="Virtfs2-start-interval-0s" interval="0s"
> name="start" timeout="60"/>
>             <op id="Virtfs2-stop-interval-0s" interval="0s" name="stop"
> timeout="60"/>
>             <op id="Virtfs2-monitor-interval-20" interval="20"
> name="monitor" timeout="40"/>
>           </operations>
>         </primitive>
>         <meta_attributes id="Virtfs2-clone-meta_attributes">
>           <nvpair id="Virtfs2-interleave" name="interleave" value="true"/>
>         </meta_attributes>
>       </clone>
>       <clone id="dlm-clone">
>         <primitive class="ocf" id="dlm" provider="pacemaker"
> type="controld">
>           <instance_attributes id="dlm-instance_attributes"/>
>           <operations>
>             <op id="dlm-start-interval-0s" interval="0s" name="start"
> timeout="90"/>
>             <op id="dlm-stop-interval-0s" interval="0s" name="stop"
> timeout="100"/>
>             <op id="dlm-monitor-interval-60s" interval="60s"
> name="monitor"/>
>           </operations>
>         </primitive>
>         <meta_attributes id="dlm-clone-meta_attributes">
>           <nvpair id="dlm-clone-max" name="clone-max" value="2"/>
>           <nvpair id="dlm-clone-node-max" name="clone-node-max" value="1"/>
>           <nvpair id="dlm-interleave" name="interleave" value="true"/>
>           <nvpair id="dlm-ordered" name="ordered" value="true"/>
>         </meta_attributes>
>       </clone>
>       <primitive class="stonith" id="fencing-idrac1" type="fence_idrac">
>         <instance_attributes id="fencing-idrac1-instance_attributes">
>           <nvpair id="fencing-idrac1-instance_attributes-pcmk_host_list"
> name="pcmk_host_list" value="wirt1v"/>
>           <nvpair id="fencing-idrac1-instance_attributes-ipaddr"
> name="ipaddr" value="172.31.0.223"/>
>           <nvpair id="fencing-idrac1-instance_attributes-lanplus"
> name="lanplus" value="on"/>
>           <nvpair id="fencing-idrac1-instance_attributes-login"
> name="login" value="root"/>
>           <nvpair id="fencing-idrac1-instance_attributes-passwd"
> name="passwd" value="my1secret2password3"/>
>           <nvpair id="fencing-idrac1-instance_attributes-action"
> name="action" value="reboot"/>
>         </instance_attributes>
>         <operations>
>           <op id="fencing-idrac1-monitor-interval-60" interval="60"
> name="monitor"/>
>         </operations>
>       </primitive>
>       <primitive class="stonith" id="fencing-idrac2" type="fence_idrac">
>         <instance_attributes id="fencing-idrac2-instance_attributes">
>           <nvpair id="fencing-idrac2-instance_attributes-pcmk_host_list"
> name="pcmk_host_list" value="wirt2v"/>
>           <nvpair id="fencing-idrac2-instance_attributes-ipaddr"
> name="ipaddr" value="172.31.0.224"/>
>           <nvpair id="fencing-idrac2-instance_attributes-lanplus"
> name="lanplus" value="on"/>
>           <nvpair id="fencing-idrac2-instance_attributes-login"
> name="login" value="root"/>
>           <nvpair id="fencing-idrac2-instance_attributes-passwd"
> name="passwd" value="my1secret2password3"/>
>           <nvpair id="fencing-idrac2-instance_attributes-action"
> name="action" value="reboot"/>
>         </instance_attributes>
>         <operations>
>           <op id="fencing-idrac2-monitor-interval-60" interval="60"
> name="monitor"/>
>         </operations>
>       </primitive>
>     </resources>
>     <constraints>
>       <rsc_colocation id="colocation-Virtfs2-clone-Drbd2-clone-INFINITY"
> rsc="Virtfs2-clone" score="INFINITY" with-rsc="Drbd2-clone"
> with-rsc-role="Master"/>
>       <rsc_order first="Drbd2-clone" first-action="promote"
> id="order-Drbd2-clone-Virtfs2-clone-mandatory" then="Virtfs2-clone"
> then-action="start"/>
>       <rsc_order first="dlm-clone" first-action="start"
> id="order-dlm-clone-Virtfs2-clone-mandatory" then="Virtfs2-clone"
> then-action="start"/>
>       <rsc_colocation id="colocation-Virtfs2-clone-dlm-clone-INFINITY"
> rsc="Virtfs2-clone" score="INFINITY" with-rsc="dlm-clone"/>
>     </constraints>
>     <rsc_defaults>
>       <meta_attributes id="rsc_defaults-options">
>         <nvpair id="rsc_defaults-options-resource-stickiness"
> name="resource-stickiness" value="100"/>
>       </meta_attributes>
>     </rsc_defaults>
>   </configuration>
>   <status>
>     <node_state id="1" uname="wirt1v" in_ccm="true" crmd="online"
> crm-debug-origin="do_update_resource" join="member" expected="member">
>       <lrm id="1">
>         <lrm_resources>
>           <lrm_resource id="fencing-idrac1" type="fence_idrac"
> class="stonith">
>             <lrm_rsc_op id="fencing-idrac1_last_0"
> operation_key="fencing-idrac1_start_0" operation="start"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="55:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;55:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt1v" call-id="27" rc-code="0" op-status="0" interval="0"
> last-run="1473786030" last-rc-change="1473786030" exec-time="54"
> queue-time="0" op-digest="c5f495355c70285327a4ecd128166155"
> op-secure-params=" passwd "
> op-secure-digest="58f15e2aeb9ef41c7d7016ac60c95b3d"/>
>             <lrm_rsc_op id="fencing-idrac1_monitor_60000"
> operation_key="fencing-idrac1_monitor_60000" operation="monitor"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="51:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;51:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt1v" call-id="29" rc-code="0" op-status="0" interval="60000"
> last-rc-change="1473786031" exec-time="54" queue-time="0"
> op-digest="2c3a04590a892a02a6109a0e8bd4b89a" op-secure-params=" passwd "
> op-secure-digest="58f15e2aeb9ef41c7d7016ac60c95b3d"/>
>           </lrm_resource>
>           <lrm_resource id="fencing-idrac2" type="fence_idrac"
> class="stonith">
>             <lrm_rsc_op id="fencing-idrac2_last_0"
> operation_key="fencing-idrac2_monitor_0" operation="monitor"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="8:0:7:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:7;8:0:7:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt1v" call-id="24" rc-code="7" op-status="0" interval="0"
> last-run="1473786029" last-rc-change="1473786029" exec-time="0"
> queue-time="0" op-digest="62957a33f7a67eda09c15e3f933f2d0b"
> op-secure-params=" passwd "
> op-secure-digest="65925748cee98be7e9d827ae5f2eb74f"/>
>           </lrm_resource>
>           <lrm_resource id="Drbd2" type="drbd" class="ocf"
> provider="linbit">
>             <lrm_rsc_op id="Drbd2_last_0"
> operation_key="Drbd2_promote_0" operation="promote"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="10:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;10:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt1v" call-id="33" rc-code="0" op-status="0" interval="0"
> last-run="1473786032" last-rc-change="1473786032" exec-time="64"
> queue-time="1" op-digest="d0c8a735862843030d8426a5218ceb92"/>
>           </lrm_resource>
>           <lrm_resource id="Virtfs2" type="Filesystem" class="ocf"
> provider="heartbeat">
>             <lrm_rsc_op id="Virtfs2_last_0"
> operation_key="Virtfs2_start_0" operation="start"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="41:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;41:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt1v" call-id="35" rc-code="0" op-status="0" interval="0"
> last-run="1473786032" last-rc-change="1473786032" exec-time="1372"
> queue-time="0" op-digest="8dbd904c2115508ebcf3dffe8e7c6d82"/>
>             <lrm_rsc_op id="Virtfs2_monitor_20000"
> operation_key="Virtfs2_monitor_20000" operation="monitor"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="42:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;42:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt1v" call-id="36" rc-code="0" op-status="0" interval="20000"
> last-rc-change="1473786034" exec-time="64" queue-time="0"
> op-digest="051271837d1a8eccc0af38fbd8c406e4"/>
>           </lrm_resource>
>           <lrm_resource id="dlm" type="controld" class="ocf"
> provider="pacemaker">
>             <lrm_rsc_op id="dlm_last_0" operation_key="dlm_start_0"
> operation="start" crm-debug-origin="do_update_resource"
> crm_feature_set="3.0.10"
> transition-key="47:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;47:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt1v" call-id="26" rc-code="0" op-status="0" interval="0"
> last-run="1473786030" last-rc-change="1473786030" exec-time="1098"
> queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
>             <lrm_rsc_op id="dlm_monitor_60000"
> operation_key="dlm_monitor_60000" operation="monitor"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="42:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;42:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt1v" call-id="28" rc-code="0" op-status="0" interval="60000"
> last-rc-change="1473786031" exec-time="34" queue-time="0"
> op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
>           </lrm_resource>
>         </lrm_resources>
>       </lrm>
>       <transient_attributes id="1">
>         <instance_attributes id="status-1">
>           <nvpair id="status-1-shutdown" name="shutdown" value="0"/>
>           <nvpair id="status-1-probe_complete" name="probe_complete"
> value="true"/>
>           <nvpair id="status-1-master-Drbd2" name="master-Drbd2"
> value="10000"/>
>         </instance_attributes>
>       </transient_attributes>
>     </node_state>
>     <node_state id="2" uname="wirt2v" in_ccm="true" crmd="online"
> crm-debug-origin="do_update_resource" join="member" expected="member">
>       <lrm id="2">
>         <lrm_resources>
>           <lrm_resource id="fencing-idrac1" type="fence_idrac"
> class="stonith">
>             <lrm_rsc_op id="fencing-idrac1_last_0"
> operation_key="fencing-idrac1_monitor_0" operation="monitor"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="13:0:7:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:7;13:0:7:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt2v" call-id="20" rc-code="7" op-status="0" interval="0"
> last-run="1473786029" last-rc-change="1473786029" exec-time="3"
> queue-time="0" op-digest="c5f495355c70285327a4ecd128166155"
> op-secure-params=" passwd "
> op-secure-digest="58f15e2aeb9ef41c7d7016ac60c95b3d"/>
>           </lrm_resource>
>           <lrm_resource id="fencing-idrac2" type="fence_idrac"
> class="stonith">
>             <lrm_rsc_op id="fencing-idrac2_last_0"
> operation_key="fencing-idrac2_start_0" operation="start"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="57:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;57:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt2v" call-id="25" rc-code="0" op-status="0" interval="0"
> last-run="1473786030" last-rc-change="1473786030" exec-time="62"
> queue-time="0" op-digest="62957a33f7a67eda09c15e3f933f2d0b"
> op-secure-params=" passwd "
> op-secure-digest="65925748cee98be7e9d827ae5f2eb74f"/>
>             <lrm_rsc_op id="fencing-idrac2_monitor_60000"
> operation_key="fencing-idrac2_monitor_60000" operation="monitor"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="54:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;54:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt2v" call-id="26" rc-code="0" op-status="0" interval="60000"
> last-rc-change="1473786031" exec-time="74" queue-time="0"
> op-digest="02c5ce42002631d918b41adc571d64b8" op-secure-params=" passwd "
> op-secure-digest="65925748cee98be7e9d827ae5f2eb74f"/>
>           </lrm_resource>
>           <lrm_resource id="dlm" type="controld" class="ocf"
> provider="pacemaker">
>             <lrm_rsc_op id="dlm_last_0" operation_key="dlm_start_0"
> operation="start" crm-debug-origin="do_update_resource"
> crm_feature_set="3.0.10"
> transition-key="43:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;43:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt2v" call-id="27" rc-code="0" op-status="0" interval="0"
> last-run="1473786031" last-rc-change="1473786031" exec-time="1102"
> queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
>             <lrm_rsc_op id="dlm_monitor_60000"
> operation_key="dlm_monitor_60000" operation="monitor"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="50:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;50:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt2v" call-id="30" rc-code="0" op-status="0" interval="60000"
> last-rc-change="1473786032" exec-time="32" queue-time="0"
> op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
>           </lrm_resource>
>           <lrm_resource id="Drbd2" type="drbd" class="ocf"
> provider="linbit">
>             <lrm_rsc_op id="Drbd2_last_0"
> operation_key="Drbd2_promote_0" operation="promote"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="13:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;13:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt2v" call-id="32" rc-code="0" op-status="0" interval="0"
> last-run="1473786032" last-rc-change="1473786032" exec-time="55"
> queue-time="0" op-digest="d0c8a735862843030d8426a5218ceb92"/>
>           </lrm_resource>
>           <lrm_resource id="Virtfs2" type="Filesystem" class="ocf"
> provider="heartbeat">
>             <lrm_rsc_op id="Virtfs2_last_0"
> operation_key="Virtfs2_start_0" operation="start"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="43:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;43:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt2v" call-id="34" rc-code="0" op-status="0" interval="0"
> last-run="1473786032" last-rc-change="1473786032" exec-time="939"
> queue-time="0" op-digest="8dbd904c2115508ebcf3dffe8e7c6d82"/>
>             <lrm_rsc_op id="Virtfs2_monitor_20000"
> operation_key="Virtfs2_monitor_20000" operation="monitor"
> crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
> transition-key="44:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> transition-magic="0:0;44:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df"
> on_node="wirt2v" call-id="35" rc-code="0" op-status="0" interval="20000"
> last-rc-change="1473786033" exec-time="39" queue-time="0"
> op-digest="051271837d1a8eccc0af38fbd8c406e4"/>
>           </lrm_resource>
>         </lrm_resources>
>       </lrm>
>       <transient_attributes id="2">
>         <instance_attributes id="status-2">
>           <nvpair id="status-2-shutdown" name="shutdown" value="0"/>
>           <nvpair id="status-2-probe_complete" name="probe_complete"
> value="true"/>
>           <nvpair id="status-2-master-Drbd2" name="master-Drbd2"
> value="10000"/>
>         </instance_attributes>
>       </transient_attributes>
>     </node_state>
>   </status>
> </cib>
> 
> #-------- The End --------------------
> 
> ### result:  pcs config  ###
> 
> Cluster Name: klasterek
> Corosync Nodes:
>  wirt1v wirt2v
> Pacemaker Nodes:
>  wirt1v wirt2v
> Resources:
>  Master: Drbd2-clone
>   Meta Attrs: master-max=2 master-node-max=1 clone-max=2
> clone-node-max=1 notify=true globally-unique=false interleave=true
> ordered=true
>   Resource: Drbd2 (class=ocf provider=linbit type=drbd)
>    Attributes: drbd_resource=drbd2
>    Operations: start interval=0s timeout=240 (Drbd2-start-interval-0s)
>                promote interval=0s timeout=90 (Drbd2-promote-interval-0s)
>                demote interval=0s timeout=90 (Drbd2-demote-interval-0s)
>                stop interval=0s timeout=100 (Drbd2-stop-interval-0s)
>                monitor interval=60s (Drbd2-monitor-interval-60s)
>  Clone: Virtfs2-clone
>   Meta Attrs: interleave=true
>   Resource: Virtfs2 (class=ocf provider=heartbeat type=Filesystem)
>    Attributes: device=/dev/drbd2 directory=/virtfs2 fstype=gfs2
>    Operations: start interval=0s timeout=60 (Virtfs2-start-interval-0s)
>                stop interval=0s timeout=60 (Virtfs2-stop-interval-0s)
>                monitor interval=20 timeout=40 (Virtfs2-monitor-interval-20)
>  Clone: dlm-clone
>   Meta Attrs: clone-max=2 clone-node-max=1 interleave=true ordered=true
>   Resource: dlm (class=ocf provider=pacemaker type=controld)
>    Operations: start interval=0s timeout=90 (dlm-start-interval-0s)
>                stop interval=0s timeout=100 (dlm-stop-interval-0s)
>                monitor interval=60s (dlm-monitor-interval-60s)
> Stonith Devices:
>  Resource: fencing-idrac1 (class=stonith type=fence_idrac)
>   Attributes: pcmk_host_list=wirt1v ipaddr=172.31.0.223 lanplus=on
> login=root passwd=my1secret2password3 action=reboot
>   Operations: monitor interval=60 (fencing-idrac1-monitor-interval-60)
>  Resource: fencing-idrac2 (class=stonith type=fence_idrac)
>   Attributes: pcmk_host_list=wirt2v ipaddr=172.31.0.224 lanplus=on
> login=root passwd=my1secret2password3 action=reboot
>   Operations: monitor interval=60 (fencing-idrac2-monitor-interval-60)
> Fencing Levels:
> Location Constraints:
> Ordering Constraints:
>   promote Drbd2-clone then start Virtfs2-clone (kind:Mandatory)
> (id:order-Drbd2-clone-Virtfs2-clone-mandatory)
>   start dlm-clone then start Virtfs2-clone (kind:Mandatory)
> (id:order-dlm-clone-Virtfs2-clone-mandatory)
> Colocation Constraints:
>   Virtfs2-clone with Drbd2-clone (score:INFINITY) (with-rsc-role:Master)
> (id:colocation-Virtfs2-clone-Drbd2-clone-INFINITY)
>   Virtfs2-clone with dlm-clone (score:INFINITY)
> (id:colocation-Virtfs2-clone-dlm-clone-INFINITY)
> Resources Defaults:
>  resource-stickiness: 100
> Operations Defaults:
>  No defaults set
> Cluster Properties:
>  cluster-infrastructure: corosync
>  cluster-name: klasterek
>  dc-version: 1.1.13-10.el7_2.4-44eb2dd
>  have-watchdog: false
>  no-quorum-policy: ignore
>  stonith-enabled: true
>  symmetric-cluster: true
> 
> 
> #---------------------------------
> # /var/log/messages
> 
> Sep 13 22:00:19 wirt1v systemd: Starting Corosync Cluster Engine...
> Sep 13 22:00:19 wirt1v corosync[5720]: [MAIN  ] Corosync Cluster Engine
> ('2.3.4'): started and ready to provide service.
> Sep 13 22:00:19 wirt1v corosync[5720]: [MAIN  ] Corosync built-in
> features: dbus systemd xmlconf snmp pie relro bindnow
> Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] Initializing transport
> (UDP/IP Unicast).
> Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] Initializing
> transmit/receive security (NSS) crypto: none hash: none
> Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] The network interface
> [1.1.1.1] is now up.
> Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded:
> corosync configuration map access [0]
> Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: cmap
> Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded:
> corosync configuration service [1]
> Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: cfg
> Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded:
> corosync cluster closed process group service v1.01 [2]
> Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: cpg
> Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded:
> corosync profile loading service [4]
> Sep 13 22:00:19 wirt1v corosync[5721]: [QUORUM] Using quorum provider
> corosync_votequorum
> Sep 13 22:00:19 wirt1v corosync[5721]: [VOTEQ ] Waiting for all cluster
> members. Current votes: 1 expected_votes: 2
> Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded:
> corosync vote quorum service v1.0 [5]
> Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: votequorum
> Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded:
> corosync cluster quorum service v0.1 [3]
> Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: quorum
> Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] adding new UDPU member
> {1.1.1.1}
> Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] adding new UDPU member
> {1.1.1.2}
> Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] A new membership
> (1.1.1.1:708) was formed. Members joined: 1
> Sep 13 22:00:19 wirt1v corosync[5721]: [VOTEQ ] Waiting for all cluster
> members. Current votes: 1 expected_votes: 2
> Sep 13 22:00:19 wirt1v corosync[5721]: [VOTEQ ] Waiting for all cluster
> members. Current votes: 1 expected_votes: 2
> Sep 13 22:00:19 wirt1v corosync[5721]: [VOTEQ ] Waiting for all cluster
> members. Current votes: 1 expected_votes: 2
> Sep 13 22:00:19 wirt1v corosync[5721]: [QUORUM] Members[1]: 1
> Sep 13 22:00:19 wirt1v corosync[5721]: [MAIN  ] Completed service
> synchronization, ready to provide service.
> Sep 13 22:00:20 wirt1v corosync: Starting Corosync Cluster Engine
> (corosync): [  OK  ]
> Sep 13 22:00:20 wirt1v systemd: Started Corosync Cluster Engine.
> Sep 13 22:00:20 wirt1v systemd: Started Pacemaker High Availability
> Cluster Manager.
> Sep 13 22:00:20 wirt1v systemd: Starting Pacemaker High Availability
> Cluster Manager...
> Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Additional logging
> available in /var/log/pacemaker.log
> Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Switching to
> /var/log/cluster/corosync.log
> Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Additional logging
> available in /var/log/cluster/corosync.log
> Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Configured corosync to
> accept connections from group 189: OK (1)
> Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Starting Pacemaker
> 1.1.13-10.el7_2.4 (Build: 44eb2dd):  generated-manpages agent-manpages
> ncurses libqb-logging libqb-ipc upstart systemd nagios  corosync-native
> atomic-attrd acls
> Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Tracking existing lrmd
> process (pid=3413)
> Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Tracking existing
> pengine process (pid=3415)
> Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Quorum lost
> Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice:
> pcmk_quorum_notification: Node wirt1v[1] - state is now member (was (null))
> Sep 13 22:00:20 wirt1v stonith-ng[5742]:  notice: Additional logging
> available in /var/log/cluster/corosync.log
> Sep 13 22:00:20 wirt1v cib[5741]:  notice: Additional logging available
> in /var/log/cluster/corosync.log
> Sep 13 22:00:20 wirt1v stonith-ng[5742]:  notice: Connecting to cluster
> infrastructure: corosync
> Sep 13 22:00:20 wirt1v attrd[5743]:  notice: Additional logging
> available in /var/log/cluster/corosync.log
> Sep 13 22:00:20 wirt1v attrd[5743]:  notice: Connecting to cluster
> infrastructure: corosync
> Sep 13 22:00:20 wirt1v crmd[5744]:  notice: Additional logging available
> in /var/log/cluster/corosync.log
> Sep 13 22:00:20 wirt1v crmd[5744]:  notice: CRM Git Version:
> 1.1.13-10.el7_2.4 (44eb2dd)
> Sep 13 22:00:20 wirt1v cib[5741]:  notice: Connecting to cluster
> infrastructure: corosync
> Sep 13 22:00:20 wirt1v attrd[5743]:  notice: crm_update_peer_proc: Node
> wirt1v[1] - state is now member (was (null))
> Sep 13 22:00:20 wirt1v stonith-ng[5742]:  notice: crm_update_peer_proc:
> Node wirt1v[1] - state is now member (was (null))
> Sep 13 22:00:20 wirt1v cib[5741]:  notice: crm_update_peer_proc: Node
> wirt1v[1] - state is now member (was (null))
> Sep 13 22:00:21 wirt1v crmd[5744]:  notice: Connecting to cluster
> infrastructure: corosync
> Sep 13 22:00:21 wirt1v crmd[5744]:  notice: Quorum lost
> Sep 13 22:00:21 wirt1v stonith-ng[5742]:  notice: Watching for stonith
> topology changes
> Sep 13 22:00:21 wirt1v stonith-ng[5742]:  notice: On loss of CCM Quorum:
> Ignore
> Sep 13 22:00:21 wirt1v crmd[5744]:  notice: pcmk_quorum_notification:
> Node wirt1v[1] - state is now member (was (null))
> Sep 13 22:00:21 wirt1v crmd[5744]:  notice: Notifications disabled
> Sep 13 22:00:21 wirt1v crmd[5744]:  notice: The local CRM is operational
> Sep 13 22:00:21 wirt1v crmd[5744]:  notice: State transition S_STARTING
> -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
> Sep 13 22:00:22 wirt1v stonith-ng[5742]:  notice: Added 'fencing-idrac1'
> to the device list (1 active devices)
> Sep 13 22:00:22 wirt1v stonith-ng[5742]:  notice: Added 'fencing-idrac2'
> to the device list (2 active devices)
> Sep 13 22:00:42 wirt1v crmd[5744]: warning: FSA: Input I_DC_TIMEOUT from
> crm_timer_popped() received in state S_PENDING
> Sep 13 22:00:42 wirt1v crmd[5744]:  notice: State transition S_ELECTION
> -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED
> origin=election_timeout_popped ]
> Sep 13 22:00:42 wirt1v crmd[5744]: warning: FSA: Input I_ELECTION_DC
> from do_election_check() received in state S_INTEGRATION
> Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Notifications disabled
> Sep 13 22:00:42 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
> Sep 13 22:00:42 wirt1v pengine[3415]: warning: Scheduling Node wirt2v
> for STONITH
> Sep 13 22:00:42 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
> Sep 13 22:00:42 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
> Sep 13 22:00:42 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac1#011(wirt1v)
> Sep 13 22:00:42 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac2#011(wirt1v)
> Sep 13 22:00:42 wirt1v pengine[3415]: warning: Calculated Transition 84:
> /var/lib/pacemaker/pengine/pe-warn-294.bz2
> Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 4: monitor
> Drbd2:0_monitor_0 on wirt1v (local)
> Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 5: monitor
> Virtfs2:0_monitor_0 on wirt1v (local)
> Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 6: monitor
> dlm:0_monitor_0 on wirt1v (local)
> Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 7: monitor
> fencing-idrac1_monitor_0 on wirt1v (local)
> Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 8: monitor
> fencing-idrac2_monitor_0 on wirt1v (local)
> Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Executing reboot fencing
> operation (50) on wirt2v (timeout=60000)
> Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: Client
> crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
> Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: Initiating remote
> operation reboot for wirt2v: e87b942f-997d-42ad-91ad-dfa501f4ede0 (0)
> Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:42 wirt1v Filesystem(Virtfs2)[5753]: WARNING: Couldn't find
> device [/dev/drbd2]. Expected /dev/??? to exist
> Sep 13 22:00:42 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation
> fencing-idrac1_monitor_0: not running (node=wirt1v, call=33, rc=7,
> cib-update=31, confirmed=true)
> Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation
> fencing-idrac2_monitor_0: not running (node=wirt1v, call=35, rc=7,
> cib-update=32, confirmed=true)
> Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation dlm_monitor_0: not
> running (node=wirt1v, call=31, rc=7, cib-update=33, confirmed=true)
> Sep 13 22:00:43 wirt1v crmd[5744]:   error: pcmkRegisterNode: Triggered
> assert at xml.c:594 : node->type == XML_ELEMENT_NODE
> Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation Drbd2_monitor_0:
> not running (node=wirt1v, call=27, rc=7, cib-update=34, confirmed=true)
> Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation Virtfs2_monitor_0:
> not running (node=wirt1v, call=29, rc=7, cib-update=35, confirmed=true)
> Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Initiating action 3:
> probe_complete probe_complete-wirt1v on wirt1v (local) - no waiting
> Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Transition aborted by
> status-1-probe_complete, probe_complete=true: Transient attribute change
> (create cib=0.69.11, source=abort_unless_down:319,
> path=/cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1'],
> 0)
> Sep 13 22:00:43 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:43 wirt1v stonith-ng[5742]:   error: Operation 'reboot'
> [5849] (call 2 from crmd.5744) for host 'wirt2v' with device
> 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
> Sep 13 22:00:43 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5849 [
> Failed: Unable to obtain correct plug status or plug is not available ]
> Sep 13 22:00:43 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5849 [  ]
> Sep 13 22:00:43 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5849 [  ]
> Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone
> to fence (reboot) wirt2v with any device
> Sep 13 22:00:43 wirt1v stonith-ng[5742]:   error: Operation reboot of
> wirt2v by <no-one> for crmd.5744 at wirt1v.e87b942f: No route to host
> Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Stonith operation
> 2/50:84:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
> Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Stonith operation 2 for
> wirt2v failed (No route to host): aborting transition.
> Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Peer wirt2v was not
> terminated (reboot) by <anyone> for wirt1v: No route to host
> (ref=e87b942f-997d-42ad-91ad-dfa501f4ede0) by client crmd.5744
> Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Transition 84 (Complete=12,
> Pending=0, Fired=0, Skipped=0, Incomplete=15,
> Source=/var/lib/pacemaker/pengine/pe-warn-294.bz2): Complete
> Sep 13 22:00:43 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
> Sep 13 22:00:43 wirt1v pengine[3415]: warning: Scheduling Node wirt2v
> for STONITH
> Sep 13 22:00:43 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
> Sep 13 22:00:43 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
> Sep 13 22:00:43 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac1#011(wirt1v)
> Sep 13 22:00:43 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac2#011(wirt1v)
> Sep 13 22:00:43 wirt1v pengine[3415]: warning: Calculated Transition 85:
> /var/lib/pacemaker/pengine/pe-warn-295.bz2
> Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Executing reboot fencing
> operation (45) on wirt2v (timeout=60000)
> Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: Client
> crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
> Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: Initiating remote
> operation reboot for wirt2v: 880b2614-09d2-47df-b740-e1d24732e6c5 (0)
> Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:43 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:44 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:44 wirt1v stonith-ng[5742]:   error: Operation 'reboot'
> [5879] (call 3 from crmd.5744) for host 'wirt2v' with device
> 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
> Sep 13 22:00:44 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5879 [
> Failed: Unable to obtain correct plug status or plug is not available ]
> Sep 13 22:00:44 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5879 [  ]
> Sep 13 22:00:44 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5879 [  ]
> Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone
> to fence (reboot) wirt2v with any device
> Sep 13 22:00:44 wirt1v stonith-ng[5742]:   error: Operation reboot of
> wirt2v by <no-one> for crmd.5744 at wirt1v.880b2614: No route to host
> Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Stonith operation
> 3/45:85:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
> Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Stonith operation 3 for
> wirt2v failed (No route to host): aborting transition.
> Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Transition aborted: Stonith
> failed (source=tengine_stonith_callback:733, 0)
> Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Peer wirt2v was not
> terminated (reboot) by <anyone> for wirt1v: No route to host
> (ref=880b2614-09d2-47df-b740-e1d24732e6c5) by client crmd.5744
> Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Transition 85 (Complete=5,
> Pending=0, Fired=0, Skipped=0, Incomplete=15,
> Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
> Sep 13 22:00:44 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
> Sep 13 22:00:44 wirt1v pengine[3415]: warning: Scheduling Node wirt2v
> for STONITH
> Sep 13 22:00:44 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
> Sep 13 22:00:44 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
> Sep 13 22:00:44 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac1#011(wirt1v)
> Sep 13 22:00:44 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac2#011(wirt1v)
> Sep 13 22:00:44 wirt1v pengine[3415]: warning: Calculated Transition 86:
> /var/lib/pacemaker/pengine/pe-warn-295.bz2
> Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Executing reboot fencing
> operation (45) on wirt2v (timeout=60000)
> Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: Client
> crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
> Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: Initiating remote
> operation reboot for wirt2v: 4c7af8ee-ffa6-4381-8d98-073d5abba631 (0)
> Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:44 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:45 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:45 wirt1v stonith-ng[5742]:   error: Operation 'reboot'
> [5893] (call 4 from crmd.5744) for host 'wirt2v' with device
> 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
> Sep 13 22:00:45 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5893 [
> Failed: Unable to obtain correct plug status or plug is not available ]
> Sep 13 22:00:45 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5893 [  ]
> Sep 13 22:00:45 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5893 [  ]
> Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone
> to fence (reboot) wirt2v with any device
> Sep 13 22:00:45 wirt1v stonith-ng[5742]:   error: Operation reboot of
> wirt2v by <no-one> for crmd.5744 at wirt1v.4c7af8ee: No route to host
> Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Stonith operation
> 4/45:86:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
> Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Stonith operation 4 for
> wirt2v failed (No route to host): aborting transition.
> Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Transition aborted: Stonith
> failed (source=tengine_stonith_callback:733, 0)
> Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Peer wirt2v was not
> terminated (reboot) by <anyone> for wirt1v: No route to host
> (ref=4c7af8ee-ffa6-4381-8d98-073d5abba631) by client crmd.5744
> Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Transition 86 (Complete=5,
> Pending=0, Fired=0, Skipped=0, Incomplete=15,
> Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
> Sep 13 22:00:45 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
> Sep 13 22:00:45 wirt1v pengine[3415]: warning: Scheduling Node wirt2v
> for STONITH
> Sep 13 22:00:45 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
> Sep 13 22:00:45 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
> Sep 13 22:00:45 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac1#011(wirt1v)
> Sep 13 22:00:45 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac2#011(wirt1v)
> Sep 13 22:00:45 wirt1v pengine[3415]: warning: Calculated Transition 87:
> /var/lib/pacemaker/pengine/pe-warn-295.bz2
> Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Executing reboot fencing
> operation (45) on wirt2v (timeout=60000)
> Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: Client
> crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
> Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: Initiating remote
> operation reboot for wirt2v: 268e4c7b-0340-4cf5-9c88-4f3c203f1499 (0)
> Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:46 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:47 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:47 wirt1v stonith-ng[5742]:   error: Operation 'reboot'
> [5907] (call 5 from crmd.5744) for host 'wirt2v' with device
> 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
> Sep 13 22:00:47 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5907 [
> Failed: Unable to obtain correct plug status or plug is not available ]
> Sep 13 22:00:47 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5907 [  ]
> Sep 13 22:00:47 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5907 [  ]
> Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone
> to fence (reboot) wirt2v with any device
> Sep 13 22:00:47 wirt1v stonith-ng[5742]:   error: Operation reboot of
> wirt2v by <no-one> for crmd.5744 at wirt1v.268e4c7b: No route to host
> Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Stonith operation
> 5/45:87:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
> Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Stonith operation 5 for
> wirt2v failed (No route to host): aborting transition.
> Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Transition aborted: Stonith
> failed (source=tengine_stonith_callback:733, 0)
> Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Peer wirt2v was not
> terminated (reboot) by <anyone> for wirt1v: No route to host
> (ref=268e4c7b-0340-4cf5-9c88-4f3c203f1499) by client crmd.5744
> Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Transition 87 (Complete=5,
> Pending=0, Fired=0, Skipped=0, Incomplete=15,
> Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
> Sep 13 22:00:47 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
> Sep 13 22:00:47 wirt1v pengine[3415]: warning: Scheduling Node wirt2v
> for STONITH
> Sep 13 22:00:47 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
> Sep 13 22:00:47 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
> Sep 13 22:00:47 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac1#011(wirt1v)
> Sep 13 22:00:47 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac2#011(wirt1v)
> Sep 13 22:00:47 wirt1v pengine[3415]: warning: Calculated Transition 88:
> /var/lib/pacemaker/pengine/pe-warn-295.bz2
> Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Executing reboot fencing
> operation (45) on wirt2v (timeout=60000)
> Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: Client
> crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
> Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: Initiating remote
> operation reboot for wirt2v: 8c5bf217-030f-400a-b1f8-7aa19918954f (0)
> Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:47 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:48 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:48 wirt1v stonith-ng[5742]:   error: Operation 'reboot'
> [5921] (call 6 from crmd.5744) for host 'wirt2v' with device
> 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
> Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [
> Failed: Unable to obtain correct plug status or plug is not available ]
> Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [  ]
> Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [
> Failed: Unable to obtain correct plug status or plug is not available ]
> Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [  ]
> Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [  ]
> Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone
> to fence (reboot) wirt2v with any device
> Sep 13 22:00:48 wirt1v stonith-ng[5742]:   error: Operation reboot of
> wirt2v by <no-one> for crmd.5744 at wirt1v.8c5bf217: No route to host
> Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Stonith operation
> 6/45:88:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
> Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Stonith operation 6 for
> wirt2v failed (No route to host): aborting transition.
> Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Transition aborted: Stonith
> failed (source=tengine_stonith_callback:733, 0)
> Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Peer wirt2v was not
> terminated (reboot) by <anyone> for wirt1v: No route to host
> (ref=8c5bf217-030f-400a-b1f8-7aa19918954f) by client crmd.5744
> Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Transition 88 (Complete=5,
> Pending=0, Fired=0, Skipped=0, Incomplete=15,
> Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
> Sep 13 22:00:48 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
> Sep 13 22:00:48 wirt1v pengine[3415]: warning: Scheduling Node wirt2v
> for STONITH
> Sep 13 22:00:48 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
> Sep 13 22:00:48 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
> Sep 13 22:00:48 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac1#011(wirt1v)
> Sep 13 22:00:48 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac2#011(wirt1v)
> Sep 13 22:00:48 wirt1v pengine[3415]: warning: Calculated Transition 89:
> /var/lib/pacemaker/pengine/pe-warn-295.bz2
> Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Executing reboot fencing
> operation (45) on wirt2v (timeout=60000)
> Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: Client
> crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
> Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: Initiating remote
> operation reboot for wirt2v: 25e51799-e072-4622-bbb3-1430bdb20536 (0)
> Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:48 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:49 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:49 wirt1v stonith-ng[5742]:   error: Operation 'reboot'
> [5935] (call 7 from crmd.5744) for host 'wirt2v' with device
> 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
> Sep 13 22:00:49 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5935 [
> Failed: Unable to obtain correct plug status or plug is not available ]
> Sep 13 22:00:49 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5935 [  ]
> Sep 13 22:00:49 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5935 [  ]
> Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone
> to fence (reboot) wirt2v with any device
> Sep 13 22:00:49 wirt1v stonith-ng[5742]:   error: Operation reboot of
> wirt2v by <no-one> for crmd.5744 at wirt1v.25e51799: No route to host
> Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Stonith operation
> 7/45:89:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
> Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Stonith operation 7 for
> wirt2v failed (No route to host): aborting transition.
> Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Transition aborted: Stonith
> failed (source=tengine_stonith_callback:733, 0)
> Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Peer wirt2v was not
> terminated (reboot) by <anyone> for wirt1v: No route to host
> (ref=25e51799-e072-4622-bbb3-1430bdb20536) by client crmd.5744
> Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Transition 89 (Complete=5,
> Pending=0, Fired=0, Skipped=0, Incomplete=15,
> Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
> Sep 13 22:00:49 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
> Sep 13 22:00:49 wirt1v pengine[3415]: warning: Scheduling Node wirt2v
> for STONITH
> Sep 13 22:00:49 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
> Sep 13 22:00:49 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
> Sep 13 22:00:49 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac1#011(wirt1v)
> Sep 13 22:00:49 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac2#011(wirt1v)
> Sep 13 22:00:49 wirt1v pengine[3415]: warning: Calculated Transition 90:
> /var/lib/pacemaker/pengine/pe-warn-295.bz2
> Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Executing reboot fencing
> operation (45) on wirt2v (timeout=60000)
> Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: Client
> crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
> Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: Initiating remote
> operation reboot for wirt2v: 7f520e61-b613-49e4-9213-1958d8a68c6a (0)
> Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:49 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:50 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:50 wirt1v stonith-ng[5742]:   error: Operation 'reboot'
> [5949] (call 8 from crmd.5744) for host 'wirt2v' with device
> 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
> Sep 13 22:00:50 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5949 [
> Failed: Unable to obtain correct plug status or plug is not available ]
> Sep 13 22:00:50 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5949 [  ]
> Sep 13 22:00:50 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5949 [  ]
> Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone
> to fence (reboot) wirt2v with any device
> Sep 13 22:00:50 wirt1v stonith-ng[5742]:   error: Operation reboot of
> wirt2v by <no-one> for crmd.5744 at wirt1v.7f520e61: No route to host
> Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Stonith operation
> 8/45:90:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
> Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Stonith operation 8 for
> wirt2v failed (No route to host): aborting transition.
> Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Transition aborted: Stonith
> failed (source=tengine_stonith_callback:733, 0)
> Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Peer wirt2v was not
> terminated (reboot) by <anyone> for wirt1v: No route to host
> (ref=7f520e61-b613-49e4-9213-1958d8a68c6a) by client crmd.5744
> Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Transition 90 (Complete=5,
> Pending=0, Fired=0, Skipped=0, Incomplete=15,
> Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
> Sep 13 22:00:50 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
> Sep 13 22:00:50 wirt1v pengine[3415]: warning: Scheduling Node wirt2v
> for STONITH
> Sep 13 22:00:50 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
> Sep 13 22:00:50 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
> Sep 13 22:00:50 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac1#011(wirt1v)
> Sep 13 22:00:50 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac2#011(wirt1v)
> Sep 13 22:00:50 wirt1v pengine[3415]: warning: Calculated Transition 91:
> /var/lib/pacemaker/pengine/pe-warn-295.bz2
> Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Executing reboot fencing
> operation (45) on wirt2v (timeout=60000)
> Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: Client
> crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
> Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: Initiating remote
> operation reboot for wirt2v: 25b67d0b-5b8f-4cd8-82c2-4421474c111c (0)
> Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:50 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:51 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:51 wirt1v stonith-ng[5742]:   error: Operation 'reboot'
> [5963] (call 9 from crmd.5744) for host 'wirt2v' with device
> 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
> Sep 13 22:00:51 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5963 [
> Failed: Unable to obtain correct plug status or plug is not available ]
> Sep 13 22:00:51 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5963 [  ]
> Sep 13 22:00:51 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5963 [  ]
> Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone
> to fence (reboot) wirt2v with any device
> Sep 13 22:00:51 wirt1v stonith-ng[5742]:   error: Operation reboot of
> wirt2v by <no-one> for crmd.5744 at wirt1v.25b67d0b: No route to host
> Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Stonith operation
> 9/45:91:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
> Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Stonith operation 9 for
> wirt2v failed (No route to host): aborting transition.
> Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Transition aborted: Stonith
> failed (source=tengine_stonith_callback:733, 0)
> Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Peer wirt2v was not
> terminated (reboot) by <anyone> for wirt1v: No route to host
> (ref=25b67d0b-5b8f-4cd8-82c2-4421474c111c) by client crmd.5744
> Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Transition 91 (Complete=5,
> Pending=0, Fired=0, Skipped=0, Incomplete=15,
> Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
> Sep 13 22:00:51 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
> Sep 13 22:00:51 wirt1v pengine[3415]: warning: Scheduling Node wirt2v
> for STONITH
> Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
> Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
> Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac1#011(wirt1v)
> Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac2#011(wirt1v)
> Sep 13 22:00:51 wirt1v pengine[3415]: warning: Calculated Transition 92:
> /var/lib/pacemaker/pengine/pe-warn-295.bz2
> Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Executing reboot fencing
> operation (45) on wirt2v (timeout=60000)
> Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: Client
> crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
> Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: Initiating remote
> operation reboot for wirt2v: 292a57e9-fd1b-4630-8c10-0d48a268fd68 (0)
> Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:51 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:52 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:52 wirt1v stonith-ng[5742]:   error: Operation 'reboot'
> [5977] (call 10 from crmd.5744) for host 'wirt2v' with device
> 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
> Sep 13 22:00:52 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5977 [
> Failed: Unable to obtain correct plug status or plug is not available ]
> Sep 13 22:00:52 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5977 [  ]
> Sep 13 22:00:52 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5977 [  ]
> Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone
> to fence (reboot) wirt2v with any device
> Sep 13 22:00:52 wirt1v stonith-ng[5742]:   error: Operation reboot of
> wirt2v by <no-one> for crmd.5744 at wirt1v.292a57e9: No route to host
> Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Stonith operation
> 10/45:92:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
> Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Stonith operation 10 for
> wirt2v failed (No route to host): aborting transition.
> Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Transition aborted: Stonith
> failed (source=tengine_stonith_callback:733, 0)
> Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Peer wirt2v was not
> terminated (reboot) by <anyone> for wirt1v: No route to host
> (ref=292a57e9-fd1b-4630-8c10-0d48a268fd68) by client crmd.5744
> Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Transition 92 (Complete=5,
> Pending=0, Fired=0, Skipped=0, Incomplete=15,
> Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
> Sep 13 22:00:52 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
> Sep 13 22:00:52 wirt1v pengine[3415]: warning: Scheduling Node wirt2v
> for STONITH
> Sep 13 22:00:52 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
> Sep 13 22:00:52 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
> Sep 13 22:00:52 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac1#011(wirt1v)
> Sep 13 22:00:52 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac2#011(wirt1v)
> Sep 13 22:00:52 wirt1v pengine[3415]: warning: Calculated Transition 93:
> /var/lib/pacemaker/pengine/pe-warn-295.bz2
> Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Executing reboot fencing
> operation (45) on wirt2v (timeout=60000)
> Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: Client
> crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
> Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: Initiating remote
> operation reboot for wirt2v: f324baad-ef9b-44e6-9e09-02176fa447ef (0)
> Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:53 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:54 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:54 wirt1v stonith-ng[5742]:   error: Operation 'reboot'
> [5991] (call 11 from crmd.5744) for host 'wirt2v' with device
> 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
> Sep 13 22:00:54 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5991 [
> Failed: Unable to obtain correct plug status or plug is not available ]
> Sep 13 22:00:54 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5991 [  ]
> Sep 13 22:00:54 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5991 [  ]
> Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone
> to fence (reboot) wirt2v with any device
> Sep 13 22:00:54 wirt1v stonith-ng[5742]:   error: Operation reboot of
> wirt2v by <no-one> for crmd.5744 at wirt1v.f324baad: No route to host
> Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Stonith operation
> 11/45:93:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
> Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Stonith operation 11 for
> wirt2v failed (No route to host): aborting transition.
> Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Transition aborted: Stonith
> failed (source=tengine_stonith_callback:733, 0)
> Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Peer wirt2v was not
> terminated (reboot) by <anyone> for wirt1v: No route to host
> (ref=f324baad-ef9b-44e6-9e09-02176fa447ef) by client crmd.5744
> Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Transition 93 (Complete=5,
> Pending=0, Fired=0, Skipped=0, Incomplete=15,
> Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
> Sep 13 22:00:54 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
> Sep 13 22:00:54 wirt1v pengine[3415]: warning: Scheduling Node wirt2v
> for STONITH
> Sep 13 22:00:54 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
> Sep 13 22:00:54 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
> Sep 13 22:00:54 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac1#011(wirt1v)
> Sep 13 22:00:54 wirt1v pengine[3415]:  notice: Start  
> fencing-idrac2#011(wirt1v)
> Sep 13 22:00:54 wirt1v pengine[3415]: warning: Calculated Transition 94:
> /var/lib/pacemaker/pengine/pe-warn-295.bz2
> Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Executing reboot fencing
> operation (45) on wirt2v (timeout=60000)
> Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: Client
> crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
> Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: Initiating remote
> operation reboot for wirt2v: 61af386a-ce3f-438f-b83b-90dee4bdb1c6 (0)
> Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can
> fence (reboot) wirt2v: static-list
> Sep 13 22:00:54 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:55 wirt1v fence_idrac: Failed: Unable to obtain correct
> plug status or plug is not available
> Sep 13 22:00:55 wirt1v stonith-ng[5742]:   error: Operation 'reboot'
> [6005] (call 12 from crmd.5744) for host 'wirt2v' with device
> 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
> Sep 13 22:00:55 wirt1v stonith-ng[5742]: warning: fencing-idrac2:6005 [
> Failed: Unable to obtain correct plug status or plug is not available ]
> Sep 13 22:00:55 wirt1v stonith-ng[5742]: warning: fencing-idrac2:6005 [  ]
> Sep 13 22:00:55 wirt1v stonith-ng[5742]: warning: fencing-idrac2:6005 [  ]
> Sep 13 22:00:55 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone
> to fence (reboot) wirt2v with any device
> Sep 13 22:00:55 wirt1v stonith-ng[5742]:   error: Operation reboot of
> wirt2v by <no-one> for crmd.5744 at wirt1v.61af386a: No route to host
> Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Stonith operation
> 12/45:94:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
> Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Stonith operation 12 for
> wirt2v failed (No route to host): aborting transition.
> Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Transition aborted: Stonith
> failed (source=tengine_stonith_callback:733, 0)
> Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Peer wirt2v was not
> terminated (reboot) by <anyone> for wirt1v: No route to host
> (ref=61af386a-ce3f-438f-b83b-90dee4bdb1c6) by client crmd.5744
> Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Transition 94 (Complete=5,
> Pending=0, Fired=0, Skipped=0, Incomplete=15,
> Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
> Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Too many failures to fence
> wirt2v (11), giving up
> Sep 13 22:00:55 wirt1v crmd[5744]:  notice: State transition
> S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL
> origin=notify_crmd ]
> 
> # -------------------- end of /var/log/messages
> 
> 
> 
> 
> 
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://clusterlabs.org/mailman/listinfo/users
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
> 
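
For what it's worth, the tail of the log above is the expected pattern
when the surviving node can't reach the peer's fence device: each
transition schedules the Drbd2/dlm starts together with the reboot of
wirt2v, the fence_idrac call fails with "Unable to obtain correct plug
status or plug is not available", the transition is aborted, and after
11 attempts crmd gives up. As long as no stonith device can confirm
that wirt2v is really down, nothing is allowed to start.

You can usually reproduce that failure outside the cluster by running
the fence agent by hand -- a minimal sketch, assuming the iDRAC address
and credentials that fencing-idrac2 is configured with (the values in
angle brackets are placeholders, substitute your real ones):

  # Query the power status of wirt2v's iDRAC directly; the same
  # "Unable to obtain correct plug status" error as in the log means
  # the iDRAC itself is unreachable.
  fence_idrac --ip=<idrac2-address> --lanplus \
              --username=<idrac-user> --password=<idrac-password> \
              --action=status

  # Or exercise the configured stonith device the same way pacemaker
  # would (note: this really fences wirt2v if it succeeds):
  pcs stonith fence wirt2v

If the manual call fails as well, the problem is in the fence path
(iDRAC power, network, credentials), not in pacemaker.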





More information about the Users mailing list