<div dir="ltr"><div><div>Hi,<br><br>I'm still testing (before production running) the solution with pacemaker+corosync+drbd+dlm+gfs2 on Centos7 with double-primary config.<br><br>I have two nodes: wirt1v and wirt2v - each node contains LVM partition  with DRBD (/dev/drbd2) and filesystem mounted as /virtfs2. Filesystems /virtfs2 contain the images of virtual machines.<br><br>My problem is so - I can't start the cluster and the resources on one node only (cold start) when the second node is completely powered off.<br><br>Is it in such configuration at all posssible - is it posible to start one node only?<br><br>Could you help me, please?<br><br>The  configs and log (during cold start)  are attached. <br><br>Thanks in advance,<br>Gienek Nowacki<br><br>==============================================================<br><br>#---------------------------------<br>### result:  cat /etc/redhat-release  ###<br><br>CentOS Linux release 7.2.1511 (Core)<br><br>#---------------------------------<br>### result:  uname -a  ###<br><br>Linux <a href="http://wirt1v.example.com">wirt1v.example.com</a> 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux<br><br>#---------------------------------<br>### result:  cat /etc/hosts  ###<br><br>127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4<br>172.31.0.23     <a href="http://wirt1.example.com">wirt1.example.com</a> wirt1<br>172.31.0.24     <a href="http://wirt2.example.com">wirt2.example.com</a> wirt2<br>1.1.1.1         <a href="http://wirt1v.example.com">wirt1v.example.com</a> wirt1v<br>1.1.1.2         <a href="http://wirt2v.example.com">wirt2v.example.com</a> wirt2v<br><br>#---------------------------------<br>### result:  cat /etc/drbd.conf  ###<br><br>include "drbd.d/global_common.conf";<br>include "drbd.d/*.res";<br><br>#---------------------------------<br>### result:  cat /etc/drbd.d/global_common.conf  ###<br><br>common {<br>        protocol C;<br>        syncer {<br>                verify-alg sha1;<br>        }<br>        startup {<br>                become-primary-on both;<br>                wfc-timeout 30;<br>                outdated-wfc-timeout 20;<br>                degr-wfc-timeout 30;<br>        }<br>        disk {<br>                fencing resource-and-stonith;<br>        }<br>        handlers {<br>                fence-peer "/usr/lib/drbd/crm-fence-peer.sh";<br>                after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";<br>                split-brain             "/usr/lib/drbd/notify-split-brain.sh <a href="mailto:linuxadmin@example.com">linuxadmin@example.com</a>";<br>                pri-lost-after-sb       "/usr/lib/drbd/notify-split-brain.sh <a href="mailto:linuxadmin@example.com">linuxadmin@example.com</a>";<br>                out-of-sync             "/usr/lib/drbd/notify-out-of-sync.sh <a href="mailto:linuxadmin@example.com">linuxadmin@example.com</a>";<br>                local-io-error          "/usr/lib/drbd/notify-io-error.sh    <a href="mailto:linuxadmin@example.com">linuxadmin@example.com</a>";<br>        }<br>        net {<br>                allow-two-primaries;<br>                after-sb-0pri discard-zero-changes;<br>                after-sb-1pri discard-secondary;<br>                after-sb-2pri disconnect;<br>        }<br>}<br><br>#---------------------------------<br>### result:  cat /etc/drbd.d/drbd2.res  ###<br><br>resource drbd2 {<br>        meta-disk internal;<br>        device /dev/drbd2;<br>        on <a 
href="http://wirt1v.example.com">wirt1v.example.com</a> {<br>                disk /dev/vg02/drbd2;<br>                address <a href="http://1.1.1.1:7782">1.1.1.1:7782</a>;<br>        }<br>        on <a href="http://wirt2v.example.com">wirt2v.example.com</a> {<br>                disk /dev/vg02/drbd2;<br>                address <a href="http://1.1.1.2:7782">1.1.1.2:7782</a>;<br>        }<br>}<br><br>#---------------------------------<br>### result:  cat /etc/corosync/corosync.conf  ###<br><br>totem {<br>    version: 2<br>    secauth: off<br>    cluster_name: klasterek<br>    transport: udpu<br>}<br>nodelist {<br>    node {<br>        ring0_addr: wirt1v<br>        nodeid: 1<br>    }<br>    node {<br>        ring0_addr: wirt2v<br>        nodeid: 2<br>    }<br>}<br>quorum {<br>    provider: corosync_votequorum<br>    two_node: 1<br>}<br>logging {<br>    to_logfile: yes<br>    logfile: /var/log/cluster/corosync.log<br>    to_syslog: yes<br>}<br><br>#---------------------------------<br>### result:  mount | grep virtfs2  ###<br><br>/dev/drbd2 on /virtfs2 type gfs2 (rw,relatime,seclabel)<br><br>#---------------------------------<br>### result:  pcs status  ###<br><br>Cluster name: klasterek<br>Last updated: Tue Sep 13 20:01:40 2016          Last change: Tue Sep 13 18:31:33 2016 by root via crm_resource on wirt1v<br>Stack: corosync<br>Current DC: wirt1v (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum<br>2 nodes and 8 resources configured<br>Online: [ wirt1v wirt2v ]<br>Full list of resources:<br> Master/Slave Set: Drbd2-clone [Drbd2]<br>     Masters: [ wirt1v wirt2v ]<br> Clone Set: Virtfs2-clone [Virtfs2]<br>     Started: [ wirt1v wirt2v ]<br> Clone Set: dlm-clone [dlm]<br>     Started: [ wirt1v wirt2v ]<br> fencing-idrac1 (stonith:fence_idrac):  Started wirt1v<br> fencing-idrac2 (stonith:fence_idrac):  Started wirt2v<br>PCSD Status:<br>  wirt1v: Online<br>  wirt2v: Online<br>Daemon Status:<br>  corosync: active/disabled<br>  pacemaker: active/disabled<br>  pcsd: active/enabled<br><br>#---------------------------------<br>### result:  pcs property  ###<br><br>Cluster Properties:<br> cluster-infrastructure: corosync<br> cluster-name: klasterek<br> dc-version: 1.1.13-10.el7_2.4-44eb2dd<br> have-watchdog: false<br> no-quorum-policy: ignore<br> stonith-enabled: true<br> symmetric-cluster: true<br><br>#---------------------------------<br>### result:  pcs cluster cib  ###<br><br><cib crm_feature_set="3.0.10" validate-with="pacemaker-2.3" epoch="69" num_updates="38" admin_epoch="0" cib-last-written="Tue Sep 13 18:31:33 2016" update-origin="wirt1v" update-client="crm_resource" update-user="root" have-quorum="1" dc-uuid="1"><br>  <configuration><br>    <crm_config><br>      <cluster_property_set id="cib-bootstrap-options"><br>        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/><br>        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.13-10.el7_2.4-44eb2dd"/><br>        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/><br>        <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="klasterek"/><br>        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/><br>        <nvpair id="cib-bootstrap-options-symmetric-cluster" name="symmetric-cluster" value="true"/><br>        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true"/><br>      </cluster_property_set><br>    
#---------------------------------
### result:  mount | grep virtfs2  ###

/dev/drbd2 on /virtfs2 type gfs2 (rw,relatime,seclabel)

#---------------------------------
### result:  pcs status  ###

Cluster name: klasterek
Last updated: Tue Sep 13 20:01:40 2016          Last change: Tue Sep 13 18:31:33 2016 by root via crm_resource on wirt1v
Stack: corosync
Current DC: wirt1v (version 1.1.13-10.el7_2.4-44eb2dd) - partition with quorum
2 nodes and 8 resources configured
Online: [ wirt1v wirt2v ]
Full list of resources:
 Master/Slave Set: Drbd2-clone [Drbd2]
     Masters: [ wirt1v wirt2v ]
 Clone Set: Virtfs2-clone [Virtfs2]
     Started: [ wirt1v wirt2v ]
 Clone Set: dlm-clone [dlm]
     Started: [ wirt1v wirt2v ]
 fencing-idrac1 (stonith:fence_idrac):  Started wirt1v
 fencing-idrac2 (stonith:fence_idrac):  Started wirt2v
PCSD Status:
  wirt1v: Online
  wirt2v: Online
Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled

#---------------------------------
### result:  pcs property  ###

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: klasterek
 dc-version: 1.1.13-10.el7_2.4-44eb2dd
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enabled: true
 symmetric-cluster: true

#---------------------------------
### result:  pcs cluster cib  ###

<cib crm_feature_set="3.0.10" validate-with="pacemaker-2.3" epoch="69" num_updates="38" admin_epoch="0" cib-last-written="Tue Sep 13 18:31:33 2016" update-origin="wirt1v" update-client="crm_resource" update-user="root" have-quorum="1" dc-uuid="1">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-have-watchdog" name="have-watchdog" value="false"/>
        <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.13-10.el7_2.4-44eb2dd"/>
        <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
        <nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name" value="klasterek"/>
        <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
        <nvpair id="cib-bootstrap-options-symmetric-cluster" name="symmetric-cluster" value="true"/>
        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="true"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="1" uname="wirt1v"/>
      <node id="2" uname="wirt2v"/>
    </nodes>
    <resources>
      <master id="Drbd2-clone">
        <primitive class="ocf" id="Drbd2" provider="linbit" type="drbd">
          <instance_attributes id="Drbd2-instance_attributes">
            <nvpair id="Drbd2-instance_attributes-drbd_resource" name="drbd_resource" value="drbd2"/>
          </instance_attributes>
          <operations>
            <op id="Drbd2-start-interval-0s" interval="0s" name="start" timeout="240"/>
            <op id="Drbd2-promote-interval-0s" interval="0s" name="promote" timeout="90"/>
            <op id="Drbd2-demote-interval-0s" interval="0s" name="demote" timeout="90"/>
            <op id="Drbd2-stop-interval-0s" interval="0s" name="stop" timeout="100"/>
            <op id="Drbd2-monitor-interval-60s" interval="60s" name="monitor"/>
          </operations>
        </primitive>
        <meta_attributes id="Drbd2-clone-meta_attributes">
          <nvpair id="Drbd2-clone-meta_attributes-master-max" name="master-max" value="2"/>
          <nvpair id="Drbd2-clone-meta_attributes-master-node-max" name="master-node-max" value="1"/>
          <nvpair id="Drbd2-clone-meta_attributes-clone-max" name="clone-max" value="2"/>
          <nvpair id="Drbd2-clone-meta_attributes-clone-node-max" name="clone-node-max" value="1"/>
          <nvpair id="Drbd2-clone-meta_attributes-notify" name="notify" value="true"/>
          <nvpair id="Drbd2-clone-meta_attributes-globally-unique" name="globally-unique" value="false"/>
          <nvpair id="Drbd2-clone-meta_attributes-interleave" name="interleave" value="true"/>
          <nvpair id="Drbd2-clone-meta_attributes-ordered" name="ordered" value="true"/>
        </meta_attributes>
      </master>
      <clone id="Virtfs2-clone">
        <primitive class="ocf" id="Virtfs2" provider="heartbeat" type="Filesystem">
          <instance_attributes id="Virtfs2-instance_attributes">
            <nvpair id="Virtfs2-instance_attributes-device" name="device" value="/dev/drbd2"/>
            <nvpair id="Virtfs2-instance_attributes-directory" name="directory" value="/virtfs2"/>
            <nvpair id="Virtfs2-instance_attributes-fstype" name="fstype" value="gfs2"/>
          </instance_attributes>
          <operations>
            <op id="Virtfs2-start-interval-0s" interval="0s" name="start" timeout="60"/>
            <op id="Virtfs2-stop-interval-0s" interval="0s" name="stop" timeout="60"/>
            <op id="Virtfs2-monitor-interval-20" interval="20" name="monitor" timeout="40"/>
          </operations>
        </primitive>
        <meta_attributes id="Virtfs2-clone-meta_attributes">
          <nvpair id="Virtfs2-interleave" name="interleave" value="true"/>
        </meta_attributes>
      </clone>
      <clone id="dlm-clone">
        <primitive class="ocf" id="dlm" provider="pacemaker" type="controld">
          <instance_attributes id="dlm-instance_attributes"/>
          <operations>
            <op id="dlm-start-interval-0s" interval="0s" name="start" timeout="90"/>
            <op id="dlm-stop-interval-0s" interval="0s" name="stop" timeout="100"/>
            <op id="dlm-monitor-interval-60s" interval="60s" name="monitor"/>
          </operations>
        </primitive>
        <meta_attributes id="dlm-clone-meta_attributes">
          <nvpair id="dlm-clone-max" name="clone-max" value="2"/>
          <nvpair id="dlm-clone-node-max" name="clone-node-max" value="1"/>
          <nvpair id="dlm-interleave" name="interleave" value="true"/>
          <nvpair id="dlm-ordered" name="ordered" value="true"/>
        </meta_attributes>
      </clone>
      <primitive class="stonith" id="fencing-idrac1" type="fence_idrac">
        <instance_attributes id="fencing-idrac1-instance_attributes">
          <nvpair id="fencing-idrac1-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="wirt1v"/>
          <nvpair id="fencing-idrac1-instance_attributes-ipaddr" name="ipaddr" value="172.31.0.223"/>
          <nvpair id="fencing-idrac1-instance_attributes-lanplus" name="lanplus" value="on"/>
          <nvpair id="fencing-idrac1-instance_attributes-login" name="login" value="root"/>
          <nvpair id="fencing-idrac1-instance_attributes-passwd" name="passwd" value="my1secret2password3"/>
          <nvpair id="fencing-idrac1-instance_attributes-action" name="action" value="reboot"/>
        </instance_attributes>
        <operations>
          <op id="fencing-idrac1-monitor-interval-60" interval="60" name="monitor"/>
        </operations>
      </primitive>
      <primitive class="stonith" id="fencing-idrac2" type="fence_idrac">
        <instance_attributes id="fencing-idrac2-instance_attributes">
          <nvpair id="fencing-idrac2-instance_attributes-pcmk_host_list" name="pcmk_host_list" value="wirt2v"/>
          <nvpair id="fencing-idrac2-instance_attributes-ipaddr" name="ipaddr" value="172.31.0.224"/>
          <nvpair id="fencing-idrac2-instance_attributes-lanplus" name="lanplus" value="on"/>
          <nvpair id="fencing-idrac2-instance_attributes-login" name="login" value="root"/>
          <nvpair id="fencing-idrac2-instance_attributes-passwd" name="passwd" value="my1secret2password3"/>
          <nvpair id="fencing-idrac2-instance_attributes-action" name="action" value="reboot"/>
        </instance_attributes>
        <operations>
          <op id="fencing-idrac2-monitor-interval-60" interval="60" name="monitor"/>
        </operations>
      </primitive>
    </resources>
    <constraints>
      <rsc_colocation id="colocation-Virtfs2-clone-Drbd2-clone-INFINITY" rsc="Virtfs2-clone" score="INFINITY" with-rsc="Drbd2-clone" with-rsc-role="Master"/>
      <rsc_order first="Drbd2-clone" first-action="promote" id="order-Drbd2-clone-Virtfs2-clone-mandatory" then="Virtfs2-clone" then-action="start"/>
      <rsc_order first="dlm-clone" first-action="start" id="order-dlm-clone-Virtfs2-clone-mandatory" then="Virtfs2-clone" then-action="start"/>
      <rsc_colocation id="colocation-Virtfs2-clone-dlm-clone-INFINITY" rsc="Virtfs2-clone" score="INFINITY" with-rsc="dlm-clone"/>
    </constraints>
    <rsc_defaults>
      <meta_attributes id="rsc_defaults-options">
        <nvpair id="rsc_defaults-options-resource-stickiness" name="resource-stickiness" value="100"/>
      </meta_attributes>
    </rsc_defaults>
  </configuration>
  <status>
    <node_state id="1" uname="wirt1v" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
      <lrm id="1">
        <lrm_resources>
          <lrm_resource id="fencing-idrac1" type="fence_idrac" class="stonith">
            <lrm_rsc_op id="fencing-idrac1_last_0" operation_key="fencing-idrac1_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="55:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;55:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt1v" call-id="27" rc-code="0" op-status="0" interval="0" last-run="1473786030" last-rc-change="1473786030" exec-time="54" queue-time="0" op-digest="c5f495355c70285327a4ecd128166155" op-secure-params=" passwd " op-secure-digest="58f15e2aeb9ef41c7d7016ac60c95b3d"/>
            <lrm_rsc_op id="fencing-idrac1_monitor_60000" operation_key="fencing-idrac1_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="51:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;51:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt1v" call-id="29" rc-code="0" op-status="0" interval="60000" last-rc-change="1473786031" exec-time="54" queue-time="0" op-digest="2c3a04590a892a02a6109a0e8bd4b89a" op-secure-params=" passwd " op-secure-digest="58f15e2aeb9ef41c7d7016ac60c95b3d"/>
          </lrm_resource>
          <lrm_resource id="fencing-idrac2" type="fence_idrac" class="stonith">
            <lrm_rsc_op id="fencing-idrac2_last_0" operation_key="fencing-idrac2_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="8:0:7:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:7;8:0:7:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt1v" call-id="24" rc-code="7" op-status="0" interval="0" last-run="1473786029" last-rc-change="1473786029" exec-time="0" queue-time="0" op-digest="62957a33f7a67eda09c15e3f933f2d0b" op-secure-params=" passwd " op-secure-digest="65925748cee98be7e9d827ae5f2eb74f"/>
          </lrm_resource>
          <lrm_resource id="Drbd2" type="drbd" class="ocf" provider="linbit">
            <lrm_rsc_op id="Drbd2_last_0" operation_key="Drbd2_promote_0" operation="promote" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="10:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;10:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt1v" call-id="33" rc-code="0" op-status="0" interval="0" last-run="1473786032" last-rc-change="1473786032" exec-time="64" queue-time="1" op-digest="d0c8a735862843030d8426a5218ceb92"/>
          </lrm_resource>
          <lrm_resource id="Virtfs2" type="Filesystem" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="Virtfs2_last_0" operation_key="Virtfs2_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="41:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;41:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt1v" call-id="35" rc-code="0" op-status="0" interval="0" last-run="1473786032" last-rc-change="1473786032" exec-time="1372" queue-time="0" op-digest="8dbd904c2115508ebcf3dffe8e7c6d82"/>
            <lrm_rsc_op id="Virtfs2_monitor_20000" operation_key="Virtfs2_monitor_20000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="42:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;42:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt1v" call-id="36" rc-code="0" op-status="0" interval="20000" last-rc-change="1473786034" exec-time="64" queue-time="0" op-digest="051271837d1a8eccc0af38fbd8c406e4"/>
          </lrm_resource>
          <lrm_resource id="dlm" type="controld" class="ocf" provider="pacemaker">
            <lrm_rsc_op id="dlm_last_0" operation_key="dlm_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="47:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;47:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt1v" call-id="26" rc-code="0" op-status="0" interval="0" last-run="1473786030" last-rc-change="1473786030" exec-time="1098" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="dlm_monitor_60000" operation_key="dlm_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="42:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;42:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt1v" call-id="28" rc-code="0" op-status="0" interval="60000" last-rc-change="1473786031" exec-time="34" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
      <transient_attributes id="1">
        <instance_attributes id="status-1">
          <nvpair id="status-1-shutdown" name="shutdown" value="0"/>
          <nvpair id="status-1-probe_complete" name="probe_complete" value="true"/>
          <nvpair id="status-1-master-Drbd2" name="master-Drbd2" value="10000"/>
        </instance_attributes>
      </transient_attributes>
    </node_state>
    <node_state id="2" uname="wirt2v" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
      <lrm id="2">
        <lrm_resources>
          <lrm_resource id="fencing-idrac1" type="fence_idrac" class="stonith">
            <lrm_rsc_op id="fencing-idrac1_last_0" operation_key="fencing-idrac1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="13:0:7:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:7;13:0:7:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt2v" call-id="20" rc-code="7" op-status="0" interval="0" last-run="1473786029" last-rc-change="1473786029" exec-time="3" queue-time="0" op-digest="c5f495355c70285327a4ecd128166155" op-secure-params=" passwd " op-secure-digest="58f15e2aeb9ef41c7d7016ac60c95b3d"/>
          </lrm_resource>
          <lrm_resource id="fencing-idrac2" type="fence_idrac" class="stonith">
            <lrm_rsc_op id="fencing-idrac2_last_0" operation_key="fencing-idrac2_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="57:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;57:0:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt2v" call-id="25" rc-code="0" op-status="0" interval="0" last-run="1473786030" last-rc-change="1473786030" exec-time="62" queue-time="0" op-digest="62957a33f7a67eda09c15e3f933f2d0b" op-secure-params=" passwd " op-secure-digest="65925748cee98be7e9d827ae5f2eb74f"/>
            <lrm_rsc_op id="fencing-idrac2_monitor_60000" operation_key="fencing-idrac2_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="54:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;54:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt2v" call-id="26" rc-code="0" op-status="0" interval="60000" last-rc-change="1473786031" exec-time="74" queue-time="0" op-digest="02c5ce42002631d918b41adc571d64b8" op-secure-params=" passwd " op-secure-digest="65925748cee98be7e9d827ae5f2eb74f"/>
          </lrm_resource>
          <lrm_resource id="dlm" type="controld" class="ocf" provider="pacemaker">
            <lrm_rsc_op id="dlm_last_0" operation_key="dlm_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="43:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;43:1:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt2v" call-id="27" rc-code="0" op-status="0" interval="0" last-run="1473786031" last-rc-change="1473786031" exec-time="1102" queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
            <lrm_rsc_op id="dlm_monitor_60000" operation_key="dlm_monitor_60000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="50:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;50:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt2v" call-id="30" rc-code="0" op-status="0" interval="60000" last-rc-change="1473786032" exec-time="32" queue-time="0" op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
          </lrm_resource>
          <lrm_resource id="Drbd2" type="drbd" class="ocf" provider="linbit">
            <lrm_rsc_op id="Drbd2_last_0" operation_key="Drbd2_promote_0" operation="promote" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="13:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;13:2:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt2v" call-id="32" rc-code="0" op-status="0" interval="0" last-run="1473786032" last-rc-change="1473786032" exec-time="55" queue-time="0" op-digest="d0c8a735862843030d8426a5218ceb92"/>
          </lrm_resource>
          <lrm_resource id="Virtfs2" type="Filesystem" class="ocf" provider="heartbeat">
            <lrm_rsc_op id="Virtfs2_last_0" operation_key="Virtfs2_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="43:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;43:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt2v" call-id="34" rc-code="0" op-status="0" interval="0" last-run="1473786032" last-rc-change="1473786032" exec-time="939" queue-time="0" op-digest="8dbd904c2115508ebcf3dffe8e7c6d82"/>
            <lrm_rsc_op id="Virtfs2_monitor_20000" operation_key="Virtfs2_monitor_20000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.10" transition-key="44:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" transition-magic="0:0;44:3:0:5f2f0724-e33d-4494-90b2-9e06a0e2b0df" on_node="wirt2v" call-id="35" rc-code="0" op-status="0" interval="20000" last-rc-change="1473786033" exec-time="39" queue-time="0" op-digest="051271837d1a8eccc0af38fbd8c406e4"/>
          </lrm_resource>
        </lrm_resources>
      </lrm>
      <transient_attributes id="2">
        <instance_attributes id="status-2">
          <nvpair id="status-2-shutdown" name="shutdown" value="0"/>
          <nvpair id="status-2-probe_complete" name="probe_complete" value="true"/>
          <nvpair id="status-2-master-Drbd2" name="master-Drbd2" value="10000"/>
        </instance_attributes>
      </transient_attributes>
    </node_state>
  </status>
</cib>

#-------- The End --------------------

### result:  pcs config  ###

Cluster Name: klasterek
Corosync Nodes:
 wirt1v wirt2v
Pacemaker Nodes:
 wirt1v wirt2v
Resources:
 Master: Drbd2-clone
  Meta Attrs: master-max=2 master-node-max=1 clone-max=2 clone-node-max=1 notify=true globally-unique=false interleave=true ordered=true
  Resource: Drbd2 (class=ocf provider=linbit type=drbd)
   Attributes: drbd_resource=drbd2
   Operations: start interval=0s timeout=240 (Drbd2-start-interval-0s)
               promote interval=0s timeout=90 (Drbd2-promote-interval-0s)
               demote interval=0s timeout=90 (Drbd2-demote-interval-0s)
               stop interval=0s timeout=100 (Drbd2-stop-interval-0s)
               monitor interval=60s (Drbd2-monitor-interval-60s)
 Clone: Virtfs2-clone
  Meta Attrs: interleave=true
  Resource: Virtfs2 (class=ocf provider=heartbeat type=Filesystem)
   Attributes: device=/dev/drbd2 directory=/virtfs2 fstype=gfs2
   Operations: start interval=0s timeout=60 (Virtfs2-start-interval-0s)
               stop interval=0s timeout=60 (Virtfs2-stop-interval-0s)
               monitor interval=20 timeout=40 (Virtfs2-monitor-interval-20)
 Clone: dlm-clone
  Meta Attrs: clone-max=2 clone-node-max=1 interleave=true ordered=true
  Resource: dlm (class=ocf provider=pacemaker type=controld)
   Operations: start interval=0s timeout=90 (dlm-start-interval-0s)
               stop interval=0s timeout=100 (dlm-stop-interval-0s)
               monitor interval=60s (dlm-monitor-interval-60s)
Stonith Devices:
 Resource: fencing-idrac1 (class=stonith type=fence_idrac)
  Attributes: pcmk_host_list=wirt1v ipaddr=172.31.0.223 lanplus=on login=root passwd=my1secret2password3 action=reboot
  Operations: monitor interval=60 (fencing-idrac1-monitor-interval-60)
 Resource: fencing-idrac2 (class=stonith type=fence_idrac)
  Attributes: pcmk_host_list=wirt2v ipaddr=172.31.0.224 lanplus=on login=root passwd=my1secret2password3 action=reboot
  Operations: monitor interval=60 (fencing-idrac2-monitor-interval-60)
Fencing Levels:
Location Constraints:
Ordering Constraints:
  promote Drbd2-clone then start Virtfs2-clone (kind:Mandatory) (id:order-Drbd2-clone-Virtfs2-clone-mandatory)
  start dlm-clone then start Virtfs2-clone (kind:Mandatory) (id:order-dlm-clone-Virtfs2-clone-mandatory)
Colocation Constraints:
  Virtfs2-clone with Drbd2-clone (score:INFINITY) (with-rsc-role:Master) (id:colocation-Virtfs2-clone-Drbd2-clone-INFINITY)
  Virtfs2-clone with dlm-clone (score:INFINITY) (id:colocation-Virtfs2-clone-dlm-clone-INFINITY)
Resources Defaults:
 resource-stickiness: 100
Operations Defaults:
 No defaults set
Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: klasterek
 dc-version: 1.1.13-10.el7_2.4-44eb2dd
 have-watchdog: false
 no-quorum-policy: ignore
 stonith-enabled: true
 symmetric-cluster: true
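(Since every cold-start attempt ends with pacemaker trying, and failing, to fence wirt2v - see the log below - the fence device can also be exercised by hand. A sketch using the standard fence_idrac/fence_ipmilan options and pcs commands; this is not output I have captured:)

#---------------------------------
### manual fencing check (sketch)  ###

# from wirt1v, query the powered-off peer's iDRAC directly:
fence_idrac --ip=172.31.0.224 --username=root --password=my1secret2password3 --lanplus --action=status

# or let the cluster try:
pcs stonith fence wirt2v
# and only if the peer is known for certain to be powered off:
pcs stonith confirm wirt2v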
#---------------------------------
# /var/log/messages

Sep 13 22:00:19 wirt1v systemd: Starting Corosync Cluster Engine...
Sep 13 22:00:19 wirt1v corosync[5720]: [MAIN  ] Corosync Cluster Engine ('2.3.4'): started and ready to provide service.
Sep 13 22:00:19 wirt1v corosync[5720]: [MAIN  ] Corosync built-in features: dbus systemd xmlconf snmp pie relro bindnow
Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] Initializing transport (UDP/IP Unicast).
Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] The network interface [1.1.1.1] is now up.
Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded: corosync configuration map access [0]
Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: cmap
Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded: corosync configuration service [1]
Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: cfg
Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded: corosync cluster closed process group service v1.01 [2]
Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: cpg
Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded: corosync profile loading service [4]
Sep 13 22:00:19 wirt1v corosync[5721]: [QUORUM] Using quorum provider corosync_votequorum
Sep 13 22:00:19 wirt1v corosync[5721]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded: corosync vote quorum service v1.0 [5]
Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: votequorum
Sep 13 22:00:19 wirt1v corosync[5721]: [SERV  ] Service engine loaded: corosync cluster quorum service v0.1 [3]
Sep 13 22:00:19 wirt1v corosync[5721]: [QB    ] server name: quorum
Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] adding new UDPU member {1.1.1.1}
Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] adding new UDPU member {1.1.1.2}
Sep 13 22:00:19 wirt1v corosync[5721]: [TOTEM ] A new membership (1.1.1.1:708) was formed. Members joined: 1
Sep 13 22:00:19 wirt1v corosync[5721]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
Sep 13 22:00:19 wirt1v corosync[5721]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
Sep 13 22:00:19 wirt1v corosync[5721]: [VOTEQ ] Waiting for all cluster members. Current votes: 1 expected_votes: 2
Sep 13 22:00:19 wirt1v corosync[5721]: [QUORUM] Members[1]: 1
Sep 13 22:00:19 wirt1v corosync[5721]: [MAIN  ] Completed service synchronization, ready to provide service.
Sep 13 22:00:20 wirt1v corosync: Starting Corosync Cluster Engine (corosync): [  OK  ]
Sep 13 22:00:20 wirt1v systemd: Started Corosync Cluster Engine.
Sep 13 22:00:20 wirt1v systemd: Started Pacemaker High Availability Cluster Manager.
Sep 13 22:00:20 wirt1v systemd: Starting Pacemaker High Availability Cluster Manager...
Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Additional logging available in /var/log/pacemaker.log
Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Switching to /var/log/cluster/corosync.log
Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Additional logging available in /var/log/cluster/corosync.log
Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Configured corosync to accept connections from group 189: OK (1)
Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Starting Pacemaker 1.1.13-10.el7_2.4 (Build: 44eb2dd):  generated-manpages agent-manpages ncurses libqb-logging libqb-ipc upstart systemd nagios  corosync-native atomic-attrd acls
Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Tracking existing lrmd process (pid=3413)
Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Tracking existing pengine process (pid=3415)
Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: Quorum lost
Sep 13 22:00:20 wirt1v pacemakerd[5740]:  notice: pcmk_quorum_notification: Node wirt1v[1] - state is now member (was (null))
Sep 13 22:00:20 wirt1v stonith-ng[5742]:  notice: Additional logging available in /var/log/cluster/corosync.log
Sep 13 22:00:20 wirt1v cib[5741]:  notice: Additional logging available in /var/log/cluster/corosync.log
Sep 13 22:00:20 wirt1v stonith-ng[5742]:  notice: Connecting to cluster infrastructure: corosync
Sep 13 22:00:20 wirt1v attrd[5743]:  notice: Additional logging available in /var/log/cluster/corosync.log
Sep 13 22:00:20 wirt1v attrd[5743]:  notice: Connecting to cluster infrastructure: corosync
Sep 13 22:00:20 wirt1v crmd[5744]:  notice: Additional logging available in /var/log/cluster/corosync.log
Sep 13 22:00:20 wirt1v crmd[5744]:  notice: CRM Git Version: 1.1.13-10.el7_2.4 (44eb2dd)
Sep 13 22:00:20 wirt1v cib[5741]:  notice: Connecting to cluster infrastructure: corosync
Sep 13 22:00:20 wirt1v attrd[5743]:  notice: crm_update_peer_proc: Node wirt1v[1] - state is now member (was (null))
Sep 13 22:00:20 wirt1v stonith-ng[5742]:  notice: crm_update_peer_proc: Node wirt1v[1] - state is now member (was (null))
Sep 13 22:00:20 wirt1v cib[5741]:  notice: crm_update_peer_proc: Node wirt1v[1] - state is now member (was (null))
Sep 13 22:00:21 wirt1v crmd[5744]:  notice: Connecting to cluster infrastructure: corosync
Sep 13 22:00:21 wirt1v crmd[5744]:  notice: Quorum lost
Sep 13 22:00:21 wirt1v stonith-ng[5742]:  notice: Watching for stonith topology changes
Sep 13 22:00:21 wirt1v stonith-ng[5742]:  notice: On loss of CCM Quorum: Ignore
Sep 13 22:00:21 wirt1v crmd[5744]:  notice: pcmk_quorum_notification: Node wirt1v[1] - state is now member (was (null))
Sep 13 22:00:21 wirt1v crmd[5744]:  notice: Notifications disabled
Sep 13 22:00:21 wirt1v crmd[5744]:  notice: The local CRM is operational
Sep 13 22:00:21 wirt1v crmd[5744]:  notice: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Sep 13 22:00:22 wirt1v stonith-ng[5742]:  notice: Added 'fencing-idrac1' to the device list (1 active devices)
Sep 13 22:00:22 wirt1v stonith-ng[5742]:  notice: Added 'fencing-idrac2' to the device list (2 active devices)
Sep 13 22:00:42 wirt1v crmd[5744]: warning: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Sep 13 22:00:42 wirt1v crmd[5744]:  notice: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=election_timeout_popped ]
Sep 13 22:00:42 wirt1v crmd[5744]: warning: FSA: Input I_ELECTION_DC from do_election_check() received in state S_INTEGRATION
Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Notifications disabled
Sep 13 22:00:42 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
Sep 13 22:00:42 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH
Sep 13 22:00:42 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
Sep 13 22:00:42 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
Sep 13 22:00:42 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)
Sep 13 22:00:42 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)
Sep 13 22:00:42 wirt1v pengine[3415]: warning: Calculated Transition 84: /var/lib/pacemaker/pengine/pe-warn-294.bz2
Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 4: monitor Drbd2:0_monitor_0 on wirt1v (local)
Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 5: monitor Virtfs2:0_monitor_0 on wirt1v (local)
Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 6: monitor dlm:0_monitor_0 on wirt1v (local)
Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 7: monitor fencing-idrac1_monitor_0 on wirt1v (local)
Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Initiating action 8: monitor fencing-idrac2_monitor_0 on wirt1v (local)
Sep 13 22:00:42 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (50) on wirt2v (timeout=60000)
Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: e87b942f-997d-42ad-91ad-dfa501f4ede0 (0)
Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:42 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:42 wirt1v Filesystem(Virtfs2)[5753]: WARNING: Couldn't find device [/dev/drbd2]. Expected /dev/??? to exist
Sep 13 22:00:42 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation fencing-idrac1_monitor_0: not running (node=wirt1v, call=33, rc=7, cib-update=31, confirmed=true)
Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation fencing-idrac2_monitor_0: not running (node=wirt1v, call=35, rc=7, cib-update=32, confirmed=true)
Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation dlm_monitor_0: not running (node=wirt1v, call=31, rc=7, cib-update=33, confirmed=true)
Sep 13 22:00:43 wirt1v crmd[5744]:   error: pcmkRegisterNode: Triggered assert at xml.c:594 : node->type == XML_ELEMENT_NODE
Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation Drbd2_monitor_0: not running (node=wirt1v, call=27, rc=7, cib-update=34, confirmed=true)
Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Operation Virtfs2_monitor_0: not running (node=wirt1v, call=29, rc=7, cib-update=35, confirmed=true)
Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Initiating action 3: probe_complete probe_complete-wirt1v on wirt1v (local) - no waiting
Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Transition aborted by status-1-probe_complete, probe_complete=true: Transient attribute change (create cib=0.69.11, source=abort_unless_down:319, path=/cib/status/node_state[@id='1']/transient_attributes[@id='1']/instance_attributes[@id='status-1'], 0)
Sep 13 22:00:43 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:43 wirt1v stonith-ng[5742]:   error: Operation 'reboot' [5849] (call 2 from crmd.5744) for host 'wirt2v' with device 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
Sep 13 22:00:43 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5849 [ Failed: Unable to obtain correct plug status or plug is not available ]
Sep 13 22:00:43 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5849 [  ]
Sep 13 22:00:43 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5849 [  ]
Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone to fence (reboot) wirt2v with any device
Sep 13 22:00:43 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by <no-one> for crmd.5744@wirt1v.e87b942f: No route to host
Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Stonith operation 2/50:84:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Stonith operation 2 for wirt2v failed (No route to host): aborting transition.
Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by <anyone> for wirt1v: No route to host (ref=e87b942f-997d-42ad-91ad-dfa501f4ede0) by client crmd.5744
Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Transition 84 (Complete=12, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-294.bz2): Complete
Sep 13 22:00:43 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
Sep 13 22:00:43 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH
Sep 13 22:00:43 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
Sep 13 22:00:43 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
Sep 13 22:00:43 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)
Sep 13 22:00:43 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)
Sep 13 22:00:43 wirt1v pengine[3415]: warning: Calculated Transition 85: /var/lib/pacemaker/pengine/pe-warn-295.bz2
Sep 13 22:00:43 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)
Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 880b2614-09d2-47df-b740-e1d24732e6c5 (0)
Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:43 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:43 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:44 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:44 wirt1v stonith-ng[5742]:   error: Operation 'reboot' [5879] (call 3 from crmd.5744) for host 'wirt2v' with device 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
Sep 13 22:00:44 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5879 [ Failed: Unable to obtain correct plug status or plug is not available ]
Sep 13 22:00:44 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5879 [  ]
Sep 13 22:00:44 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5879 [  ]
Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone to fence (reboot) wirt2v with any device
Sep 13 22:00:44 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by <no-one> for crmd.5744@wirt1v.880b2614: No route to host
Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Stonith operation 3/45:85:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Stonith operation 3 for wirt2v failed (No route to host): aborting transition.
Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)
Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by <anyone> for wirt1v: No route to host (ref=880b2614-09d2-47df-b740-e1d24732e6c5) by client crmd.5744
Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Transition 85 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
Sep 13 22:00:44 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
Sep 13 22:00:44 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH
Sep 13 22:00:44 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
Sep 13 22:00:44 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
Sep 13 22:00:44 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)
Sep 13 22:00:44 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)
Sep 13 22:00:44 wirt1v pengine[3415]: warning: Calculated Transition 86: /var/lib/pacemaker/pengine/pe-warn-295.bz2
Sep 13 22:00:44 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)
Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 4c7af8ee-ffa6-4381-8d98-073d5abba631 (0)
Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:44 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:44 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:45 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:45 wirt1v stonith-ng[5742]:   error: Operation 'reboot' [5893] (call 4 from crmd.5744) for host 'wirt2v' with device 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
Sep 13 22:00:45 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5893 [ Failed: Unable to obtain correct plug status or plug is not available ]
Sep 13 22:00:45 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5893 [  ]
Sep 13 22:00:45 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5893 [  ]
Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone to fence (reboot) wirt2v with any device
Sep 13 22:00:45 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by <no-one> for crmd.5744@wirt1v.4c7af8ee: No route to host
Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Stonith operation 4/45:86:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Stonith operation 4 for wirt2v failed (No route to host): aborting transition.
Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)
Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by <anyone> for wirt1v: No route to host (ref=4c7af8ee-ffa6-4381-8d98-073d5abba631) by client crmd.5744
Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Transition 86 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
Sep 13 22:00:45 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
Sep 13 22:00:45 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH
Sep 13 22:00:45 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
Sep 13 22:00:45 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
Sep 13 22:00:45 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)
Sep 13 22:00:45 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)
Sep 13 22:00:45 wirt1v pengine[3415]: warning: Calculated Transition 87: /var/lib/pacemaker/pengine/pe-warn-295.bz2
Sep 13 22:00:45 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)
Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 268e4c7b-0340-4cf5-9c88-4f3c203f1499 (0)
Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:45 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:46 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:47 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:47 wirt1v stonith-ng[5742]:   error: Operation 'reboot' [5907] (call 5 from crmd.5744) for host 'wirt2v' with device 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
Sep 13 22:00:47 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5907 [ Failed: Unable to obtain correct plug status or plug is not available ]
Sep 13 22:00:47 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5907 [  ]
Sep 13 22:00:47 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5907 [  ]
Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone to fence (reboot) wirt2v with any device
Sep 13 22:00:47 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by <no-one> for crmd.5744@wirt1v.268e4c7b: No route to host
Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Stonith operation 5/45:87:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Stonith operation 5 for wirt2v failed (No route to host): aborting transition.
Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)
Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by <anyone> for wirt1v: No route to host (ref=268e4c7b-0340-4cf5-9c88-4f3c203f1499) by client crmd.5744
Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Transition 87 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
Sep 13 22:00:47 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
Sep 13 22:00:47 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH
Sep 13 22:00:47 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
Sep 13 22:00:47 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
Sep 13 22:00:47 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)
Sep 13 22:00:47 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)
Sep 13 22:00:47 wirt1v pengine[3415]: warning: Calculated Transition 88: /var/lib/pacemaker/pengine/pe-warn-295.bz2
Sep 13 22:00:47 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)
Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 8c5bf217-030f-400a-b1f8-7aa19918954f (0)
Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:47 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:47 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:48 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:48 wirt1v stonith-ng[5742]:   error: Operation 'reboot' [5921] (call 6 from crmd.5744) for host 'wirt2v' with device 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [ Failed: Unable to obtain correct plug status or plug is not available ]
Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [  ]
Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [ Failed: Unable to obtain correct plug status or plug is not available ]
Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [  ]
Sep 13 22:00:48 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5921 [  ]
Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone to fence (reboot) wirt2v with any device
Sep 13 22:00:48 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by <no-one> for crmd.5744@wirt1v.8c5bf217: No route to host
Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Stonith operation 6/45:88:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Stonith operation 6 for wirt2v failed (No route to host): aborting transition.
Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)
Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by <anyone> for wirt1v: No route to host (ref=8c5bf217-030f-400a-b1f8-7aa19918954f) by client crmd.5744
Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Transition 88 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
Sep 13 22:00:48 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
Sep 13 22:00:48 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH
Sep 13 22:00:48 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
Sep 13 22:00:48 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
Sep 13 22:00:48 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)
Sep 13 22:00:48 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)
Sep 13 22:00:48 wirt1v pengine[3415]: warning: Calculated Transition 89: /var/lib/pacemaker/pengine/pe-warn-295.bz2
Sep 13 22:00:48 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)
Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 25e51799-e072-4622-bbb3-1430bdb20536 (0)
Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:48 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:48 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:49 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:49 wirt1v stonith-ng[5742]:   error: Operation 'reboot' [5935] (call 7 from crmd.5744) for host 'wirt2v' with device 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
Sep 13 22:00:49 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5935 [ Failed: Unable to obtain correct plug status or plug is not available ]
Sep 13 22:00:49 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5935 [  ]
Sep 13 22:00:49 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5935 [  ]
Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone to fence (reboot) wirt2v with any device
Sep 13 22:00:49 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by <no-one> for crmd.5744@wirt1v.25e51799: No route to host
Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Stonith operation 7/45:89:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Stonith operation 7 for wirt2v failed (No route to host): aborting transition.
Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)
Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by <anyone> for wirt1v: No route to host (ref=25e51799-e072-4622-bbb3-1430bdb20536) by client crmd.5744
Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Transition 89 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
Sep 13 22:00:49 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
Sep 13 22:00:49 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH
Sep 13 22:00:49 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
Sep 13 22:00:49 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
Sep 13 22:00:49 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)
Sep 13 22:00:49 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)
Sep 13 22:00:49 wirt1v pengine[3415]: warning: Calculated Transition 90: /var/lib/pacemaker/pengine/pe-warn-295.bz2
Sep 13 22:00:49 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)
Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 7f520e61-b613-49e4-9213-1958d8a68c6a (0)
Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:49 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:49 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:50 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:50 wirt1v stonith-ng[5742]:   error: Operation 'reboot' [5949] (call 8 from crmd.5744) for host 'wirt2v' with device 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
Sep 13 22:00:50 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5949 [ Failed: Unable to obtain correct plug status or plug is not available ]
Sep 13 22:00:50 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5949 [  ]
Sep 13 22:00:50 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5949 [  ]
Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone to fence (reboot) wirt2v with any device
Sep 13 22:00:50 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by <no-one> for crmd.5744@wirt1v.7f520e61: No route to host
Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Stonith operation 8/45:90:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Stonith operation 8 for wirt2v failed (No route to host): aborting transition.
Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)
Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by <anyone> for wirt1v: No route to host (ref=7f520e61-b613-49e4-9213-1958d8a68c6a) by client crmd.5744
Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Transition 90 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
Sep 13 22:00:50 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
Sep 13 22:00:50 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH
Sep 13 22:00:50 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
Sep 13 22:00:50 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
Sep 13 22:00:50 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)
Sep 13 22:00:50 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)
Sep 13 22:00:50 wirt1v pengine[3415]: warning: Calculated Transition 91: /var/lib/pacemaker/pengine/pe-warn-295.bz2
Sep 13 22:00:50 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)
Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 25b67d0b-5b8f-4cd8-82c2-4421474c111c (0)
Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:50 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:50 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:51 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:51 wirt1v stonith-ng[5742]:   error: Operation 'reboot' [5963] (call 9 from crmd.5744) for host 'wirt2v' with device 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
Sep 13 22:00:51 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5963 [ Failed: Unable to obtain correct plug status or plug is not available ]
Sep 13 22:00:51 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5963 [  ]
Sep 13 22:00:51 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5963 [  ]
Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone to fence (reboot) wirt2v with any device
Sep 13 22:00:51 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by <no-one> for crmd.5744@wirt1v.25b67d0b: No route to host
Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Stonith operation 9/45:91:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Stonith operation 9 for wirt2v failed (No route to host): aborting transition.
Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)
Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by <anyone> for wirt1v: No route to host (ref=25b67d0b-5b8f-4cd8-82c2-4421474c111c) by client crmd.5744
Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Transition 91 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete
Sep 13 22:00:51 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore
Sep 13 22:00:51 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH
Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)
Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)
Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)
Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)
Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)
Sep 13 22:00:51 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)
Sep 13 22:00:51 wirt1v pengine[3415]: warning: Calculated Transition 92: /var/lib/pacemaker/pengine/pe-warn-295.bz2
Sep 13 22:00:51 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)
Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'
Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 292a57e9-fd1b-4630-8c10-0d48a268fd68 (0)
Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list
Sep 13 22:00:51 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list
Sep 13 22:00:51 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:52 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available
Sep 13 22:00:52 wirt1v stonith-ng[5742]:   error: Operation 'reboot' [5977] (call 10 from crmd.5744) for host 'wirt2v' with device 'fencing-idrac2' returned: -201 (Generic Pacemaker error)
Sep 13 22:00:52 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5977 [ Failed: Unable to obtain correct plug status or plug is not available ]
Sep 13 22:00:52 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5977 [  ]
Sep 13 22:00:52 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5977 [  ]
Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone to fence (reboot) wirt2v with any device
Sep 13 22:00:52 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by <no-one> for crmd.5744@wirt1v.292a57e9: No route to host
Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Stonith operation 10/45:92:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)
Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Stonith operation 10 for wirt2v failed (No route to host): aborting transition.
Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)
Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by <anyone> for wirt1v: No route to host (ref=292a57e9-fd1b-4630-8c10-0d48a268fd68) by client crmd.5744
Sep 13 22:00:52 wirt1v
crmd[5744]:  notice: Transition 92 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:52 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:52 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:52 wirt1v pengine[3415]:  notice: Start   Drbd2:0#011(wirt1v)<br>Sep 13 22:00:52 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:52 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:52 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:52 wirt1v pengine[3415]: warning: Calculated Transition 93: /var/lib/pacemaker/pengine/pe-warn-295.bz2<br>Sep 13 22:00:52 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: f324baad-ef9b-44e6-9e09-02176fa447ef (0)<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:52 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:53 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:54 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:   error: Operation 'reboot' [5991] (call 11 from crmd.5744) for host 'wirt2v' with device 'fencing-idrac2' returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5991 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5991 [  ]<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]: warning: fencing-idrac2:5991 [  ]<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by <no-one> for crmd.5744@wirt1v.f324baad: No route to host<br>Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Stonith operation 11/45:93:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Stonith operation 11 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)<br>Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by <anyone> for wirt1v: No route to host (ref=f324baad-ef9b-44e6-9e09-02176fa447ef) by client crmd.5744<br>Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Transition 93 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:54 wirt1v pengine[3415]:  notice: On loss of CCM Quorum: Ignore<br>Sep 13 22:00:54 wirt1v pengine[3415]: warning: Scheduling Node wirt2v for STONITH<br>Sep 13 22:00:54 wirt1v pengine[3415]:  notice: Start   
Drbd2:0#011(wirt1v)<br>Sep 13 22:00:54 wirt1v pengine[3415]:  notice: Start   dlm:0#011(wirt1v)<br>Sep 13 22:00:54 wirt1v pengine[3415]:  notice: Start   fencing-idrac1#011(wirt1v)<br>Sep 13 22:00:54 wirt1v pengine[3415]:  notice: Start   fencing-idrac2#011(wirt1v)<br>Sep 13 22:00:54 wirt1v pengine[3415]: warning: Calculated Transition 94: /var/lib/pacemaker/pengine/pe-warn-295.bz2<br>Sep 13 22:00:54 wirt1v crmd[5744]:  notice: Executing reboot fencing operation (45) on wirt2v (timeout=60000)<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: Client crmd.5744.8928b80c wants to fence (reboot) 'wirt2v' with device '(any)'<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: Initiating remote operation reboot for wirt2v: 61af386a-ce3f-438f-b83b-90dee4bdb1c6 (0)<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: fencing-idrac1 can not fence (reboot) wirt2v: static-list<br>Sep 13 22:00:54 wirt1v stonith-ng[5742]:  notice: fencing-idrac2 can fence (reboot) wirt2v: static-list<br>Sep 13 22:00:54 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:55 wirt1v fence_idrac: Failed: Unable to obtain correct plug status or plug is not available<br>Sep 13 22:00:55 wirt1v stonith-ng[5742]:   error: Operation 'reboot' [6005] (call 12 from crmd.5744) for host 'wirt2v' with device 'fencing-idrac2' returned: -201 (Generic Pacemaker error)<br>Sep 13 22:00:55 wirt1v stonith-ng[5742]: warning: fencing-idrac2:6005 [ Failed: Unable to obtain correct plug status or plug is not available ]<br>Sep 13 22:00:55 wirt1v stonith-ng[5742]: warning: fencing-idrac2:6005 [  ]<br>Sep 13 22:00:55 wirt1v stonith-ng[5742]: warning: fencing-idrac2:6005 [  ]<br>Sep 13 22:00:55 wirt1v stonith-ng[5742]:  notice: Couldn't find anyone to fence (reboot) wirt2v with any device<br>Sep 13 22:00:55 wirt1v stonith-ng[5742]:   error: Operation reboot of wirt2v by <no-one> for crmd.5744@wirt1v.61af386a: No route to host<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Stonith operation 12/45:94:0:dd848cfe-edbc-41f4-bd55-f0cad5f7204f: No route to host (-113)<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Stonith operation 12 for wirt2v failed (No route to host): aborting transition.<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Transition aborted: Stonith failed (source=tengine_stonith_callback:733, 0)<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Peer wirt2v was not terminated (reboot) by <anyone> for wirt1v: No route to host (ref=61af386a-ce3f-438f-b83b-90dee4bdb1c6) by client crmd.5744<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Transition 94 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=15, Source=/var/lib/pacemaker/pengine/pe-warn-295.bz2): Complete<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: Too many failures to fence wirt2v (11), giving up<br>Sep 13 22:00:55 wirt1v crmd[5744]:  notice: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<br></div><br># -------------------- end of /var/log/messages<br><br><div><br><div><br></div></div></div>