[ClusterLabs] Resources restart
Gienek Nowacki
nowackig at gmail.com
Sun Sep 11 16:28:49 UTC 2016
Hi,
I have a problem with my cluster.
There are two nodes, wirt1v and wirt2v, running pacemaker, corosync, dlm,
drbd (/dev/drbd2), and a filesystem mounted as /virtfs2 with gfs2.
Each node has an LVM partition on which DRBD runs.
The situation is as follows:
pcs cluster standby wirt2v
...everything is fine, and it is still possible to use /virtfs2 on the wirt1v node.
pcs cluster unstandby wirt2v
This causes Drbd2/Virtfs2 to restart/remount on the wirt1v node.
And my question is: why do the resources restart on the wirt1v
node?
I'm going to use the /virtfs2 filesystem to host virtual machines,
so in that case they would restart as well.
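To see why pacemaker schedules the restart, the policy engine's decision can be previewed or replayed read-only with crm_simulate (a diagnostic sketch; the pe-input path is the one named in the log below):

```shell
# Preview what the policy engine would do on the live cluster right now,
# including allocation scores, without changing anything:
crm_simulate --simulate --live-check --show-scores

# Replay the exact transition that restarted the resources, from the
# pe-input file referenced in the unstandby log:
crm_simulate --simulate --xml-file /var/lib/pacemaker/pengine/pe-input-2854.bz2
```

The score output shows which constraint or clone dependency forced the stop/start on wirt1v.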
Could you advise me what to do about this problem?
The logs and configs are below.
Thanks in advance,
Gienek Nowacki
======================================
#---------------------------------
### result: cat /etc/redhat-release ###
CentOS Linux release 7.2.1511 (Core)
#---------------------------------
### result: uname -a ###
Linux wirt1v 3.10.0-327.28.3.el7.x86_64 #1 SMP Thu Aug 18 19:05:49 UTC 2016
x86_64 x86_64 x86_64 GNU/Linux
#---------------------------------
### result: cat /etc/hosts ###
127.0.0.1 localhost localhost.localdomain localhost4
localhost4.localdomain4
172.31.0.23 wirt1
172.31.0.24 wirt2
1.1.1.1 wirt1v
1.1.1.2 wirt2v
#---------------------------------
### result: cat /etc/drbd.conf ###
include "drbd.d/global_common.conf";
include "drbd.d/*.res";
#---------------------------------
### result: cat /etc/drbd.d/global_common.conf ###
common {
protocol C;
syncer {
verify-alg sha1;
}
net {
allow-two-primaries;
}
}
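As an aside: with allow-two-primaries, the DRBD documentation recommends also enabling resource-level fencing tied to the cluster manager. A hedged fragment of what that would look like in global_common.conf (handler paths as shipped by drbd-utils; not part of the configuration above):

```
common {
    protocol C;
    syncer {
        verify-alg sha1;
    }
    net {
        allow-two-primaries;
    }
    disk {
        fencing resource-and-stonith;
    }
    handlers {
        fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
    }
}
```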
#---------------------------------
### result: cat /etc/drbd.d/drbd2.res ###
resource drbd2 {
meta-disk internal;
device /dev/drbd2;
on wirt1v {
disk /dev/vg02/drbd2;
address 1.1.1.1:7782;
}
on wirt2v {
disk /dev/vg02/drbd2;
address 1.1.1.2:7782;
}
}
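The resource definition can be sanity-checked with a few read-only drbdadm calls (a sketch; run on either node):

```shell
# Show the parsed configuration for this resource:
drbdadm dump drbd2

# Connection and disk state (expected here: Connected, UpToDate/UpToDate):
drbdadm cstate drbd2
drbdadm dstate drbd2
```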
#---------------------------------
### result: cat /proc/drbd ###
version: 8.4.7-1 (api:1/proto:86-101)
GIT-hash: 3a6a769340ef93b1ba2792c6461250790795db49 build by phil at Build64R7,
2016-01-12 14:29:40
2: cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate C r-----
ns:132180 nr:44 dw:132224 dr:301948 al:33 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1
wo:f oos:0
#---------------------------------
### result: cat /etc/corosync/corosync.conf ###
totem {
version: 2
secauth: off
cluster_name: klasterek
transport: udpu
}
nodelist {
node {
ring0_addr: wirt1v
nodeid: 1
}
node {
ring0_addr: wirt2v
nodeid: 2
}
}
quorum {
provider: corosync_votequorum
two_node: 1
}
logging {
to_logfile: yes
logfile: /var/log/cluster/corosync.log
to_syslog: yes
}
#---------------------------------
### result: mnt ###
#---------------------------------
### result: mount | grep virtfs2 ###
/dev/drbd2 on /virtfs2 type gfs2 (rw,relatime,seclabel)
#---------------------------------
### result: pcs config ###
Cluster Name: klasterek
Corosync Nodes:
wirt1v wirt2v
Pacemaker Nodes:
wirt1v wirt2v
Resources:
Clone: dlm-clone
Meta Attrs: clone-max=2 clone-node-max=1
Resource: dlm (class=ocf provider=pacemaker type=controld)
Operations: start interval=0s timeout=90 (dlm-start-interval-0s)
stop interval=0s timeout=100 (dlm-stop-interval-0s)
monitor interval=60s (dlm-monitor-interval-60s)
Master: Drbd2-clone
Meta Attrs: master-max=2 master-node-max=1 clone-max=2 clone-node-max=1
notify=true
Resource: Drbd2 (class=ocf provider=linbit type=drbd)
Attributes: drbd_resource=drbd2
Operations: start interval=0s timeout=240 (Drbd2-start-interval-0s)
promote interval=0s timeout=90 (Drbd2-promote-interval-0s)
demote interval=0s timeout=90 (Drbd2-demote-interval-0s)
stop interval=0s timeout=100 (Drbd2-stop-interval-0s)
monitor interval=60s (Drbd2-monitor-interval-60s)
Clone: Virtfs2-clone
Resource: Virtfs2 (class=ocf provider=heartbeat type=Filesystem)
Attributes: device=/dev/drbd2 directory=/virtfs2 fstype=gfs2
Operations: start interval=0s timeout=60 (Virtfs2-start-interval-0s)
stop interval=0s timeout=60 (Virtfs2-stop-interval-0s)
monitor interval=20 timeout=40 (Virtfs2-monitor-interval-20)
Stonith Devices:
Resource: fencing-idrac1 (class=stonith type=fence_idrac)
Attributes: pcmk_host_list=wirt1v ipaddr=172.31.0.223 lanplus=on
login=root passwd=my1secret2password3
Operations: monitor interval=60 (fencing-idrac1-monitor-interval-60)
Resource: fencing-idrac2 (class=stonith type=fence_idrac)
Attributes: pcmk_host_list=wirt2v ipaddr=172.31.0.224 lanplus=on
login=root passwd=my1secret2password3
Operations: monitor interval=60 (fencing-idrac2-monitor-interval-60)
Fencing Levels:
Location Constraints:
Ordering Constraints:
start dlm-clone then start Virtfs2-clone (kind:Mandatory)
(id:order-dlm-clone-Virtfs2-clone-mandatory)
promote Drbd2-clone then start Virtfs2-clone (kind:Mandatory)
(id:order-Drbd2-clone-Virtfs2-clone-mandatory)
Colocation Constraints:
Virtfs2-clone with Drbd2-clone (score:INFINITY) (with-rsc-role:Master)
(id:colocation-Virtfs2-clone-Drbd2-clone-INFINITY)
Virtfs2-clone with dlm-clone (score:INFINITY)
(id:colocation-Virtfs2-clone-dlm-clone-INFINITY)
Resources Defaults:
resource-stickiness: 100
Operations Defaults:
No defaults set
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: klasterek
dc-version: 1.1.13-10.el7_2.4-44eb2dd
have-watchdog: false
no-quorum-policy: ignore
stonith-enabled: true
symmetric-cluster: true
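One detail that stands out in this configuration (an assumption on my part, not verified against this cluster): none of the clones set interleave=true. Without it, a dependent clone instance is ordered against the whole Drbd2-clone set, so a membership change on wirt2v can force the wirt1v instances to restart too. A sketch of setting it with pcs:

```shell
# Hedged sketch: make each node's dlm/Virtfs2 instance depend only on
# the local Drbd2 instance rather than on the entire clone set.
pcs resource meta dlm-clone interleave=true
pcs resource meta Drbd2-clone interleave=true
pcs resource meta Virtfs2-clone interleave=true
```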
#---------------------------------
### result: pcs status ###
Cluster name: klasterek
Last updated: Sun Sep 11 16:02:15 2016 Last change: Sun Sep 11
15:01:05 2016 by root via crm_attribute on wirt2v
Stack: corosync
Current DC: wirt1v (version 1.1.13-10.el7_2.4-44eb2dd) - partition with
quorum
2 nodes and 8 resources configured
Online: [ wirt1v wirt2v ]
Full list of resources:
fencing-idrac1 (stonith:fence_idrac): Started wirt1v
fencing-idrac2 (stonith:fence_idrac): Started wirt1v
Clone Set: dlm-clone [dlm]
Started: [ wirt1v wirt2v ]
Master/Slave Set: Drbd2-clone [Drbd2]
Masters: [ wirt1v wirt2v ]
Clone Set: Virtfs2-clone [Virtfs2]
Started: [ wirt1v wirt2v ]
PCSD Status:
wirt1v: Online
wirt2v: Online
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
#---------------------------------
### result: pcs property ###
Cluster Properties:
cluster-infrastructure: corosync
cluster-name: klasterek
dc-version: 1.1.13-10.el7_2.4-44eb2dd
have-watchdog: false
no-quorum-policy: ignore
stonith-enabled: true
symmetric-cluster: true
#---------------------------------
### result: pcs cluster cib ###
<cib crm_feature_set="3.0.10" validate-with="pacemaker-2.3" epoch="105"
num_updates="11" admin_epoch="0" cib-last-written="Sun Sep 11 15:01:05
2016" update-origin="wirt2v" update-client="crm_attribute"
update-user="root" have-quorum="1" dc-uuid="1">
<configuration>
<crm_config>
<cluster_property_set id="cib-bootstrap-options">
<nvpair id="cib-bootstrap-options-have-watchdog"
name="have-watchdog" value="false"/>
<nvpair id="cib-bootstrap-options-dc-version" name="dc-version"
value="1.1.13-10.el7_2.4-44eb2dd"/>
<nvpair id="cib-bootstrap-options-cluster-infrastructure"
name="cluster-infrastructure" value="corosync"/>
<nvpair id="cib-bootstrap-options-cluster-name" name="cluster-name"
value="klasterek"/>
<nvpair id="cib-bootstrap-options-stonith-enabled"
name="stonith-enabled" value="true"/>
<nvpair id="cib-bootstrap-options-symmetric-cluster"
name="symmetric-cluster" value="true"/>
<nvpair id="cib-bootstrap-options-no-quorum-policy"
name="no-quorum-policy" value="ignore"/>
</cluster_property_set>
</crm_config>
<nodes>
<node id="1" uname="wirt1v">
<instance_attributes id="nodes-1"/>
</node>
<node id="2" uname="wirt2v">
<instance_attributes id="nodes-2"/>
</node>
</nodes>
<resources>
<primitive class="stonith" id="fencing-idrac1" type="fence_idrac">
<instance_attributes id="fencing-idrac1-instance_attributes">
<nvpair id="fencing-idrac1-instance_attributes-pcmk_host_list"
name="pcmk_host_list" value="wirt1v"/>
<nvpair id="fencing-idrac1-instance_attributes-ipaddr"
name="ipaddr" value="172.31.0.223"/>
<nvpair id="fencing-idrac1-instance_attributes-lanplus"
name="lanplus" value="on"/>
<nvpair id="fencing-idrac1-instance_attributes-login"
name="login" value="root"/>
<nvpair id="fencing-idrac1-instance_attributes-passwd"
name="passwd" value="my1secret2password3"/>
</instance_attributes>
<operations>
<op id="fencing-idrac1-monitor-interval-60" interval="60"
name="monitor"/>
</operations>
<meta_attributes id="fencing-idrac1-meta_attributes"/>
</primitive>
<primitive class="stonith" id="fencing-idrac2" type="fence_idrac">
<instance_attributes id="fencing-idrac2-instance_attributes">
<nvpair id="fencing-idrac2-instance_attributes-pcmk_host_list"
name="pcmk_host_list" value="wirt2v"/>
<nvpair id="fencing-idrac2-instance_attributes-ipaddr"
name="ipaddr" value="172.31.0.224"/>
<nvpair id="fencing-idrac2-instance_attributes-lanplus"
name="lanplus" value="on"/>
<nvpair id="fencing-idrac2-instance_attributes-login"
name="login" value="root"/>
<nvpair id="fencing-idrac2-instance_attributes-passwd"
name="passwd" value="my1secret2password3"/>
</instance_attributes>
<operations>
<op id="fencing-idrac2-monitor-interval-60" interval="60"
name="monitor"/>
</operations>
<meta_attributes id="fencing-idrac2-meta_attributes"/>
</primitive>
<clone id="dlm-clone">
<primitive class="ocf" id="dlm" provider="pacemaker"
type="controld">
<instance_attributes id="dlm-instance_attributes"/>
<operations>
<op id="dlm-start-interval-0s" interval="0s" name="start"
timeout="90"/>
<op id="dlm-stop-interval-0s" interval="0s" name="stop"
timeout="100"/>
<op id="dlm-monitor-interval-60s" interval="60s"
name="monitor"/>
</operations>
</primitive>
<meta_attributes id="dlm-clone-meta_attributes">
<nvpair id="dlm-clone-max" name="clone-max" value="2"/>
<nvpair id="dlm-clone-node-max" name="clone-node-max" value="1"/>
</meta_attributes>
</clone>
<master id="Drbd2-clone">
<primitive class="ocf" id="Drbd2" provider="linbit" type="drbd">
<instance_attributes id="Drbd2-instance_attributes">
<nvpair id="Drbd2-instance_attributes-drbd_resource"
name="drbd_resource" value="drbd2"/>
</instance_attributes>
<operations>
<op id="Drbd2-start-interval-0s" interval="0s" name="start"
timeout="240"/>
<op id="Drbd2-promote-interval-0s" interval="0s" name="promote"
timeout="90"/>
<op id="Drbd2-demote-interval-0s" interval="0s" name="demote"
timeout="90"/>
<op id="Drbd2-stop-interval-0s" interval="0s" name="stop"
timeout="100"/>
<op id="Drbd2-monitor-interval-60s" interval="60s"
name="monitor"/>
</operations>
</primitive>
<meta_attributes id="Drbd2-clone-meta_attributes">
<nvpair id="Drbd2-clone-meta_attributes-master-max"
name="master-max" value="2"/>
<nvpair id="Drbd2-clone-meta_attributes-master-node-max"
name="master-node-max" value="1"/>
<nvpair id="Drbd2-clone-meta_attributes-clone-max"
name="clone-max" value="2"/>
<nvpair id="Drbd2-clone-meta_attributes-clone-node-max"
name="clone-node-max" value="1"/>
<nvpair id="Drbd2-clone-meta_attributes-notify" name="notify"
value="true"/>
</meta_attributes>
</master>
<clone id="Virtfs2-clone">
<primitive class="ocf" id="Virtfs2" provider="heartbeat"
type="Filesystem">
<instance_attributes id="Virtfs2-instance_attributes">
<nvpair id="Virtfs2-instance_attributes-device" name="device"
value="/dev/drbd2"/>
<nvpair id="Virtfs2-instance_attributes-directory"
name="directory" value="/virtfs2"/>
<nvpair id="Virtfs2-instance_attributes-fstype" name="fstype"
value="gfs2"/>
</instance_attributes>
<operations>
<op id="Virtfs2-start-interval-0s" interval="0s" name="start"
timeout="60"/>
<op id="Virtfs2-stop-interval-0s" interval="0s" name="stop"
timeout="60"/>
<op id="Virtfs2-monitor-interval-20" interval="20"
name="monitor" timeout="40"/>
</operations>
</primitive>
<meta_attributes id="Virtfs2-clone-meta_attributes"/>
</clone>
</resources>
<constraints>
<rsc_colocation id="colocation-Virtfs2-clone-Drbd2-clone-INFINITY"
rsc="Virtfs2-clone" score="INFINITY" with-rsc="Drbd2-clone"
with-rsc-role="Master"/>
<rsc_colocation id="colocation-Virtfs2-clone-dlm-clone-INFINITY"
rsc="Virtfs2-clone" score="INFINITY" with-rsc="dlm-clone"/>
<rsc_order first="dlm-clone" first-action="start"
id="order-dlm-clone-Virtfs2-clone-mandatory" then="Virtfs2-clone"
then-action="start"/>
<rsc_order first="Drbd2-clone" first-action="promote"
id="order-Drbd2-clone-Virtfs2-clone-mandatory" then="Virtfs2-clone"
then-action="start"/>
</constraints>
<rsc_defaults>
<meta_attributes id="rsc_defaults-options">
<nvpair id="rsc_defaults-options-resource-stickiness"
name="resource-stickiness" value="100"/>
</meta_attributes>
</rsc_defaults>
</configuration>
<status>
<node_state id="1" uname="wirt1v" in_ccm="true" crmd="online"
crm-debug-origin="do_update_resource" join="member" expected="member">
<transient_attributes id="1">
<instance_attributes id="status-1">
<nvpair id="status-1-shutdown" name="shutdown" value="0"/>
<nvpair id="status-1-probe_complete" name="probe_complete"
value="true"/>
<nvpair id="status-1-master-Drbd2" name="master-Drbd2"
value="10000"/>
</instance_attributes>
</transient_attributes>
<lrm id="1">
<lrm_resources>
<lrm_resource id="Virtfs2" type="Filesystem" class="ocf"
provider="heartbeat">
<lrm_rsc_op id="Virtfs2_last_0" operation_key="Virtfs2_start_0"
operation="start" crm-debug-origin="do_update_resource"
crm_feature_set="3.0.10"
transition-key="55:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;55:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt1v" call-id="96" rc-code="0" op-status="0" interval="0"
last-run="1473598870" last-rc-change="1473598870" exec-time="639"
queue-time="1" op-digest="8dbd904c2115508ebcf3dffe8e7c6d82"/>
<lrm_rsc_op id="Virtfs2_monitor_20000"
operation_key="Virtfs2_monitor_20000" operation="monitor"
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
transition-key="56:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;56:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt1v" call-id="97" rc-code="0" op-status="0" interval="20000"
last-rc-change="1473598871" exec-time="41" queue-time="0"
op-digest="051271837d1a8eccc0af38fbd8c406e4"/>
</lrm_resource>
<lrm_resource id="fencing-idrac1" type="fence_idrac"
class="stonith">
<lrm_rsc_op id="fencing-idrac1_last_0"
operation_key="fencing-idrac1_start_0" operation="start"
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
transition-key="9:58:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;9:58:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt1v" call-id="58" rc-code="0" op-status="0" interval="0"
last-run="1473588685" last-rc-change="1473588685" exec-time="1045"
queue-time="0" op-digest="23a748cdf02f6f0fd03ac9823fc9bd52"
op-secure-params=" passwd "
op-secure-digest="2a5376722a6d891302b4e811e4de5c5a"/>
<lrm_rsc_op id="fencing-idrac1_monitor_60000"
operation_key="fencing-idrac1_monitor_60000" operation="monitor"
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
transition-key="11:59:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;11:59:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt1v" call-id="59" rc-code="0" op-status="0" interval="60000"
last-rc-change="1473588687" exec-time="84" queue-time="0"
op-digest="592f6bfb8f36e6645a6221de49f6f3b3" op-secure-params=" passwd "
op-secure-digest="2a5376722a6d891302b4e811e4de5c5a"/>
</lrm_resource>
<lrm_resource id="fencing-idrac2" type="fence_idrac"
class="stonith">
<lrm_rsc_op id="fencing-idrac2_last_0"
operation_key="fencing-idrac2_start_0" operation="start"
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
transition-key="14:62:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;14:62:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt1v" call-id="61" rc-code="0" op-status="0" interval="0"
last-run="1473590528" last-rc-change="1473590528" exec-time="80"
queue-time="0" op-digest="268b7ef79bdf7a09609aa321d3d18a61"
op-secure-params=" passwd "
op-secure-digest="f22e287dc4906f866a82eac0ab75d217"/>
<lrm_rsc_op id="fencing-idrac2_monitor_60000"
operation_key="fencing-idrac2_monitor_60000" operation="monitor"
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
transition-key="15:62:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;15:62:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt1v" call-id="62" rc-code="0" op-status="0" interval="60000"
last-rc-change="1473590529" exec-time="75" queue-time="1"
op-digest="40430ed0cd93e10fcba03a5e867b2af3" op-secure-params=" passwd "
op-secure-digest="f22e287dc4906f866a82eac0ab75d217"/>
</lrm_resource>
<lrm_resource id="dlm" type="controld" class="ocf"
provider="pacemaker">
<lrm_rsc_op id="dlm_last_0" operation_key="dlm_start_0"
operation="start" crm-debug-origin="build_active_RAs"
crm_feature_set="3.0.10"
transition-key="14:15:0:29cf445c-f17d-4274-89e9-a869e4783c46"
transition-magic="0:0;14:15:0:29cf445c-f17d-4274-89e9-a869e4783c46"
on_node="wirt1v" call-id="25" rc-code="0" op-status="0" interval="0"
last-run="1473544487" last-rc-change="1473544487" exec-time="1082"
queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
<lrm_rsc_op id="dlm_monitor_60000"
operation_key="dlm_monitor_60000" operation="monitor"
crm-debug-origin="build_active_RAs" crm_feature_set="3.0.10"
transition-key="8:16:0:29cf445c-f17d-4274-89e9-a869e4783c46"
transition-magic="0:0;8:16:0:29cf445c-f17d-4274-89e9-a869e4783c46"
on_node="wirt1v" call-id="28" rc-code="0" op-status="0" interval="60000"
last-rc-change="1473544489" exec-time="38" queue-time="0"
op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
</lrm_resource>
<lrm_resource id="Drbd2" type="drbd" class="ocf"
provider="linbit">
<lrm_rsc_op id="Drbd2_last_0" operation_key="Drbd2_promote_0"
operation="promote" crm-debug-origin="build_active_RAs"
crm_feature_set="3.0.10"
transition-key="17:16:0:29cf445c-f17d-4274-89e9-a869e4783c46"
transition-magic="0:0;17:16:0:29cf445c-f17d-4274-89e9-a869e4783c46"
on_node="wirt1v" call-id="30" rc-code="0" op-status="0" interval="0"
last-run="1473544489" last-rc-change="1473544489" exec-time="58"
queue-time="0" op-digest="d0c8a735862843030d8426a5218ceb92"/>
</lrm_resource>
</lrm_resources>
</lrm>
</node_state>
<node_state id="2" uname="wirt2v" in_ccm="true" crmd="online"
crm-debug-origin="do_update_resource" join="member" expected="member">
<transient_attributes id="2">
<instance_attributes id="status-2">
<nvpair id="status-2-shutdown" name="shutdown" value="0"/>
<nvpair id="status-2-probe_complete" name="probe_complete"
value="true"/>
<nvpair id="status-2-master-Drbd2" name="master-Drbd2"
value="10000"/>
</instance_attributes>
</transient_attributes>
<lrm id="2">
<lrm_resources>
<lrm_resource id="fencing-idrac1" type="fence_idrac"
class="stonith">
<lrm_rsc_op id="fencing-idrac1_last_0"
operation_key="fencing-idrac1_monitor_0" operation="monitor"
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
transition-key="7:1:7:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:7;7:1:7:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt2v" call-id="5" rc-code="7" op-status="0" interval="0"
last-run="1473544754" last-rc-change="1473544754" exec-time="1"
queue-time="0" op-digest="23a748cdf02f6f0fd03ac9823fc9bd52"
op-secure-params=" passwd "
op-secure-digest="2a5376722a6d891302b4e811e4de5c5a"/>
</lrm_resource>
<lrm_resource id="fencing-idrac2" type="fence_idrac"
class="stonith">
<lrm_rsc_op id="fencing-idrac2_last_0"
operation_key="fencing-idrac2_stop_0" operation="stop"
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
transition-key="13:62:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;13:62:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt2v" call-id="55" rc-code="0" op-status="0" interval="0"
last-run="1473590528" last-rc-change="1473590528" exec-time="0"
queue-time="0" op-digest="268b7ef79bdf7a09609aa321d3d18a61"
op-secure-params=" passwd "
op-secure-digest="f22e287dc4906f866a82eac0ab75d217"/>
<lrm_rsc_op id="fencing-idrac2_monitor_60000"
operation_key="fencing-idrac2_monitor_60000" operation="monitor"
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
transition-key="13:59:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;13:59:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt2v" call-id="53" rc-code="0" op-status="0" interval="60000"
last-rc-change="1473588689" exec-time="63" queue-time="0"
op-digest="40430ed0cd93e10fcba03a5e867b2af3" op-secure-params=" passwd "
op-secure-digest="f22e287dc4906f866a82eac0ab75d217"/>
</lrm_resource>
<lrm_resource id="dlm" type="controld" class="ocf"
provider="pacemaker">
<lrm_rsc_op id="dlm_last_0" operation_key="dlm_start_0"
operation="start" crm-debug-origin="do_update_resource"
crm_feature_set="3.0.10"
transition-key="15:83:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;15:83:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt2v" call-id="101" rc-code="0" op-status="0" interval="0"
last-run="1473598865" last-rc-change="1473598865" exec-time="1116"
queue-time="0" op-digest="f2317cad3d54cec5d7d7aa7d0bf35cf8"/>
<lrm_rsc_op id="dlm_monitor_60000"
operation_key="dlm_monitor_60000" operation="monitor"
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
transition-key="16:84:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;16:84:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt2v" call-id="104" rc-code="0" op-status="0" interval="60000"
last-rc-change="1473598870" exec-time="47" queue-time="0"
op-digest="4811cef7f7f94e3a35a70be7916cb2fd"/>
</lrm_resource>
<lrm_resource id="Drbd2" type="drbd" class="ocf"
provider="linbit">
<lrm_rsc_op id="Drbd2_last_0" operation_key="Drbd2_promote_0"
operation="promote" crm-debug-origin="do_update_resource"
crm_feature_set="3.0.10"
transition-key="27:84:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;27:84:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt2v" call-id="106" rc-code="0" op-status="0" interval="0"
last-run="1473598870" last-rc-change="1473598870" exec-time="69"
queue-time="0" op-digest="d0c8a735862843030d8426a5218ceb92"/>
</lrm_resource>
<lrm_resource id="Virtfs2" type="Filesystem" class="ocf"
provider="heartbeat">
<lrm_rsc_op id="Virtfs2_last_0" operation_key="Virtfs2_start_0"
operation="start" crm-debug-origin="do_update_resource"
crm_feature_set="3.0.10"
transition-key="53:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;53:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt2v" call-id="108" rc-code="0" op-status="0" interval="0"
last-run="1473598870" last-rc-change="1473598870" exec-time="859"
queue-time="0" op-digest="8dbd904c2115508ebcf3dffe8e7c6d82"/>
<lrm_rsc_op id="Virtfs2_monitor_20000"
operation_key="Virtfs2_monitor_20000" operation="monitor"
crm-debug-origin="do_update_resource" crm_feature_set="3.0.10"
transition-key="54:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
transition-magic="0:0;54:85:0:bfa57406-efca-4ca9-bdb5-01a121d172d8"
on_node="wirt2v" call-id="109" rc-code="0" op-status="0" interval="20000"
last-rc-change="1473598871" exec-time="50" queue-time="0"
op-digest="051271837d1a8eccc0af38fbd8c406e4"/>
</lrm_resource>
</lrm_resources>
</lrm>
</node_state>
</status>
</cib>
# =====================================================================
# wirt1v-during_pcs-cluster-standby-wirt2v.log
#
Sep 11 14:58:06 wirt1v crmd[31951]: notice: State transition S_IDLE ->
S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL
origin=abort_transition_graph ]
Sep 11 14:58:06 wirt1v pengine[31950]: notice: On loss of CCM Quorum:
Ignore
Sep 11 14:58:06 wirt1v pengine[31950]: notice: Stop dlm:1#011(wirt2v)
Sep 11 14:58:06 wirt1v pengine[31950]: notice: Demote Drbd2:1#011(Master
-> Stopped wirt2v)
Sep 11 14:58:06 wirt1v pengine[31950]: notice: Stop
Virtfs2:1#011(wirt2v)
Sep 11 14:58:06 wirt1v pengine[31950]: notice: Calculated Transition 81:
/var/lib/pacemaker/pengine/pe-input-2852.bz2
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Initiating action 71: notify
Drbd2_pre_notify_demote_0 on wirt1v (local)
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Initiating action 73: notify
Drbd2_pre_notify_demote_0 on wirt2v
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Initiating action 54: stop
Virtfs2_stop_0 on wirt2v
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Operation Drbd2_notify_0: ok
(node=wirt1v, call=86, rc=0, cib-update=0, confirmed=true)
Sep 11 14:58:06 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: recover
generation 2 done
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Initiating action 17: stop
dlm_stop_0 on wirt2v
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Initiating action 26: demote
Drbd2_demote_0 on wirt2v
Sep 11 14:58:06 wirt1v kernel: block drbd2: peer( Primary -> Secondary )
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Initiating action 72: notify
Drbd2_post_notify_demote_0 on wirt1v (local)
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Initiating action 74: notify
Drbd2_post_notify_demote_0 on wirt2v
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Operation Drbd2_notify_0: ok
(node=wirt1v, call=87, rc=0, cib-update=0, confirmed=true)
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Initiating action 66: notify
Drbd2_pre_notify_stop_0 on wirt1v (local)
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Initiating action 68: notify
Drbd2_pre_notify_stop_0 on wirt2v
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Operation Drbd2_notify_0: ok
(node=wirt1v, call=88, rc=0, cib-update=0, confirmed=true)
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Initiating action 27: stop
Drbd2_stop_0 on wirt2v
Sep 11 14:58:06 wirt1v kernel: drbd drbd2: peer( Secondary -> Unknown )
conn( Connected -> TearDown ) pdsk( UpToDate -> DUnknown )
Sep 11 14:58:06 wirt1v kernel: drbd drbd2: ack_receiver terminated
Sep 11 14:58:06 wirt1v kernel: drbd drbd2: Terminating drbd_a_drbd2
Sep 11 14:58:06 wirt1v kernel: block drbd2: new current UUID
12027A6DEC39CCB7:9A297D737BE3FBC7:214339307E5385FF:214239307E5385FF
Sep 11 14:58:06 wirt1v kernel: drbd drbd2: Connection closed
Sep 11 14:58:06 wirt1v kernel: drbd drbd2: conn( TearDown -> Unconnected )
Sep 11 14:58:06 wirt1v kernel: drbd drbd2: receiver terminated
Sep 11 14:58:06 wirt1v kernel: drbd drbd2: Restarting receiver thread
Sep 11 14:58:06 wirt1v kernel: drbd drbd2: receiver (re)started
Sep 11 14:58:06 wirt1v kernel: drbd drbd2: conn( Unconnected ->
WFConnection )
Sep 11 14:58:06 wirt1v crmd[31951]: warning: No match for shutdown action
on 2
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Transition aborted by deletion
of nvpair[@id='status-2-master-Drbd2']: Transient attribute change
(cib=0.104.3, source=abort_unless_down:333,
path=/cib/status/node_state[@id='2']/transient_attributes[@id='2']/instance_attributes[@id='status-2']/nvpair[@id='status-2-master-Drbd2'],
0)
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Initiating action 67: notify
Drbd2_post_notify_stop_0 on wirt1v (local)
Sep 11 14:58:06 wirt1v crmd[31951]: notice: Operation Drbd2_notify_0: ok
(node=wirt1v, call=89, rc=0, cib-update=0, confirmed=true)
Sep 11 14:58:08 wirt1v crmd[31951]: notice: Transition 81 (Complete=28,
Pending=0, Fired=0, Skipped=0, Incomplete=0,
Source=/var/lib/pacemaker/pengine/pe-input-2852.bz2): Complete
Sep 11 14:58:08 wirt1v pengine[31950]: notice: On loss of CCM Quorum:
Ignore
Sep 11 14:58:08 wirt1v pengine[31950]: notice: Calculated Transition 82:
/var/lib/pacemaker/pengine/pe-input-2853.bz2
Sep 11 14:58:08 wirt1v crmd[31951]: notice: Transition 82 (Complete=0,
Pending=0, Fired=0, Skipped=0, Incomplete=0,
Source=/var/lib/pacemaker/pengine/pe-input-2853.bz2): Complete
Sep 11 14:58:08 wirt1v crmd[31951]: notice: State transition
S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL
origin=notify_crmd ]
# =====================================================================
# wirt1v-during_pcs-cluster-unstandby-wirt2v.log
#
Sep 11 15:01:01 wirt1v systemd: Started Session 59 of user root.
Sep 11 15:01:01 wirt1v systemd: Starting Session 59 of user root.
Sep 11 15:01:05 wirt1v crmd[31951]: notice: State transition S_IDLE ->
S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL
origin=abort_transition_graph ]
Sep 11 15:01:05 wirt1v pengine[31950]: notice: On loss of CCM Quorum:
Ignore
Sep 11 15:01:05 wirt1v pengine[31950]: notice: Start dlm:1#011(wirt2v)
Sep 11 15:01:05 wirt1v pengine[31950]: notice: Start Drbd2:1#011(wirt2v)
Sep 11 15:01:05 wirt1v pengine[31950]: notice: Restart
Virtfs2:0#011(Started wirt1v)
Sep 11 15:01:05 wirt1v pengine[31950]: notice: Calculated Transition 83:
/var/lib/pacemaker/pengine/pe-input-2854.bz2
Sep 11 15:01:05 wirt1v crmd[31951]: notice: Initiating action 15: start
dlm_start_0 on wirt2v
Sep 11 15:01:05 wirt1v crmd[31951]: notice: Initiating action 61: notify
Drbd2_pre_notify_start_0 on wirt1v (local)
Sep 11 15:01:05 wirt1v crmd[31951]: notice: Initiating action 51: stop
Virtfs2_stop_0 on wirt1v (local)
Sep 11 15:01:06 wirt1v Filesystem(Virtfs2)[42311]: INFO: Running stop for
/dev/drbd2 on /virtfs2
Sep 11 15:01:06 wirt1v crmd[31951]: notice: Operation Drbd2_notify_0: ok
(node=wirt1v, call=90, rc=0, cib-update=0, confirmed=true)
Sep 11 15:01:06 wirt1v crmd[31951]: notice: Initiating action 25: start
Drbd2_start_0 on wirt2v
Sep 11 15:01:06 wirt1v Filesystem(Virtfs2)[42311]: INFO: Trying to unmount
/virtfs2
Sep 11 15:01:06 wirt1v Filesystem(Virtfs2)[42311]: ERROR: Couldn't unmount
/virtfs2; trying cleanup with TERM
Sep 11 15:01:06 wirt1v crmd[31951]: notice: Transition aborted by
status-2-master-Drbd2, master-Drbd2=1000: Transient attribute change
(create cib=0.105.1, source=abort_unless_down:319,
path=/cib/status/node_state[@id='2']/transient_attributes[@id='2']/instance_attributes[@id='status-2'],
0)
Sep 11 15:01:06 wirt1v Filesystem(Virtfs2)[42311]: INFO: sending signal
TERM to: root 39937 39934 0 14:49 pts/0 Ss+ 0:00 -bash
Sep 11 15:01:06 wirt1v crmd[31951]: notice: Initiating action 62: notify
Drbd2_post_notify_start_0 on wirt1v (local)
Sep 11 15:01:06 wirt1v crmd[31951]: notice: Initiating action 63: notify
Drbd2_post_notify_start_0 on wirt2v
Sep 11 15:01:06 wirt1v crmd[31951]: notice: Operation Drbd2_notify_0: ok
(node=wirt1v, call=93, rc=0, cib-update=0, confirmed=true)
Sep 11 15:01:06 wirt1v kernel: drbd drbd2: Handshake successful: Agreed
network protocol version 101
Sep 11 15:01:06 wirt1v kernel: drbd drbd2: Feature flags enabled on
protocol level: 0x7 TRIM THIN_RESYNC WRITE_SAME.
Sep 11 15:01:06 wirt1v kernel: drbd drbd2: conn( WFConnection ->
WFReportParams )
Sep 11 15:01:06 wirt1v kernel: drbd drbd2: Starting ack_recv thread (from
drbd_r_drbd2 [32139])
Sep 11 15:01:06 wirt1v kernel: block drbd2: drbd_sync_handshake:
Sep 11 15:01:06 wirt1v kernel: block drbd2: self
12027A6DEC39CCB7:9A297D737BE3FBC7:214339307E5385FF:214239307E5385FF bits:0
flags:0
Sep 11 15:01:06 wirt1v kernel: block drbd2: peer
9A297D737BE3FBC6:0000000000000000:214339307E5385FE:214239307E5385FF bits:0
flags:0
Sep 11 15:01:06 wirt1v kernel: block drbd2: uuid_compare()=1 by rule 70
Sep 11 15:01:06 wirt1v kernel: block drbd2: peer( Unknown -> Secondary )
conn( WFReportParams -> WFBitMapS ) pdsk( DUnknown -> Consistent )
Sep 11 15:01:06 wirt1v kernel: block drbd2: send bitmap stats
[Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%
Sep 11 15:01:06 wirt1v kernel: block drbd2: receive bitmap stats
[Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%
Sep 11 15:01:06 wirt1v kernel: block drbd2: helper command: /sbin/drbdadm
before-resync-source minor-2
Sep 11 15:01:06 wirt1v kernel: block drbd2: helper command: /sbin/drbdadm
before-resync-source minor-2 exit code 0 (0x0)
Sep 11 15:01:06 wirt1v kernel: block drbd2: conn( WFBitMapS -> SyncSource )
pdsk( Consistent -> Inconsistent )
Sep 11 15:01:06 wirt1v kernel: block drbd2: Began resync as SyncSource
(will sync 0 KB [0 bits set]).
Sep 11 15:01:06 wirt1v kernel: block drbd2: updated sync UUID
12027A6DEC39CCB7:9A2A7D737BE3FBC7:9A297D737BE3FBC7:214339307E5385FF
Sep 11 15:01:06 wirt1v kernel: block drbd2: Resync done (total 1 sec;
paused 0 sec; 0 K/sec)
Sep 11 15:01:06 wirt1v kernel: block drbd2: updated UUIDs
12027A6DEC39CCB7:0000000000000000:9A2A7D737BE3FBC7:9A297D737BE3FBC7
Sep 11 15:01:06 wirt1v kernel: block drbd2: conn( SyncSource -> Connected )
pdsk( Inconsistent -> UpToDate )
Sep 11 15:01:07 wirt1v Filesystem(Virtfs2)[42311]: ERROR: Couldn't unmount
/virtfs2; trying cleanup with TERM
Sep 11 15:01:07 wirt1v Filesystem(Virtfs2)[42311]: INFO: sending signal
TERM to: root 39937 39934 0 14:49 pts/0 Ss+ 0:00 -bash
Sep 11 15:01:08 wirt1v Filesystem(Virtfs2)[42311]: ERROR: Couldn't unmount
/virtfs2; trying cleanup with TERM
Sep 11 15:01:08 wirt1v Filesystem(Virtfs2)[42311]: INFO: sending signal
TERM to: root 39937 39934 0 14:49 pts/0 Ss+ 0:00 -bash
Sep 11 15:01:09 wirt1v Filesystem(Virtfs2)[42311]: ERROR: Couldn't unmount
/virtfs2; trying cleanup with KILL
Sep 11 15:01:09 wirt1v Filesystem(Virtfs2)[42311]: INFO: sending signal
KILL to: root 39937 39934 0 14:49 pts/0 Ss+ 0:00 -bash
Sep 11 15:01:09 wirt1v systemd-logind: Removed session 58.
Sep 11 15:01:10 wirt1v Filesystem(Virtfs2)[42311]: INFO: unmounted /virtfs2
successfully
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr [
umount: /virtfs2: target is busy. ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr
[ (In some cases useful info about processes that use ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr
[ the device is found by lsof(8) or fuser(1)) ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr [
ocf-exit-reason:Couldn't unmount /virtfs2; trying cleanup with TERM ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr [
umount: /virtfs2: target is busy. ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr
[ (In some cases useful info about processes that use ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr
[ the device is found by lsof(8) or fuser(1)) ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr [
ocf-exit-reason:Couldn't unmount /virtfs2; trying cleanup with TERM ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr [
umount: /virtfs2: target is busy. ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr
[ (In some cases useful info about processes that use ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr
[ the device is found by lsof(8) or fuser(1)) ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr [
ocf-exit-reason:Couldn't unmount /virtfs2; trying cleanup with TERM ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr [
umount: /virtfs2: target is busy. ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr
[ (In some cases useful info about processes that use ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr
[ the device is found by lsof(8) or fuser(1)) ]
Sep 11 15:01:10 wirt1v lrmd[31948]: notice: Virtfs2_stop_0:42311:stderr [
ocf-exit-reason:Couldn't unmount /virtfs2; trying cleanup with KILL ]
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Operation Virtfs2_stop_0: ok
(node=wirt1v, call=92, rc=0, cib-update=178, confirmed=true)
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Transition 83 (Complete=18,
Pending=0, Fired=0, Skipped=3, Incomplete=5,
Source=/var/lib/pacemaker/pengine/pe-input-2854.bz2): Stopped
Sep 11 15:01:10 wirt1v pengine[31950]: notice: On loss of CCM Quorum:
Ignore
Sep 11 15:01:10 wirt1v pengine[31950]: notice: Promote Drbd2:1#011(Slave
-> Master wirt2v)
Sep 11 15:01:10 wirt1v pengine[31950]: notice: Start
Virtfs2:0#011(wirt1v)
Sep 11 15:01:10 wirt1v pengine[31950]: notice: Start
Virtfs2:1#011(wirt2v)
Sep 11 15:01:10 wirt1v pengine[31950]: notice: Calculated Transition 84:
/var/lib/pacemaker/pengine/pe-input-2855.bz2
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Initiating action 16: monitor
dlm_monitor_60000 on wirt2v
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Initiating action 69: notify
Drbd2_pre_notify_promote_0 on wirt1v (local)
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Initiating action 71: notify
Drbd2_pre_notify_promote_0 on wirt2v
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Operation Drbd2_notify_0: ok
(node=wirt1v, call=94, rc=0, cib-update=0, confirmed=true)
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Initiating action 27: promote
Drbd2_promote_0 on wirt2v
Sep 11 15:01:10 wirt1v kernel: block drbd2: peer( Secondary -> Primary )
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Initiating action 70: notify
Drbd2_post_notify_promote_0 on wirt1v (local)
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Initiating action 72: notify
Drbd2_post_notify_promote_0 on wirt2v
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Transition aborted by
status-2-master-Drbd2, master-Drbd2=10000: Transient attribute change
(modify cib=0.105.7, source=abort_unless_down:319,
path=/cib/status/node_state[@id='2']/transient_attributes[@id='2']/instance_attributes[@id='status-2']/nvpair[@id='status-2-master-Drbd2'],
0)
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Operation Drbd2_notify_0: ok
(node=wirt1v, call=95, rc=0, cib-update=0, confirmed=true)
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Transition 84 (Complete=13,
Pending=0, Fired=0, Skipped=2, Incomplete=5,
Source=/var/lib/pacemaker/pengine/pe-input-2855.bz2): Stopped
Sep 11 15:01:10 wirt1v pengine[31950]: notice: On loss of CCM Quorum:
Ignore
Sep 11 15:01:10 wirt1v pengine[31950]: notice: Start
Virtfs2:0#011(wirt2v)
Sep 11 15:01:10 wirt1v pengine[31950]: notice: Start
Virtfs2:1#011(wirt1v)
Sep 11 15:01:10 wirt1v pengine[31950]: notice: Calculated Transition 85:
/var/lib/pacemaker/pengine/pe-input-2856.bz2
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Initiating action 53: start
Virtfs2_start_0 on wirt2v
Sep 11 15:01:10 wirt1v crmd[31951]: notice: Initiating action 55: start
Virtfs2:1_start_0 on wirt1v (local)
Sep 11 15:01:10 wirt1v Filesystem(Virtfs2)[42615]: INFO: Running start for
/dev/drbd2 on /virtfs2
Sep 11 15:01:10 wirt1v kernel: GFS2: fsid=klasterek:drbd2: Trying to join
cluster "lock_dlm", "klasterek:drbd2"
Sep 11 15:01:10 wirt1v kernel: dlm: Using TCP for communications
Sep 11 15:01:10 wirt1v kernel: dlm: connecting to 2
Sep 11 15:01:10 wirt1v kernel: dlm: got connection from 2
Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2: first mounter
control generation 0
Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2: Joined cluster.
Now mounting FS...
Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: jid=0, already
locked for use
Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: jid=0: Looking
at journal...
Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: jid=0: Done
Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: jid=1: Trying
to acquire journal lock...
Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: jid=1: Looking
at journal...
Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: jid=1: Done
Sep 11 15:01:11 wirt1v kernel: GFS2: fsid=klasterek:drbd2.0: first mount
done, others may mount
Sep 11 15:01:11 wirt1v crmd[31951]: notice: Operation Virtfs2_start_0: ok
(node=wirt1v, call=96, rc=0, cib-update=181, confirmed=true)
Sep 11 15:01:11 wirt1v crmd[31951]: notice: Initiating action 56: monitor
Virtfs2:1_monitor_20000 on wirt1v (local)
Sep 11 15:01:11 wirt1v crmd[31951]: notice: Initiating action 54: monitor
Virtfs2_monitor_20000 on wirt2v
Sep 11 15:01:11 wirt1v crmd[31951]: notice: Transition 85 (Complete=6,
Pending=0, Fired=0, Skipped=0, Incomplete=0,
Source=/var/lib/pacemaker/pengine/pe-input-2856.bz2): Complete
Sep 11 15:01:11 wirt1v crmd[31951]: notice: State transition
S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL
origin=notify_crmd ]
Sep 11 15:01:18 wirt1v systemd-logind: New session 60 of user root.
Sep 11 15:01:18 wirt1v systemd: Started Session 60 of user root.
Sep 11 15:01:18 wirt1v systemd: Starting Session 60 of user root.
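The stop failures above ("target is busy", with the log's own hint about lsof(8) or fuser(1)) were caused by a root bash session sitting inside /virtfs2, which the Filesystem agent eventually killed with TERM/KILL. As a minimal, hypothetical POSIX-sh helper (not part of any cluster agent; written here only for illustration), the same holders can be found by scanning /proc when fuser/lsof are not at hand:

```shell
#!/bin/sh
# Hypothetical helper: list PIDs holding open files under a directory by
# scanning /proc/<pid>/fd. Equivalent information to what the agent's hint
# suggests, e.g. `fuser -vm /virtfs2` or `lsof +D /virtfs2`.
holders() {
    dir=$1
    for fd in /proc/[0-9]*/fd/*; do
        # Resolve the fd symlink; skip entries we cannot read.
        target=$(readlink "$fd" 2>/dev/null) || continue
        case $target in
            "$dir"/*|"$dir")
                # /proc/<pid>/fd/<n> -> print <pid>
                pid=${fd#/proc/}
                echo "${pid%%/*}"
                ;;
        esac
    done | sort -u
}

holders "${1:-/virtfs2}"
```

Killing such a process (or just `cd`-ing the shell out of the mountpoint) before a standby/unstandby avoids the TERM/KILL cleanup cycle seen in the log.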
# =====================================================================
# wirt2v-during_pcs-cluster-standby-wirt2v.log
#
Sep 11 14:58:06 wirt2v crmd[27038]: notice: Operation Drbd2_notify_0: ok
(node=wirt2v, call=92, rc=0, cib-update=0, confirmed=true)
Sep 11 14:58:06 wirt2v Filesystem(Virtfs2)[28208]: INFO: Running stop for
/dev/drbd2 on /virtfs2
Sep 11 14:58:06 wirt2v Filesystem(Virtfs2)[28208]: INFO: Trying to unmount
/virtfs2
Sep 11 14:58:06 wirt2v Filesystem(Virtfs2)[28208]: INFO: unmounted /virtfs2
successfully
Sep 11 14:58:06 wirt2v crmd[27038]: notice: Operation Virtfs2_stop_0: ok
(node=wirt2v, call=94, rc=0, cib-update=58, confirmed=true)
Sep 11 14:58:06 wirt2v kernel: dlm: closing connection to node 2
Sep 11 14:58:06 wirt2v kernel: dlm: closing connection to node 1
Sep 11 14:58:06 wirt2v kernel: block drbd2: role( Primary -> Secondary )
Sep 11 14:58:06 wirt2v kernel: block drbd2: bitmap WRITE of 0 pages took 0
jiffies
Sep 11 14:58:06 wirt2v kernel: block drbd2: 0 KB (0 bits) marked
out-of-sync by on disk bit-map.
Sep 11 14:58:06 wirt2v systemd-udevd: error: /dev/drbd2: Wrong medium type
Sep 11 14:58:06 wirt2v crmd[27038]: error: pcmkRegisterNode: Triggered
assert at xml.c:594 : node->type == XML_ELEMENT_NODE
Sep 11 14:58:06 wirt2v crmd[27038]: notice: Operation Drbd2_demote_0: ok
(node=wirt2v, call=97, rc=0, cib-update=59, confirmed=true)
Sep 11 14:58:06 wirt2v systemd-udevd: error: /dev/drbd2: Wrong medium type
Sep 11 14:58:06 wirt2v crmd[27038]: notice: Operation Drbd2_notify_0: ok
(node=wirt2v, call=98, rc=0, cib-update=0, confirmed=true)
Sep 11 14:58:06 wirt2v crmd[27038]: notice: Operation Drbd2_notify_0: ok
(node=wirt2v, call=99, rc=0, cib-update=0, confirmed=true)
Sep 11 14:58:06 wirt2v kernel: drbd drbd2: peer( Primary -> Unknown ) conn(
Connected -> Disconnecting ) pdsk( UpToDate -> DUnknown )
Sep 11 14:58:06 wirt2v kernel: drbd drbd2: ack_receiver terminated
Sep 11 14:58:06 wirt2v kernel: drbd drbd2: Terminating drbd_a_drbd2
Sep 11 14:58:06 wirt2v kernel: drbd drbd2: Connection closed
Sep 11 14:58:06 wirt2v kernel: drbd drbd2: conn( Disconnecting ->
StandAlone )
Sep 11 14:58:06 wirt2v kernel: drbd drbd2: receiver terminated
Sep 11 14:58:06 wirt2v kernel: drbd drbd2: Terminating drbd_r_drbd2
Sep 11 14:58:06 wirt2v kernel: block drbd2: disk( UpToDate -> Failed )
Sep 11 14:58:06 wirt2v kernel: block drbd2: bitmap WRITE of 0 pages took 0
jiffies
Sep 11 14:58:06 wirt2v kernel: block drbd2: 0 KB (0 bits) marked
out-of-sync by on disk bit-map.
Sep 11 14:58:06 wirt2v kernel: block drbd2: disk( Failed -> Diskless )
Sep 11 14:58:06 wirt2v kernel: drbd drbd2: Terminating drbd_w_drbd2
Sep 11 14:58:06 wirt2v crmd[27038]: notice: Operation Drbd2_stop_0: ok
(node=wirt2v, call=100, rc=0, cib-update=60, confirmed=true)
Sep 11 14:58:08 wirt2v crmd[27038]: notice: Operation dlm_stop_0: ok
(node=wirt2v, call=96, rc=0, cib-update=61, confirmed=true)
# =====================================================================
# wirt2v-during_pcs-cluster-unstandby-wirt2v.log
#
Sep 11 15:01:01 wirt2v systemd: Started Session 51 of user root.
Sep 11 15:01:01 wirt2v systemd: Starting Session 51 of user root.
Sep 11 15:01:06 wirt2v dlm_controld[28577]: 62718 dlm_controld 4.0.2 started
Sep 11 15:01:06 wirt2v systemd-udevd: error: /dev/drbd2: Wrong medium type
Sep 11 15:01:06 wirt2v kernel: drbd drbd2: Starting worker thread (from
drbdsetup-84 [28610])
Sep 11 15:01:06 wirt2v kernel: block drbd2: disk( Diskless -> Attaching )
Sep 11 15:01:06 wirt2v kernel: drbd drbd2: Method to ensure write ordering:
flush
Sep 11 15:01:06 wirt2v kernel: block drbd2: max BIO size = 262144
Sep 11 15:01:06 wirt2v kernel: block drbd2: drbd_bm_resize called with
capacity == 104854328
Sep 11 15:01:06 wirt2v kernel: block drbd2: resync bitmap: bits=13106791
words=204794 pages=400
Sep 11 15:01:06 wirt2v kernel: block drbd2: size = 50 GB (52427164 KB)
Sep 11 15:01:06 wirt2v kernel: block drbd2: recounting of set bits took
additional 1 jiffies
Sep 11 15:01:06 wirt2v kernel: block drbd2: 0 KB (0 bits) marked
out-of-sync by on disk bit-map.
Sep 11 15:01:06 wirt2v kernel: block drbd2: disk( Attaching -> UpToDate )
Sep 11 15:01:06 wirt2v kernel: block drbd2: attached to UUIDs
9A297D737BE3FBC6:0000000000000000:214339307E5385FE:214239307E5385FF
Sep 11 15:01:06 wirt2v systemd-udevd: error: /dev/drbd2: Wrong medium type
Sep 11 15:01:06 wirt2v kernel: drbd drbd2: conn( StandAlone -> Unconnected )
Sep 11 15:01:06 wirt2v kernel: drbd drbd2: Starting receiver thread (from
drbd_w_drbd2 [28612])
Sep 11 15:01:06 wirt2v kernel: drbd drbd2: receiver (re)started
Sep 11 15:01:06 wirt2v kernel: drbd drbd2: conn( Unconnected ->
WFConnection )
Sep 11 15:01:06 wirt2v crmd[27038]: error: pcmkRegisterNode: Triggered
assert at xml.c:594 : node->type == XML_ELEMENT_NODE
Sep 11 15:01:06 wirt2v crmd[27038]: notice: Operation Drbd2_start_0: ok
(node=wirt2v, call=102, rc=0, cib-update=62, confirmed=true)
Sep 11 15:01:06 wirt2v crmd[27038]: notice: Operation Drbd2_notify_0: ok
(node=wirt2v, call=103, rc=0, cib-update=0, confirmed=true)
Sep 11 15:01:06 wirt2v kernel: drbd drbd2: Handshake successful: Agreed
network protocol version 101
Sep 11 15:01:06 wirt2v kernel: drbd drbd2: Feature flags enabled on
protocol level: 0x7 TRIM THIN_RESYNC WRITE_SAME.
Sep 11 15:01:06 wirt2v kernel: drbd drbd2: conn( WFConnection ->
WFReportParams )
Sep 11 15:01:06 wirt2v kernel: drbd drbd2: Starting ack_recv thread (from
drbd_r_drbd2 [28622])
Sep 11 15:01:06 wirt2v kernel: block drbd2: drbd_sync_handshake:
Sep 11 15:01:06 wirt2v kernel: block drbd2: self
9A297D737BE3FBC6:0000000000000000:214339307E5385FE:214239307E5385FF bits:0
flags:0
Sep 11 15:01:06 wirt2v kernel: block drbd2: peer
12027A6DEC39CCB7:9A297D737BE3FBC7:214339307E5385FF:214239307E5385FF bits:0
flags:0
Sep 11 15:01:06 wirt2v kernel: block drbd2: uuid_compare()=-1 by rule 50
Sep 11 15:01:06 wirt2v kernel: block drbd2: peer( Unknown -> Primary )
conn( WFReportParams -> WFBitMapT ) disk( UpToDate -> Outdated ) pdsk(
DUnknown -> UpToDate )
Sep 11 15:01:06 wirt2v kernel: block drbd2: receive bitmap stats
[Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%
Sep 11 15:01:06 wirt2v kernel: block drbd2: send bitmap stats
[Bytes(packets)]: plain 0(0), RLE 23(1), total 23; compression: 100.0%
Sep 11 15:01:06 wirt2v kernel: block drbd2: conn( WFBitMapT -> WFSyncUUID )
Sep 11 15:01:06 wirt2v kernel: block drbd2: updated sync uuid
9A2A7D737BE3FBC6:0000000000000000:214339307E5385FE:214239307E5385FF
Sep 11 15:01:06 wirt2v kernel: block drbd2: helper command: /sbin/drbdadm
before-resync-target minor-2
Sep 11 15:01:06 wirt2v kernel: block drbd2: helper command: /sbin/drbdadm
before-resync-target minor-2 exit code 0 (0x0)
Sep 11 15:01:06 wirt2v kernel: block drbd2: conn( WFSyncUUID -> SyncTarget
) disk( Outdated -> Inconsistent )
Sep 11 15:01:06 wirt2v kernel: block drbd2: Began resync as SyncTarget
(will sync 0 KB [0 bits set]).
Sep 11 15:01:06 wirt2v kernel: block drbd2: Resync done (total 1 sec;
paused 0 sec; 0 K/sec)
Sep 11 15:01:06 wirt2v kernel: block drbd2: updated UUIDs
12027A6DEC39CCB6:0000000000000000:9A2A7D737BE3FBC6:9A297D737BE3FBC7
Sep 11 15:01:06 wirt2v kernel: block drbd2: conn( SyncTarget -> Connected )
disk( Inconsistent -> UpToDate )
Sep 11 15:01:06 wirt2v kernel: block drbd2: helper command: /sbin/drbdadm
after-resync-target minor-2
Sep 11 15:01:06 wirt2v kernel: block drbd2: helper command: /sbin/drbdadm
after-resync-target minor-2 exit code 0 (0x0)
Sep 11 15:01:07 wirt2v crmd[27038]: notice: Operation dlm_start_0: ok
(node=wirt2v, call=101, rc=0, cib-update=63, confirmed=true)
Sep 11 15:01:10 wirt2v crmd[27038]: notice: Operation Drbd2_notify_0: ok
(node=wirt2v, call=105, rc=0, cib-update=0, confirmed=true)
Sep 11 15:01:10 wirt2v kernel: block drbd2: role( Secondary -> Primary )
Sep 11 15:01:10 wirt2v crmd[27038]: error: pcmkRegisterNode: Triggered
assert at xml.c:594 : node->type == XML_ELEMENT_NODE
Sep 11 15:01:10 wirt2v crmd[27038]: notice: Operation Drbd2_promote_0: ok
(node=wirt2v, call=106, rc=0, cib-update=65, confirmed=true)
Sep 11 15:01:10 wirt2v crmd[27038]: notice: Operation Drbd2_notify_0: ok
(node=wirt2v, call=107, rc=0, cib-update=0, confirmed=true)
Sep 11 15:01:10 wirt2v Filesystem(Virtfs2)[28772]: INFO: Running start for
/dev/drbd2 on /virtfs2
Sep 11 15:01:10 wirt2v kernel: GFS2: fsid=klasterek:drbd2: Trying to join
cluster "lock_dlm", "klasterek:drbd2"
Sep 11 15:01:10 wirt2v kernel: dlm: Using TCP for communications
Sep 11 15:01:10 wirt2v kernel: dlm: connecting to 1
Sep 11 15:01:10 wirt2v kernel: dlm: got connection from 1
Sep 11 15:01:11 wirt2v kernel: GFS2: fsid=klasterek:drbd2: Joined cluster.
Now mounting FS...
Sep 11 15:01:11 wirt2v kernel: GFS2: fsid=klasterek:drbd2.1: jid=1, already
locked for use
Sep 11 15:01:11 wirt2v kernel: GFS2: fsid=klasterek:drbd2.1: jid=1: Looking
at journal...
Sep 11 15:01:11 wirt2v kernel: GFS2: fsid=klasterek:drbd2.1: jid=1: Done
Sep 11 15:01:11 wirt2v crmd[27038]: notice: Operation Virtfs2_start_0: ok
(node=wirt2v, call=108, rc=0, cib-update=66, confirmed=true)
==============================================================
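When reading long excerpts like the ones above, it helps to filter out just the scheduler's decisions and the transition summaries, since those lines (e.g. "Start Virtfs2:1", "Transition 84 ... Stopped") show exactly when and why resources were restarted. A small hypothetical filter (filenames and patterns are illustrative, assuming one joined log line per message):

```shell
#!/bin/sh
# Hypothetical log filter: print pengine decisions and crmd transition
# summaries from a syslog excerpt, to see what the cluster decided to restart.
summarize() {
    logfile=$1
    echo "== scheduler decisions =="
    grep -E 'pengine.*notice: +(Promote|Demote|Start|Stop|Move|Restart)' "$logfile"
    echo "== transitions =="
    grep -E 'crmd.*(Calculated Transition|Transition [0-9]+ \(|Transition aborted)' "$logfile"
}

# Only run when given a log file argument.
if [ $# -ge 1 ]; then
    summarize "$1"
fi
```

Running this over the wirt1v excerpt would surface the "Transition aborted by status-2-master-Drbd2" line, which marks the point where the recomputed transition scheduled the restart of Virtfs2 on wirt1v.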