<div dir="ltr"><div>Hi,</div><div><br></div><div>I created a cloned (master/slave) resource vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b on the systems vsanqa11/vsanqa12.</div><div>"service pacemaker stop" is called on vsanqa12 at 06:40:19 and completes at approximately 06:41:18.</div>
<div>"service pacemaker start" is called on vsanqa12 at 06:41:20 and completes at 06:42:30.</div><div><br></div><div>I see that on vsanqa11, a stop for the resource instance on vsanqa12 is scheduled at 06:42:29. Why does Pacemaker invoke a stop on resource vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b:1?</div>
<div>And why are there so many "processor joined or left the membership" messages on vsanqa12 during this period?</div><div><br></div><div><br></div><div>Configuration</div><div>==========</div><div><br></div><div><br></div><div><div>[root@vsanqa12 ~]# crm configure show</div>
<div>node vsanqa11</div><div>node vsanqa12</div><div>primitive vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b ocf:heartbeat:vgc-cm-agent.ocf \</div><div> params cluster_uuid="46cd52eb-fecc-49f8-bbe8-bc4157672b7b" \</div>
<div> op monitor interval="30s" role="Master" timeout="100s" \</div><div> op monitor interval="31s" role="Slave" timeout="100s"</div><div>ms ms-46cd52eb-fecc-49f8-bbe8-bc4157672b7b vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b \</div>
<div> meta clone-max="2" globally-unique="false" target-role="Started"</div><div>location ms-46cd52eb-fecc-49f8-bbe8-bc4157672b7b-nodes ms-46cd52eb-fecc-49f8-bbe8-bc4157672b7b \</div><div>
rule $id="ms-46cd52eb-fecc-49f8-bbe8-bc4157672b7b-nodes-rule" -inf: #uname ne vsanqa11 and #uname ne vsanqa12</div><div>property $id="cib-bootstrap-options" \</div><div> dc-version="1.1.8-7.el6-394e906" \</div>
<div> cluster-infrastructure="cman" \</div><div> stonith-enabled="false" \</div><div> no-quorum-policy="ignore"</div><div>rsc_defaults $id="rsc-options" \</div>
<div> resource-stickiness="100"</div></div><div><br></div><div><br></div><div>Logs</div><div>====</div><div><br></div><div>vsanqa11</div><div>
<br></div><div>Mar 24 06:37:38 vsanqa11 kernel: VGC: [000000650fed1b03:I] Instance "VHA" connected with peer "vsanqa12" (status 0xc, 1, 0)</div><div>Mar 24 06:37:58 vsanqa11 attrd[24424]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (2)</div>
<div>Mar 24 06:37:58 vsanqa11 attrd[24424]: notice: attrd_perform_update: Sent update 11: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b=2</div><div>Mar 24 06:40:17 vsanqa11 attrd[24424]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (4)</div>
<div>Mar 24 06:40:17 vsanqa11 attrd[24424]: notice: attrd_perform_update: Sent update 17: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b=4</div><div>Mar 24 06:40:17 vsanqa11 crmd[24426]: notice: process_lrm_event: LRM operation vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b_promote_0 (call=18, rc=0, cib-update=12, confirmed=true) ok</div>
<div>Mar 24 06:40:17 vsanqa11 crmd[24426]: notice: process_lrm_event: LRM operation vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b_monitor_30000 (call=21, rc=8, cib-update=13, confirmed=false) master</div><div>Mar 24 06:40:17 vsanqa11 crmd[24426]: notice: peer_update_callback: Got client status callback - our DC is dead</div>
<div>Mar 24 06:40:17 vsanqa11 crmd[24426]: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_CRMD_STATUS_CALLBACK origin=peer_update_callback ]</div><div>Mar 24 06:40:17 vsanqa11 crmd[24426]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]</div>
<div>Mar 24 06:40:17 vsanqa11 corosync[24211]: [TOTEM ] Retransmit List: fa fb</div><div>Mar 24 06:40:17 vsanqa11 attrd[24424]: notice: attrd_local_callback: Sending full refresh (origin=crmd)</div><div>Mar 24 06:40:17 vsanqa11 attrd[24424]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (4)</div>
<div>Mar 24 06:40:17 vsanqa11 attrd[24424]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)</div><div>Mar 24 06:40:18 vsanqa11 pengine[24425]: notice: unpack_config: On loss of CCM Quorum: Ignore</div>
<div>Mar 24 06:40:18 vsanqa11 pengine[24425]: notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-359.bz2</div><div>Mar 24 06:40:18 vsanqa11 crmd[24426]: notice: run_graph: Transition 0 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-359.bz2): Complete</div>
<div>Mar 24 06:40:18 vsanqa11 crmd[24426]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]</div><div>Mar 24 06:40:19 vsanqa11 corosync[24211]: [CMAN ] quorum lost, blocking activity</div>
<div>Mar 24 06:40:19 vsanqa11 corosync[24211]: [QUORUM] This node is within the non-primary component and will NOT provide any services.</div><div>Mar 24 06:40:19 vsanqa11 corosync[24211]: [QUORUM] Members[1]: 1</div>
<div>Mar 24 06:40:19 vsanqa11 corosync[24211]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div><div>Mar 24 06:40:19 vsanqa11 crmd[24426]: notice: cman_event_callback: Membership 4047912: quorum lost</div>
<div>Mar 24 06:40:19 vsanqa11 crmd[24426]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa12[2] - state is now lost</div><div>Mar 24 06:40:19 vsanqa11 corosync[24211]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.123) ; members(old:2 left:1)</div>
<div>Mar 24 06:40:19 vsanqa11 corosync[24211]: [MAIN ] Completed service synchronization, ready to provide service.</div><div>Mar 24 06:40:19 vsanqa11 kernel: dlm: closing connection to node 2</div><div>Mar 24 06:42:01 vsanqa11 kernel: doing a send with ctx_id 1</div>
<div>Mar 24 06:42:07 vsanqa11 kernel: VGC: [000000650fed1b03:I] Instance "VHA" connected with peer "vsanqa12" (status 0xc, 1, 0)</div><div>Mar 24 06:42:26 vsanqa11 corosync[24211]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div>
<div>Mar 24 06:42:26 vsanqa11 corosync[24211]: [CMAN ] quorum regained, resuming activity</div><div>Mar 24 06:42:26 vsanqa11 corosync[24211]: [QUORUM] This node is within the primary component and will provide service.</div>
<div>Mar 24 06:42:26 vsanqa11 corosync[24211]: [QUORUM] Members[2]: 1 2</div><div>Mar 24 06:42:26 vsanqa11 corosync[24211]: [QUORUM] Members[2]: 1 2</div><div>Mar 24 06:42:26 vsanqa11 crmd[24426]: notice: cman_event_callback: Membership 4047980: quorum acquired</div>
<div>Mar 24 06:42:26 vsanqa11 crmd[24426]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa12[2] - state is now member</div><div>Mar 24 06:42:26 vsanqa11 corosync[24211]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.123) ; members(old:1 left:0)</div>
<div>Mar 24 06:42:26 vsanqa11 corosync[24211]: [MAIN ] Completed service synchronization, ready to provide service.</div><div>Mar 24 06:42:27 vsanqa11 crmd[24426]: warning: match_down_event: No match for shutdown action on vsanqa12</div>
<div>Mar 24 06:42:27 vsanqa11 crmd[24426]: warning: crmd_ha_msg_filter: Another DC detected: vsanqa12 (op=noop)</div><div>Mar 24 06:42:27 vsanqa11 crmd[24426]: warning: crmd_ha_msg_filter: Another DC detected: vsanqa12 (op=noop)</div>
<div>Mar 24 06:42:27 vsanqa11 crmd[24426]: notice: do_state_transition: State transition S_IDLE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]</div><div>Mar 24 06:42:27 vsanqa11 crmd[24426]: warning: do_log: FSA: Input I_NODE_JOIN from peer_update_callback() received in state S_ELECTION</div>
<div>Mar 24 06:42:27 vsanqa11 cib[24421]: warning: cib_process_diff: Diff 0.11071.16 -> 0.11071.17 from vsanqa12 not applied to 0.11071.93: current "num_updates" is greater than required</div><div>Mar 24 06:42:27 vsanqa11 cib[24421]: warning: cib_process_diff: Diff 0.11071.17 -> 0.11071.18 from vsanqa12 not applied to 0.11071.93: current "num_updates" is greater than required</div>
<div>Mar 24 06:42:27 vsanqa11 cib[24421]: warning: cib_process_diff: Diff 0.11071.18 -> 0.11071.19 from vsanqa12 not applied to 0.11071.93: current "num_updates" is greater than required</div><div>Mar 24 06:42:27 vsanqa11 cib[24421]: warning: cib_process_diff: Diff 0.11071.19 -> 0.11071.20 from vsanqa12 not applied to 0.11071.93: current "num_updates" is greater than required</div>
<div>Mar 24 06:42:27 vsanqa11 cib[24421]: warning: cib_process_diff: Diff 0.11071.20 -> 0.11071.21 from vsanqa12 not applied to 0.11071.93: current "num_updates" is greater than required</div><div>Mar 24 06:42:27 vsanqa11 cib[24421]: warning: cib_process_diff: Diff 0.11071.21 -> 0.11071.22 from vsanqa12 not applied to 0.11071.93: current "num_updates" is greater than required</div>
<div>Mar 24 06:42:27 vsanqa11 crmd[24426]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]</div><div>Mar 24 06:42:28 vsanqa11 attrd[24424]: notice: attrd_local_callback: Sending full refresh (origin=crmd)</div>
<div>Mar 24 06:42:28 vsanqa11 attrd[24424]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (4)</div><div>Mar 24 06:42:28 vsanqa11 attrd[24424]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)</div>
<div>Mar 24 06:42:28 vsanqa11 corosync[24211]: [TOTEM ] Retransmit List: aa</div><div>Mar 24 06:42:29 vsanqa11 pengine[24425]: notice: unpack_config: On loss of CCM Quorum: Ignore</div><div>Mar 24 06:42:29 vsanqa11 pengine[24425]: notice: LogActions: Stop vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b:1#011(vsanqa12) ===<<< ***STOP ***>>>====</div>
<div>Mar 24 06:42:29 vsanqa11 pengine[24425]: notice: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-360.bz2</div><div>Mar 24 06:42:35 vsanqa11 crmd[24426]: notice: run_graph: Transition 1 (Complete=3, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-360.bz2): Stopped</div>
<div>Mar 24 06:42:35 vsanqa11 pengine[24425]: notice: unpack_config: On loss of CCM Quorum: Ignore</div><div>Mar 24 06:42:35 vsanqa11 pengine[24425]: notice: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-input-361.bz2</div>
<div>Mar 24 06:42:35 vsanqa11 crmd[24426]: notice: run_graph: Transition 2 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-361.bz2): Complete</div><div>Mar 24 06:42:35 vsanqa11 crmd[24426]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]</div>
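<div><br></div><div>For what it's worth, the decision behind the stop above is recorded in pe-input-360.bz2 on vsanqa11 (the DC at that point). A sketch of how it can be replayed offline, assuming the file is still present and crm_simulate is run on that node:</div>

```shell
# Replay transition 1 from the saved policy-engine input (path taken from the
# vsanqa11 log above). -S simulates the resulting transition, -s prints the
# allocation scores, -x reads the cluster state from the saved PE input file.
crm_simulate -S -s -x /var/lib/pacemaker/pengine/pe-input-360.bz2
```

<div>The score output should show why vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b:1 was assigned a stop on vsanqa12 in that transition.</div>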
<div><br></div><div><br></div><div><br></div><div><br></div><div><br></div><div>vsanqa12</div><div><br></div><div>Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Unloading all Corosync service engines.</div><div>Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync extended virtual synchrony service</div>
<div>Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync configuration service</div><div>Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync cluster closed process group service v1.01</div>
<div>Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync cluster config database access v1.01</div><div>Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync profile loading service</div>
<div>Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: openais checkpoint service B.01.01</div><div>Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync CMAN membership service 2.90</div>
<div>Mar 24 06:40:19 vsanqa12 corosync[15344]: [SERV ] Service engine unloaded: corosync cluster quorum service v0.1</div><div>Mar 24 06:40:19 vsanqa12 corosync[15344]: [MAIN ] Corosync Cluster Engine exiting with status 0 at main.c:1894.</div>
<div>Mar 24 06:41:22 vsanqa12 kernel: DLM (built Nov 9 2011 08:04:11) installed</div><div>Mar 24 06:41:22 vsanqa12 corosync[17159]: [MAIN ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.</div>
<div>Mar 24 06:41:22 vsanqa12 corosync[17159]: [MAIN ] Corosync built-in features: nss dbus rdma snmp</div><div>Mar 24 06:41:22 vsanqa12 corosync[17159]: [MAIN ] Successfully read config from /etc/cluster/cluster.conf</div>
<div>Mar 24 06:41:22 vsanqa12 corosync[17159]: [MAIN ] Successfully parsed cman config</div><div>Mar 24 06:41:22 vsanqa12 corosync[17159]: [TOTEM ] Initializing transport (UDP/IP Multicast).</div><div>Mar 24 06:41:22 vsanqa12 corosync[17159]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).</div>
<div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [TOTEM ] The network interface [172.16.68.124] is now up.</div><div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [QUORUM] Using quorum provider quorum_cman</div><div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1</div>
<div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [CMAN ] CMAN 3.0.12.1 (built Feb 23 2013 10:25:47) started</div><div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync CMAN membership service 2.90</div>
<div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: openais checkpoint service B.01.01</div><div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync extended virtual synchrony service</div>
<div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync configuration service</div><div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01</div>
<div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync cluster config database access v1.01</div><div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync profile loading service</div>
<div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [QUORUM] Using quorum provider quorum_cman</div><div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1</div><div>
Mar 24 06:41:23 vsanqa12 corosync[17159]: [MAIN ] Compatibility mode set to whitetank. Using V1 and V2 of the synchronization engine.</div><div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div>
<div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [QUORUM] Members[1]: 2</div><div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [QUORUM] Members[1]: 2</div><div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:0 left:0)</div>
<div>Mar 24 06:41:23 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div><div>Mar 24 06:41:25 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div>
<div>Mar 24 06:41:25 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div><div>Mar 24 06:41:25 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div>
<div>Mar 24 06:41:27 vsanqa12 fenced[17213]: fenced 3.0.12.1 started</div><div>Mar 24 06:41:27 vsanqa12 dlm_controld[17239]: dlm_controld 3.0.12.1 started</div><div>Mar 24 06:41:28 vsanqa12 gfs_controld[17288]: gfs_controld 3.0.12.1 started</div>
<div>Mar 24 06:41:29 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div><div>Mar 24 06:41:29 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div>
<div>Mar 24 06:41:30 vsanqa12 pacemakerd[17363]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log</div><div>Mar 24 06:41:30 vsanqa12 pacemakerd[17363]: notice: main: Starting Pacemaker 1.1.8-7.el6 (Build: 394e906): generated-manpages agent-manpages ascii-docs publican-docs ncurses libqb-logging libqb-ipc corosync-plugin cman</div>
<div>Mar 24 06:41:30 vsanqa12 pacemakerd[17363]: notice: update_node_processes: 0x125af80 Node 2 now known as vsanqa12, was:</div><div>Mar 24 06:41:30 vsanqa12 stonith-ng[17370]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log</div>
<div>Mar 24 06:41:30 vsanqa12 cib[17369]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log</div><div>Mar 24 06:41:30 vsanqa12 stonith-ng[17370]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman</div>
<div>Mar 24 06:41:30 vsanqa12 lrmd[17371]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log</div><div>Mar 24 06:41:30 vsanqa12 cib[17369]: notice: main: Using legacy config location: /var/lib/heartbeat/crm</div>
<div>Mar 24 06:41:30 vsanqa12 attrd[17372]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log</div><div>Mar 24 06:41:30 vsanqa12 attrd[17372]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman</div>
<div>Mar 24 06:41:30 vsanqa12 pengine[17373]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log</div><div>Mar 24 06:41:30 vsanqa12 crmd[17374]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log</div>
<div>Mar 24 06:41:30 vsanqa12 crmd[17374]: notice: main: CRM Git Version: 394e906</div><div>Mar 24 06:41:30 vsanqa12 attrd[17372]: notice: main: Starting mainloop...</div><div>Mar 24 06:41:30 vsanqa12 cib[17369]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman</div>
<div>Mar 24 06:41:31 vsanqa12 stonith-ng[17370]: notice: setup_cib: Watching for stonith topology changes</div><div>Mar 24 06:41:31 vsanqa12 crmd[17374]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman</div>
<div>Mar 24 06:41:31 vsanqa12 crmd[17374]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa11[1] - state is now lost</div><div>Mar 24 06:41:31 vsanqa12 crmd[17374]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa12[2] - state is now member</div>
<div>Mar 24 06:41:31 vsanqa12 crmd[17374]: notice: do_started: The local CRM is operational</div><div>Mar 24 06:41:33 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div>
<div>Mar 24 06:41:33 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div><div>Mar 24 06:41:33 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div>
<div>Mar 24 06:41:37 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div><div>Mar 24 06:41:37 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div>
<div>Mar 24 06:41:37 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div><div>Mar 24 06:41:40 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div>
<div>Mar 24 06:41:40 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div><div>Mar 24 06:41:40 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div>
<div>Mar 24 06:41:44 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div><div>Mar 24 06:41:44 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div>
<div>Mar 24 06:41:44 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div><div>Mar 24 06:41:48 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div>
<div>Mar 24 06:41:48 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div><div>Mar 24 06:41:48 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div>
<div>Mar 24 06:41:52 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div><div>Mar 24 06:41:52 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div>
<div>Mar 24 06:41:52 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div><div>Mar 24 06:41:52 vsanqa12 crmd[17374]: warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING</div>
<div>Mar 24 06:41:52 vsanqa12 crmd[17374]: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]</div><div>Mar 24 06:41:52 vsanqa12 attrd[17372]: notice: attrd_local_callback: Sending full refresh (origin=crmd)</div>
<div>Mar 24 06:41:53 vsanqa12 pengine[17373]: notice: unpack_config: On loss of CCM Quorum: Ignore</div><div>Mar 24 06:41:53 vsanqa12 pengine[17373]: notice: LogActions: Start vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b:0#011(vsanqa12)</div>
<div>Mar 24 06:41:53 vsanqa12 pengine[17373]: notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-1494.bz2</div><div>Mar 24 06:41:54 vsanqa12 crmd[17374]: notice: process_lrm_event: LRM operation vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b_monitor_0 (call=6, rc=7, cib-update=24, confirmed=true) not running</div>
<div>Mar 24 06:41:54 vsanqa12 attrd[17372]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)</div><div>Mar 24 06:41:54 vsanqa12 attrd[17372]: notice: attrd_perform_update: Sent update 4: probe_complete=true</div>
<div>Mar 24 06:41:54 vsanqa12 kernel: VGC: [0000006711331b03:I] Started vHA/vShare instance /dev/vgca0_VHA</div><div>Mar 24 06:41:55 vsanqa12 crmd[17374]: notice: process_lrm_event: LRM operation vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b_start_0 (call=9, rc=0, cib-update=25, confirmed=true) ok</div>
<div>Mar 24 06:41:55 vsanqa12 crmd[17374]: notice: process_lrm_event: LRM operation vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b_monitor_31000 (call=12, rc=0, cib-update=26, confirmed=false) ok</div><div>Mar 24 06:41:55 vsanqa12 crmd[17374]: notice: run_graph: Transition 0 (Complete=7, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1494.bz2): Complete</div>
<div>Mar 24 06:41:55 vsanqa12 crmd[17374]: notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]</div><div>Mar 24 06:41:56 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div>
<div>Mar 24 06:41:56 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div><div>Mar 24 06:41:56 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div>
<div>Mar 24 06:42:00 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div><div>Mar 24 06:42:00 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div>
<div>Mar 24 06:42:00 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div><div>Mar 24 06:42:01 vsanqa12 kernel: doing a send with ctx_id 1</div><div>Mar 24 06:42:03 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div>
<div>Mar 24 06:42:03 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div><div>Mar 24 06:42:03 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div>
<div>Mar 24 06:42:06 vsanqa12 kernel: vgca0_VHA: unknown partition table</div><div>Mar 24 06:42:07 vsanqa12 kernel: doing a send with ctx_id 1</div><div>Mar 24 06:42:07 vsanqa12 kernel: VGC: [000000650fed1b03:I] Instance "VHA" connected with peer "vsanqa11" (status 0xc, 1, 0)</div>
<div>Mar 24 06:42:07 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div><div>Mar 24 06:42:07 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div>
<div>Mar 24 06:42:07 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div><div>Mar 24 06:42:11 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div>
<div>Mar 24 06:42:11 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div><div>Mar 24 06:42:11 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div>
<div>Mar 24 06:42:15 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div><div>Mar 24 06:42:15 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div>
<div>Mar 24 06:42:15 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div><div>Mar 24 06:42:19 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div>
<div>Mar 24 06:42:19 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div><div>Mar 24 06:42:19 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div>
<div>Mar 24 06:42:22 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div><div>Mar 24 06:42:22 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div>
<div>Mar 24 06:42:22 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.124) ; members(old:1 left:0)</div><div>Mar 24 06:42:22 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div>
<div>Mar 24 06:42:26 vsanqa12 corosync[17159]: [TOTEM ] A processor joined or left the membership and a new membership was formed.</div><div>Mar 24 06:42:26 vsanqa12 corosync[17159]: [CMAN ] quorum regained, resuming activity</div>
<div>Mar 24 06:42:26 vsanqa12 corosync[17159]: [QUORUM] This node is within the primary component and will provide service.</div><div>Mar 24 06:42:26 vsanqa12 corosync[17159]: [QUORUM] Members[2]: 1 2</div><div>Mar 24 06:42:26 vsanqa12 corosync[17159]: [QUORUM] Members[2]: 1 2</div>
<div>Mar 24 06:42:26 vsanqa12 crmd[17374]: notice: cman_event_callback: Membership 4047980: quorum acquired</div><div>Mar 24 06:42:26 vsanqa12 crmd[17374]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa11[1] - state is now member</div>
<div>Mar 24 06:42:26 vsanqa12 corosync[17159]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.123) ; members(old:1 left:0)</div><div>Mar 24 06:42:26 vsanqa12 corosync[17159]: [MAIN ] Completed service synchronization, ready to provide service.</div>
<div>Mar 24 06:42:26 vsanqa12 fenced[17213]: fencing deferred to vsanqa11</div><div>Mar 24 06:42:27 vsanqa12 crmd[17374]: warning: match_down_event: No match for shutdown action on vsanqa11</div><div>Mar 24 06:42:27 vsanqa12 cib[17369]: warning: cib_server_process_diff: Not requesting full refresh in R/W mode</div>
<div>Mar 24 06:42:27 vsanqa12 crmd[17374]: warning: crmd_ha_msg_filter: Another DC detected: vsanqa11 (op=noop)</div><div>Mar 24 06:42:27 vsanqa12 crmd[17374]: warning: crmd_ha_msg_filter: Another DC detected: vsanqa11 (op=noop)</div>
<div>Mar 24 06:42:27 vsanqa12 crmd[17374]: notice: do_state_transition: State transition S_IDLE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=crmd_ha_msg_filter ]</div><div>Mar 24 06:42:27 vsanqa12 crmd[17374]: warning: do_log: FSA: Input I_NODE_JOIN from peer_update_callback() received in state S_ELECTION</div>
<div>Mar 24 06:42:27 vsanqa12 crmd[17374]: notice: do_state_transition: State transition S_ELECTION -> S_RELEASE_DC [ input=I_RELEASE_DC cause=C_FSA_INTERNAL origin=do_election_count_vote ]</div><div>Mar 24 06:42:27 vsanqa12 pacemakerd[17363]: notice: update_node_processes: 0x1271260 Node 1 now known as vsanqa11, was:</div>
<div>Mar 24 06:42:27 vsanqa12 cib[17369]: warning: cib_server_process_diff: Not requesting full refresh in R/W mode</div><div>Mar 24 06:42:27 vsanqa12 cib[17369]: warning: cib_server_process_diff: Not requesting full refresh in R/W mode</div>
<div>Mar 24 06:42:27 vsanqa12 cib[17369]: warning: cib_server_process_diff: Not requesting full refresh in R/W mode</div><div>Mar 24 06:42:27 vsanqa12 cib[17369]: warning: cib_server_process_diff: Not requesting full refresh in R/W mode</div>
<div>Mar 24 06:42:27 vsanqa12 cib[17369]: warning: cib_server_process_diff: Not requesting full refresh in R/W mode</div><div>Mar 24 06:42:27 vsanqa12 crmd[17374]: warning: do_log: FSA: Input I_RELEASE_DC from do_election_count_vote() received in state S_RELEASE_DC</div>
<div>Mar 24 06:42:27 vsanqa12 attrd[17372]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (3)</div><div>Mar 24 06:42:27 vsanqa12 attrd[17372]: notice: attrd_perform_update: Sent update 7: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b=3</div>
<div>Mar 24 06:42:27 vsanqa12 cib[17369]: notice: cib_server_process_diff: Not applying diff 0.11071.94 -> 0.11071.95 (sync in progress)</div><div>Mar 24 06:42:27 vsanqa12 cib[17369]: notice: cib_server_process_diff: Not applying diff 0.11071.95 -> 0.11071.96 (sync in progress)</div>
<div>Mar 24 06:42:28 vsanqa12 crmd[17374]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]</div><div>Mar 24 06:42:28 vsanqa12 attrd[17372]: notice: attrd_local_callback: Sending full refresh (origin=crmd)</div>
<div>Mar 24 06:42:28 vsanqa12 attrd[17372]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (3)</div><div>Mar 24 06:42:28 vsanqa12 attrd[17372]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)</div>
<div>Mar 24 06:42:29 vsanqa12 kernel: VGC: [0000006711341b03:I] Stopped vHA/vShare instance /dev/vgca0_VHA</div><div>Mar 24 06:42:35 vsanqa12 attrd[17372]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b (<null>)</div>
<div>Mar 24 06:42:35 vsanqa12 attrd[17372]: notice: attrd_perform_update: Sent delete 30: node=vsanqa12, attr=master-vha-46cd52eb-fecc-49f8-bbe8-bc4157672b7b, id=<n/a>, set=(null), section=status</div><div><br></div>
<div><br></div><div>Regards,</div><div> kiran</div></div>