<div dir="ltr"><div>I have a two node test cluster running with CMAN plugin. Fencing is not configured. I see that vsanqa7 sends a message to vsanqa8 to shutdown.</div><div>However, it is not clear why vsanqa7 takes this decision.</div>

=================== /var/log/messages =================================

Node vsanqa7

Jul 15 08:51:18 vsanqa7 corosync[12081]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 15 08:51:18 vsanqa7 corosync[12081]: [CMAN ] quorum regained, resuming activity
Jul 15 08:51:18 vsanqa7 corosync[12081]: [QUORUM] This node is within the primary component and will provide service.
Jul 15 08:51:18 vsanqa7 corosync[12081]: [QUORUM] Members[2]: 1 2
Jul 15 08:51:18 vsanqa7 corosync[12081]: [QUORUM] Members[2]: 1 2
Jul 15 08:51:18 vsanqa7 crmd[12372]: notice: cman_event_callback: Membership 4035520: quorum acquired
Jul 15 08:51:18 vsanqa7 crmd[12372]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa8[2] - state is now member
Jul 15 08:51:18 vsanqa7 corosync[12081]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.120) ; members(old:1 left:0)
Jul 15 08:51:18 vsanqa7 corosync[12081]: [MAIN ] Completed service synchronization, ready to provide service.
Jul 15 08:51:30 vsanqa7 crmd[12372]: warning: match_down_event: No match for shutdown action on vsanqa8
Jul 15 08:51:30 vsanqa7 crmd[12372]: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
Jul 15 08:51:30 vsanqa7 cib[12367]: warning: cib_process_diff: Diff 0.2760.0 -> 0.2760.1 from vsanqa8 not applied to 0.2760.307: current "num_updates" is greater than required
Jul 15 08:51:31 vsanqa7 kernel: send_and_wait_for_client_info failed with -110 uuid=0x74
Jul 15 08:51:32 vsanqa7 attrd[12370]: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jul 15 08:51:32 vsanqa7 attrd[12370]: notice: attrd_trigger_update: Sending flush op to all hosts for: master-vha-7413ed6d-2a3b-4ffc-9cd0-b80778d7a839 (8)
Jul 15 08:51:32 vsanqa7 attrd[12370]: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jul 15 08:51:32 vsanqa7 pengine[12371]: notice: unpack_config: On loss of CCM Quorum: Ignore

>>> Why is vsanqa8 scheduled for shutdown? <<<

Jul 15 08:51:32 vsanqa7 pengine[12371]: notice: stage6: Scheduling Node vsanqa8 for shutdown
Jul 15 08:51:32 vsanqa7 pengine[12371]: notice: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-3530.bz2
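The transition input named above should capture exactly what the policy engine saw when it made this decision. If crm_simulate (from the standard pacemaker CLI tools) is available, the transition can be replayed with:

    crm_simulate -S -x /var/lib/pacemaker/pengine/pe-input-3530.bz2

where -x loads the saved policy-engine input and -S simulates the resulting transition.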
Node vsanqa8

Jul 15 08:51:18 vsanqa8 corosync[21392]: [MAIN ] Corosync Cluster Engine ('1.4.1'): started and ready to provide service.
Jul 15 08:51:18 vsanqa8 corosync[21392]: [MAIN ] Corosync built-in features: nss dbus rdma snmp
Jul 15 08:51:18 vsanqa8 corosync[21392]: [MAIN ] Successfully read config from /etc/cluster/cluster.conf
Jul 15 08:51:18 vsanqa8 corosync[21392]: [MAIN ] Successfully parsed cman config
Jul 15 08:51:18 vsanqa8 corosync[21392]: [TOTEM ] Initializing transport (UDP/IP Multicast).
Jul 15 08:51:18 vsanqa8 corosync[21392]: [TOTEM ] Initializing transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Jul 15 08:51:18 vsanqa8 corosync[21392]: [TOTEM ] The network interface [172.16.68.126] is now up.
Jul 15 08:51:18 vsanqa8 corosync[21392]: [QUORUM] Using quorum provider quorum_cman
Jul 15 08:51:18 vsanqa8 corosync[21392]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1
Jul 15 08:51:18 vsanqa8 corosync[21392]: [CMAN ] CMAN 3.0.12.1 (built Feb 22 2013 07:20:27) started
Jul 15 08:51:18 vsanqa8 corosync[21392]: [SERV ] Service engine loaded: corosync CMAN membership service 2.90
Jul 15 08:51:18 vsanqa8 corosync[21392]: [SERV ] Service engine loaded: openais checkpoint service B.01.01
Jul 15 08:51:18 vsanqa8 corosync[21392]: [SERV ] Service engine loaded: corosync extended virtual synchrony service
Jul 15 08:51:18 vsanqa8 corosync[21392]: [SERV ] Service engine loaded: corosync configuration service
Jul 15 08:51:18 vsanqa8 corosync[21392]: [SERV ] Service engine loaded: corosync cluster closed process group service v1.01
Jul 15 08:51:18 vsanqa8 corosync[21392]: [SERV ] Service engine loaded: corosync cluster config database access v1.01
Jul 15 08:51:18 vsanqa8 corosync[21392]: [SERV ] Service engine loaded: corosync profile loading service
Jul 15 08:51:18 vsanqa8 corosync[21392]: [QUORUM] Using quorum provider quorum_cman
Jul 15 08:51:18 vsanqa8 corosync[21392]: [SERV ] Service engine loaded: corosync cluster quorum service v0.1
Jul 15 08:51:18 vsanqa8 corosync[21392]: [MAIN ] Compatibility mode set to whitetank. Using V1 and V2 of the synchronization engine.
Jul 15 08:51:18 vsanqa8 corosync[21392]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 15 08:51:18 vsanqa8 corosync[21392]: [QUORUM] Members[1]: 2
Jul 15 08:51:18 vsanqa8 corosync[21392]: [QUORUM] Members[1]: 2
Jul 15 08:51:18 vsanqa8 corosync[21392]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.126) ; members(old:0 left:0)
Jul 15 08:51:18 vsanqa8 corosync[21392]: [MAIN ] Completed service synchronization, ready to provide service.
Jul 15 08:51:18 vsanqa8 corosync[21392]: [TOTEM ] A processor joined or left the membership and a new membership was formed.
Jul 15 08:51:18 vsanqa8 corosync[21392]: [CMAN ] quorum regained, resuming activity
Jul 15 08:51:18 vsanqa8 corosync[21392]: [QUORUM] This node is within the primary component and will provide service.
Jul 15 08:51:18 vsanqa8 corosync[21392]: [QUORUM] Members[2]: 1 2
Jul 15 08:51:18 vsanqa8 corosync[21392]: [QUORUM] Members[2]: 1 2
Jul 15 08:51:18 vsanqa8 corosync[21392]: [CPG ] chosen downlist: sender r(0) ip(172.16.68.120) ; members(old:1 left:0)
Jul 15 08:51:18 vsanqa8 corosync[21392]: [MAIN ] Completed service synchronization, ready to provide service.
Jul 15 08:51:22 vsanqa8 fenced[21447]: fenced 3.0.12.1 started
Jul 15 08:51:22 vsanqa8 dlm_controld[21467]: dlm_controld 3.0.12.1 started
Jul 15 08:51:23 vsanqa8 gfs_controld[21522]: gfs_controld 3.0.12.1 started
Jul 15 08:51:29 vsanqa8 pacemakerd[21673]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jul 15 08:51:29 vsanqa8 pacemakerd[21673]: notice: main: Starting Pacemaker 1.1.8-7.el6 (Build: 394e906): generated-manpages agent-manpages ascii-docs publican-docs ncurses libqb-logging libqb-ipc corosync-plugin cman
Jul 15 08:51:29 vsanqa8 pacemakerd[21673]: notice: update_node_processes: 0x13c1f80 Node 2 now known as vsanqa8, was:
Jul 15 08:51:29 vsanqa8 pacemakerd[21673]: notice: update_node_processes: 0x13be960 Node 1 now known as vsanqa7, was:
Jul 15 08:51:29 vsanqa8 lrmd[21681]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jul 15 08:51:29 vsanqa8 stonith-ng[21680]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jul 15 08:51:29 vsanqa8 cib[21679]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jul 15 08:51:29 vsanqa8 attrd[21682]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jul 15 08:51:29 vsanqa8 stonith-ng[21680]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Jul 15 08:51:29 vsanqa8 cib[21679]: notice: main: Using legacy config location: /var/lib/heartbeat/crm
Jul 15 08:51:29 vsanqa8 attrd[21682]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Jul 15 08:51:29 vsanqa8 crmd[21684]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jul 15 08:51:29 vsanqa8 crmd[21684]: notice: main: CRM Git Version: 394e906
Jul 15 08:51:29 vsanqa8 pengine[21683]: notice: crm_add_logfile: Additional logging available in /var/log/cluster/corosync.log
Jul 15 08:51:29 vsanqa8 attrd[21682]: notice: main: Starting mainloop...
Jul 15 08:51:29 vsanqa8 cib[21679]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Jul 15 08:51:30 vsanqa8 crmd[21684]: notice: crm_cluster_connect: Connecting to cluster infrastructure: cman
Jul 15 08:51:30 vsanqa8 stonith-ng[21680]: notice: setup_cib: Watching for stonith topology changes
Jul 15 08:51:30 vsanqa8 crmd[21684]: notice: cman_event_callback: Membership 4035520: quorum acquired
Jul 15 08:51:30 vsanqa8 crmd[21684]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa7[1] - state is now member
Jul 15 08:51:30 vsanqa8 crmd[21684]: notice: crm_update_peer_state: cman_event_callback: Node vsanqa8[2] - state is now member
Jul 15 08:51:30 vsanqa8 crmd[21684]: notice: do_started: The local CRM is operational
Jul 15 08:51:32 vsanqa8 crmd[21684]: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Jul 15 08:51:32 vsanqa8 attrd[21682]: notice: attrd_local_callback: Sending full refresh (origin=crmd)

>>>>> vsanqa8 is not expecting a shutdown request from vsanqa7 (see also the note after this excerpt) <<<<<

Jul 15 08:51:32 vsanqa8 crmd[21684]: error: handle_request: We didn't ask to be shut down, yet our DC is telling us too.
Jul 15 08:51:32 vsanqa8 crmd[21684]: notice: do_state_transition: State transition S_NOT_DC -> S_STOPPING [ input=I_STOP cause=C_HA_MESSAGE origin=route_message ]
Jul 15 08:51:33 vsanqa8 crmd[21684]: notice: process_lrm_event: LRM operation vha-7413ed6d-2a3b-4ffc-9cd0-b80778d7a839_monitor_0 (call=6, rc=7, cib-update=9, confirmed=true) not running
Jul 15 08:51:33 vsanqa8 crmd[21684]: notice: terminate_cs_connection: Disconnecting from Corosync
Jul 15 08:51:33 vsanqa8 cib[21679]: warning: qb_ipcs_event_sendv: new_event_notification (21679-21684-12): Broken pipe (32)
Jul 15 08:51:33 vsanqa8 crmd[21684]: warning: do_exit: Inhibiting respawn by Heartbeat
Jul 15 08:51:33 vsanqa8 cib[21679]: warning: do_local_notify: A-Sync reply to crmd failed: No message of desired type
Jul 15 08:51:33 vsanqa8 pacemakerd[21673]: error: pcmk_child_exit: Child process crmd exited (pid=21684, rc=100)
Jul 15 08:51:33 vsanqa8 pacemakerd[21673]: warning: pcmk_child_exit: Pacemaker child process crmd no longer wishes to be respawned. Shutting ourselves down.
Jul 15 08:51:33 vsanqa8 pacemakerd[21673]: notice: pcmk_shutdown_worker: Shuting down Pacemaker
Jul 15 08:51:33 vsanqa8 pacemakerd[21673]: notice: stop_child: Stopping pengine: Sent -15 to process 21683
Jul 15 08:51:33 vsanqa8 pacemakerd[21673]: notice: stop_child: Stopping attrd: Sent -15 to process 21682
Jul 15 08:51:33 vsanqa8 attrd[21682]: notice: main: Exiting...
Jul 15 08:51:33 vsanqa8 pacemakerd[21673]: notice: stop_child: Stopping lrmd: Sent -15 to process 21681
Jul 15 08:51:33 vsanqa8 pacemakerd[21673]: notice: stop_child: Stopping stonith-ng: Sent -15 to process 21680
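A note on the error above: my (unverified) understanding is that the policy engine schedules a node for shutdown when that node's "shutdown" transient attribute is set in the CIB status section, so one guess is that a stale attribute for vsanqa8 survived the rejoin. If so, it should be visible with something like:

    crm_attribute -N vsanqa8 -n shutdown -l reboot -G

(-l reboot queries the transient/status attribute rather than a permanent one.)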

========================= corosync.log ===============================================

vsanqa7

Jul 15 08:51:29 [12367] vsanqa7 cib: info: pcmk_cpg_membership: Member[3.1] cib.2
Jul 15 08:51:29 [12367] vsanqa7 cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node vsanqa8[2] - corosync-cpg is now online
Jul 15 08:51:30 [12372] vsanqa7 crmd: info: pcmk_cpg_membership: Joined[3.0] crmd.2
Jul 15 08:51:30 [12372] vsanqa7 crmd: info: pcmk_cpg_membership: Member[3.0] crmd.1
Jul 15 08:51:30 [12372] vsanqa7 crmd: info: pcmk_cpg_membership: Member[3.1] crmd.2
Jul 15 08:51:30 [12372] vsanqa7 crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vsanqa8[2] - corosync-cpg is now online
Jul 15 08:51:30 [12372] vsanqa7 crmd: info: peer_update_callback: Client vsanqa8/peer now has status [online] (DC=true)
Jul 15 08:51:30 [12372] vsanqa7 crmd: warning: match_down_event: No match for shutdown action on vsanqa8
Jul 15 08:51:30 [12372] vsanqa7 crmd: notice: do_state_transition: State transition S_IDLE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=peer_update_callback ]
Jul 15 08:51:30 [12372] vsanqa7 crmd: info: abort_transition_graph: do_te_invoke:163 - Triggered transition abort (complete=1) : Peer Halt
Jul 15 08:51:30 [12372] vsanqa7 crmd: info: join_make_offer: Making join offers based on membership 4035520
Jul 15 08:51:30 [12372] vsanqa7 crmd: info: do_dc_join_offer_all: join-2: Waiting on 2 outstanding join acks
Jul 15 08:51:30 [12372] vsanqa7 crmd: info: update_dc: Set DC to vsanqa7 (3.0.7)
Jul 15 08:51:30 [12367] vsanqa7 cib: warning: cib_process_diff: Diff 0.2760.0 -> 0.2760.1 from vsanqa8 not applied to 0.2760.307: current "num_updates" is greater than required
Jul 15 08:51:30 [12367] vsanqa7 cib: info: cib_process_request: Operation complete: op cib_sync_one for section 'all' (origin=vsanqa8/vsanqa8/(null), version=0.2760.307): OK (rc=0)
Jul 15 08:51:31 [12372] vsanqa7 crmd: info: do_dc_join_offer_all: A new node joined the cluster
Jul 15 08:51:31 [12372] vsanqa7 crmd: info: do_dc_join_offer_all: join-3: Waiting on 2 outstanding join acks
Jul 15 08:51:31 [12372] vsanqa7 crmd: info: update_dc: Set DC to vsanqa7 (3.0.7)
Jul 15 08:51:32 [12372] vsanqa7 crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node vsanqa8[2] - expected state is now member
Jul 15 08:51:32 [12372] vsanqa7 crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jul 15 08:51:32 [12372] vsanqa7 crmd: info: do_dc_join_finalize: join-3: Syncing the CIB from vsanqa7 to the rest of the cluster
Jul 15 08:51:32 [12367] vsanqa7 cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/43, version=0.2760.307): OK (rc=0)
Jul 15 08:51:32 [12367] vsanqa7 cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/44, version=0.2760.308): OK (rc=0)
Jul 15 08:51:32 [12372] vsanqa7 crmd: info: do_dc_join_ack: join-3: Updating node state to member for vsanqa7
Jul 15 08:51:32 [12372] vsanqa7 crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vsanqa7']/lrm
Jul 15 08:51:32 [12372] vsanqa7 crmd: info: do_dc_join_ack: join-3: Updating node state to member for vsanqa8
Jul 15 08:51:32 [12372] vsanqa7 crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vsanqa8']/lrm
Jul 15 08:51:32 [12367] vsanqa7 cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/45, version=0.2760.309): OK (rc=0)
Jul 15 08:51:32 [12367] vsanqa7 cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vsanqa7']/lrm (origin=local/crmd/46, version=0.2760.310): OK (rc=0)
Jul 15 08:51:32 [12367] vsanqa7 cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='vsanqa8']/lrm (origin=local/crmd/48, version=0.2760.312): OK (rc=0)
Jul 15 08:51:32 [12372] vsanqa7 crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jul 15 08:51:32 [12372] vsanqa7 crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
Jul 15 08:51:32 [12370] vsanqa7 attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
vsanqa8

Jul 15 08:51:30 [21679] vsanqa8 cib: info: cib_process_diff: Diff 0.2760.306 -> 0.2760.307 from vsanqa7 not applied to 0.2760.1: current "num_updates" is less than required
Jul 15 08:51:30 [21679] vsanqa8 cib: info: cib_server_process_diff: Requesting re-sync from peer
Jul 15 08:51:30 [21684] vsanqa8 crmd: info: do_started: Delaying start, Config not read (0000000000000040)
Jul 15 08:51:30 [21684] vsanqa8 crmd: info: do_started: Delaying start, Config not read (0000000000000040)
Jul 15 08:51:30 [21684] vsanqa8 crmd: info: qb_ipcs_us_publish: server name: crmd
Jul 15 08:51:30 [21684] vsanqa8 crmd: notice: do_started: The local CRM is operational
Jul 15 08:51:30 [21684] vsanqa8 crmd: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jul 15 08:51:30 [21679] vsanqa8 cib: info: cib_process_replace: Digest matched on replace from vsanqa7: 0936115228a36c943f181954830c9b2b
Jul 15 08:51:30 [21679] vsanqa8 cib: info: cib_process_replace: Replaced 0.2760.1 with 0.2760.307 from vsanqa7
Jul 15 08:51:30 [21679] vsanqa8 cib: info: cib_replace_notify: Replaced: 0.2760.1 -> 0.2760.307 from vsanqa7
Jul 15 08:51:31 [21684] vsanqa8 crmd: info: pcmk_cpg_membership: Joined[0.0] crmd.2
Jul 15 08:51:31 [21684] vsanqa8 crmd: info: pcmk_cpg_membership: Member[0.0] crmd.1
Jul 15 08:51:31 [21684] vsanqa8 crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node vsanqa7[1] - corosync-cpg is now online
Jul 15 08:51:31 [21684] vsanqa8 crmd: info: peer_update_callback: Client vsanqa7/peer now has status [online] (DC=<null>)
Jul 15 08:51:31 [21684] vsanqa8 crmd: info: pcmk_cpg_membership: Member[0.1] crmd.2
Jul 15 08:51:31 [21684] vsanqa8 crmd: info: update_dc: Set DC to vsanqa7 (3.0.7)
Jul 15 08:51:32 [21680] vsanqa8 stonith-ng: info: stonith_command: Processed register from crmd.21684: OK (0)
Jul 15 08:51:32 [21680] vsanqa8 stonith-ng: info: stonith_command: Processed st_notify from crmd.21684: OK (0)
Jul 15 08:51:32 [21680] vsanqa8 stonith-ng: info: stonith_command: Processed st_notify from crmd.21684: OK (0)
Jul 15 08:51:32 [21684] vsanqa8 crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='vsanqa8']/transient_attributes
Jul 15 08:51:32 [21684] vsanqa8 crmd: info: update_attrd: Connecting to attrd... 5 retries remaining
Jul 15 08:51:32 [21684] vsanqa8 crmd: notice: do_state_transition: State transition S_PENDING -> S_NOT_DC [ input=I_NOT_DC cause=C_HA_MESSAGE origin=do_cl_join_finalize_respond ]
Jul 15 08:51:32 [21679] vsanqa8 cib: info: cib_process_replace: Digest matched on replace from vsanqa7: 0936115228a36c943f181954830c9b2b
Jul 15 08:51:32 [21679] vsanqa8 cib: info: cib_process_replace: Replaced 0.2760.307 with 0.2760.307 from vsanqa7
Jul 15 08:51:32 [21682] vsanqa8 attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jul 15 08:51:32 [21681] vsanqa8 lrmd: info: process_lrmd_get_rsc_info: Resource 'vha-7413ed6d-2a3b-4ffc-9cd0-b80778d7a839' not found (0 active resources)
Jul 15 08:51:32 [21681] vsanqa8 lrmd: info: process_lrmd_get_rsc_info: Resource 'vha-7413ed6d-2a3b-4ffc-9cd0-b80778d7a839:0' not found (0 active resources)
Jul 15 08:51:32 [21681] vsanqa8 lrmd: info: process_lrmd_rsc_register: Added 'vha-7413ed6d-2a3b-4ffc-9cd0-b80778d7a839' to the rsc list (1 active resources)
Jul 15 08:51:32 [21684] vsanqa8 crmd: error: handle_request: We didn't ask to be shut down, yet our DC is telling us too.
Jul 15 08:51:32 [21684] vsanqa8 crmd: notice: do_state_transition: State transition S_NOT_DC -> S_STOPPING [ input=I_STOP cause=C_HA_MESSAGE origin=route_message ]
Jul 15 08:51:32 [21684] vsanqa8 crmd: info: do_shutdown: Disconnecting STONITH...
Jul 15 08:51:32 [21684] vsanqa8 crmd: info: tengine_stonith_connection_destroy: Fencing daemon disconnected
Jul 15 08:51:32 [21684] vsanqa8 crmd: info: verify_stopped: 1 pending LRM operations at shutdown... waiting
Jul 15 08:51:32 [21684] vsanqa8 crmd: info: ghash_print_pending: Pending action: vha-7413ed6d-2a3b-4ffc-9cd0-b80778d7a839:6 (vha-7413ed6d-2a3b-4ffc-9cd0-b80778d7a839_monitor_0)
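One more observation: the cib_process_diff warnings on both nodes (vsanqa8 at 0.2760.1 vs vsanqa7 at 0.2760.307) show that vsanqa8 rejoined with an older CIB and had to be re-synced by the DC. The version each node currently holds can be compared from the attributes on the root <cib> tag, e.g.:

    cibadmin -Q | head -1

which prints the local epoch/num_updates pair.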
Regards,
 Kiran