Jun  5 15:29:51 vm1 Dummy(prmDummy)[4337]: DEBUG: prmDummy monitor : 0
Jun  5 15:29:51 vm1 cib[4129]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: crm_client_new: Connecting 0x1942700 for uid=0 gid=0 pid=4364 id=2a2c8ba2-2188-4518-9f79-a9ef9fa489a1
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: stonith_command: Processed register from stonith_admin.4364: OK (0)
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: stonith_command: Processed st_notify from stonith_admin.4364: OK (0)
Jun  5 15:29:54 vm1 stonith-ng[4130]:   notice: handle_request: Client stonith_admin.4364.2a2c8ba2 wants to fence (reboot) 'vm2' with device '(any)'
Jun  5 15:29:54 vm1 stonith-ng[4130]:   notice: initiate_remote_stonith_op: Initiating remote operation reboot for vm2: 8b54c434-f4ee-4a9a-88c4-350432fe89d4 (0)
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: stonith_command: Processed st_fence from stonith_admin.4364: Operation now in progress (-115)
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: can_fence_host_with_device: st1 can fence vm2 (aka. 'iida-rhel64-2'): static-list
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: can_fence_host_with_device: st1:0 can fence vm2 (aka. 'iida-rhel64-2'): static-list
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: stonith_command: Processed st_query from vm1: OK (0)
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: process_remote_stonith_query: Query result 1 of 2 from vm2 (2 devices)
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: stonith_command: Processed st_query reply from vm2: OK (0)
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: process_remote_stonith_query: Query result 2 of 2 from vm1 (2 devices)
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: call_remote_stonith: Total remote op timeout set to 240 for fencing of node vm2 for stonith_admin.4364.8b54c434
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: call_remote_stonith: Requesting that vm1 perform op reboot vm2 for stonith_admin.4364 (288s)
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: stonith_command: Processed st_query reply from vm1: OK (0)
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: can_fence_host_with_device: st1 can fence vm2 (aka. 'iida-rhel64-2'): static-list
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: can_fence_host_with_device: st1:0 can fence vm2 (aka. 'iida-rhel64-2'): static-list
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: stonith_fence_get_devices_cb: Found 2 matching devices for 'vm2'
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: stonith_command: Processed st_fence from vm1: Operation now in progress (-115)
Jun  5 15:29:54 vm1 stonith-ng[4130]:     info: stonith_action_create: Initiating action reboot for agent fence_rhevm (target=vm2)
Jun  5 15:29:55 vm1 corosync[4108]:   [TOTEM ] timer_function_orf_token_timeout The token was lost in the OPERATIONAL state.
Jun  5 15:29:55 vm1 corosync[4108]:   [TOTEM ] timer_function_orf_token_timeout A processor failed, forming new configuration.
Jun  5 15:29:55 vm1 corosync[4108]:   [TOTEM ] totemudp_build_sockets_ip Receive multicast socket recv buffer size (320000 bytes).
Jun  5 15:29:55 vm1 corosync[4108]:   [TOTEM ] totemudp_build_sockets_ip Transmit multicast socket send buffer size (320000 bytes).
Jun  5 15:29:55 vm1 corosync[4108]:   [TOTEM ] totemudp_build_sockets_ip Local receive multicast loop socket recv buffer size (320000 bytes).
Jun  5 15:29:55 vm1 corosync[4108]:   [TOTEM ] totemudp_build_sockets_ip Local transmit multicast loop socket send buffer size (320000 bytes).
Jun  5 15:29:55 vm1 corosync[4108]:   [TOTEM ] totemudp_build_sockets_ip Receive multicast socket recv buffer size (320000 bytes).
Jun  5 15:29:55 vm1 corosync[4108]:   [TOTEM ] totemudp_build_sockets_ip Transmit multicast socket send buffer size (320000 bytes).
Jun  5 15:29:55 vm1 corosync[4108]:   [TOTEM ] totemudp_build_sockets_ip Local receive multicast loop socket recv buffer size (320000 bytes).
Jun  5 15:29:55 vm1 corosync[4108]:   [TOTEM ] totemudp_build_sockets_ip Local transmit multicast loop socket send buffer size (320000 bytes).
Jun  5 15:29:55 vm1 corosync[4108]:   [TOTEM ] memb_state_gather_enter entering GATHER state from 2.
Jun  5 15:29:56 vm1 attrd_updater[4374]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:29:56 vm1 attrd_updater[4374]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:29:56 vm1 attrd[4132]:     info: crm_client_new: Connecting 0x222c970 for uid=0 gid=0 pid=4374 id=eaae6bad-bea1-4a56-a6fd-7a8e114cbcb1
Jun  5 15:29:56 vm1 attrd[4132]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] memb_state_gather_enter entering GATHER state from 0.
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] memb_state_commit_token_create Creating commit token because I am the rep.
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] old_ring_state_save Saving state aru 88 high seq received 88
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] memb_ring_id_set_and_store Storing new sequence id for ring 178
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] memb_state_commit_enter entering COMMIT state.
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] memb_state_recovery_enter entering RECOVERY state.
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] memb_state_recovery_enter TRANS [0] member 192.168.101.131:
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] memb_state_recovery_enter position [0] member 192.168.101.131:
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] memb_state_recovery_enter previous ring seq 174 rep 192.168.101.131
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] memb_state_recovery_enter aru 88 high delivered 88 received flag 1
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] memb_state_recovery_enter Did not need to originate any messages in recovery.
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] message_handler_orf_token retrans flag count 4 token aru 0 install seq 0 aru 0 0
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] old_ring_state_reset Resetting old ring state
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] deliver_messages_from_recovery_to_regular recovery to regular 1-0
Jun  5 15:29:57 vm1 corosync[4108]:   [MAIN  ] member_object_left Member left: r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) 
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] decode_flags flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] recalculate_quorum total_votes=1, expected_votes=2
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] calculate_quorum node 2204477632 state=1, votes=1, expected=2
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] calculate_quorum node 2221254848 state=2, votes=1, expected=2
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] get_lowest_node_id lowest node id: -2090489664 us: -2090489664
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] totempg_waiting_trans_ack_cb waiting_trans_ack changed to 1
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] decode_flags flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:29:57 vm1 corosync[4108]:   [QUORUM] log_view_list Members[1]: -2090489664
Jun  5 15:29:57 vm1 corosync[4108]:   [QUORUM] send_library_notification sending quorum notification to (nil), length = 52
Jun  5 15:29:57 vm1 crmd[4134]:     info: pcmk_quorum_notification: Membership 376: quorum retained (1)
Jun  5 15:29:57 vm1 crmd[4134]:   notice: corosync_mark_unseen_peer_dead: Node -2073712448/vm2 was not seen in the previous transition
Jun  5 15:29:57 vm1 crmd[4134]:   notice: crm_update_peer_state: corosync_mark_unseen_peer_dead: Node vm2[2221254848] - state is now lost (was member)
Jun  5 15:29:57 vm1 crmd[4134]:     info: peer_update_callback: vm2 is now lost (was member)
Jun  5 15:29:57 vm1 crmd[4134]:  warning: match_down_event: No match for shutdown action on 2221254848
Jun  5 15:29:57 vm1 crmd[4134]:   notice: peer_update_callback: Stonith/shutdown of vm2 not matched
Jun  5 15:29:57 vm1 crmd[4134]:     info: crm_update_peer_join: erase_node_from_join: Node vm2[2221254848] - join-1 phase 4 -> 0
Jun  5 15:29:57 vm1 crmd[4134]:     info: abort_transition_graph: peer_update_callback:214 - Triggered transition abort (complete=1) : Node failure
Jun  5 15:29:57 vm1 crmd[4134]:     info: crm_cs_flush: Sent 0 CPG messages  (1 remaining, last=15): Try again
Jun  5 15:29:57 vm1 crmd[4134]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jun  5 15:29:57 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/36, version=0.20.21)
Jun  5 15:29:57 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/37, version=0.20.21)
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] memb_state_operational_enter entering OPERATIONAL state.
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] memb_state_operational_enter A processor joined or left the membership and a new membership (192.168.101.131:376) was formed.
Jun  5 15:29:57 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/38, version=0.20.22)
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2204477632
Jun  5 15:29:57 vm1 cib[4129]:     info: crm_cs_flush: Sent 0 CPG messages  (1 remaining, last=22): Try again
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[2204477632]: votes: 1, expected: 2 flags: 1
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] decode_flags flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] recalculate_quorum total_votes=1, expected_votes=2
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] calculate_quorum node 2204477632 state=1, votes=1, expected=2
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] calculate_quorum node 2221254848 state=2, votes=1, expected=2
Jun  5 15:29:57 vm1 cib[4129]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/39, version=0.20.22)
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] get_lowest_node_id lowest node id: -2090489664 us: -2090489664
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2204477632
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2204477632
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[2204477632]: votes: 1, expected: 2 flags: 1
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] decode_flags flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] recalculate_quorum total_votes=1, expected_votes=2
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] calculate_quorum node 2204477632 state=1, votes=1, expected=2
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] calculate_quorum node 2221254848 state=2, votes=1, expected=2
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] get_lowest_node_id lowest node id: -2090489664 us: -2090489664
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2204477632
Jun  5 15:29:57 vm1 corosync[4108]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Jun  5 15:29:57 vm1 corosync[4108]:   [SYNC  ] sync_barrier_handler Committing synchronization for corosync configuration map access
Jun  5 15:29:57 vm1 corosync[4108]:   [CMAP  ] cmap_sync_activate Not first sync -> no action
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] downlist_log comparing: sender r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ; members(old:2 left:1)
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] downlist_log chosen downlist: sender r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ; members(old:2 left:1)
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] downlist_master_choose_and_send left_list_entries:1
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] downlist_master_choose_and_send left_list[0] group:attrd\x00, ip:r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) , pid:2444
Jun  5 15:29:57 vm1 attrd[4132]:     info: pcmk_cpg_membership: Left[2.0] attrd.2221254848 
Jun  5 15:29:57 vm1 attrd[4132]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node vm2[2221254848] - corosync-cpg is now offline
Jun  5 15:29:57 vm1 attrd[4132]:     info: pcmk_cpg_membership: Member[2.0] attrd.2204477632 
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] downlist_master_choose_and_send left_list_entries:1
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] downlist_master_choose_and_send left_list[0] group:cib\x00, ip:r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) , pid:2441
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] downlist_master_choose_and_send left_list_entries:1
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] downlist_master_choose_and_send left_list[0] group:crmd\x00, ip:r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) , pid:2446
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] downlist_master_choose_and_send left_list_entries:1
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] downlist_master_choose_and_send left_list[0] group:pcmk\x00, ip:r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) , pid:2439
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] downlist_master_choose_and_send left_list_entries:1
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] downlist_master_choose_and_send left_list[0] group:stonith-ng\x00, ip:r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) , pid:2442
Jun  5 15:29:57 vm1 stonith-ng[4130]:     info: pcmk_cpg_membership: Left[2.0] stonith-ng.2221254848 
Jun  5 15:29:57 vm1 stonith-ng[4130]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node vm2[2221254848] - corosync-cpg is now offline
Jun  5 15:29:57 vm1 stonith-ng[4130]:     info: crm_cs_flush: Sent 0 CPG messages  (1 remaining, last=4): Try again
Jun  5 15:29:57 vm1 stonith-ng[4130]:     info: pcmk_cpg_membership: Member[2.0] stonith-ng.2204477632 
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] message_handler_req_exec_cpg_joinlist got joinlist message from node 8365a8c0
Jun  5 15:29:57 vm1 cib[4129]:     info: pcmk_cpg_membership: Left[1.0] cib.2221254848 
Jun  5 15:29:57 vm1 corosync[4108]:   [SYNC  ] sync_barrier_handler Committing synchronization for corosync cluster closed process group service v1.01
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] joinlist_inform_clients joinlist_messages[0] group:crmd\x00, ip:r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) , pid:4134
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] joinlist_inform_clients joinlist_messages[1] group:attrd\x00, ip:r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) , pid:4132
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] joinlist_inform_clients joinlist_messages[2] group:stonith-ng\x00, ip:r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) , pid:4130
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] joinlist_inform_clients joinlist_messages[3] group:cib\x00, ip:r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) , pid:4129
Jun  5 15:29:57 vm1 corosync[4108]:   [CPG   ] joinlist_inform_clients joinlist_messages[4] group:pcmk\x00, ip:r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) , pid:4127
Jun  5 15:29:57 vm1 corosync[4108]:   [MAIN  ] corosync_sync_completed Completed service synchronization, ready to provide service.
Jun  5 15:29:57 vm1 corosync[4108]:   [TOTEM ] totempg_waiting_trans_ack_cb waiting_trans_ack changed to 0
Jun  5 15:29:57 vm1 cib[4129]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node vm2[2221254848] - corosync-cpg is now offline
Jun  5 15:29:57 vm1 cib[4129]:     info: pcmk_cpg_membership: Member[1.0] cib.2204477632 
Jun  5 15:29:58 vm1 crmd[4134]:     info: pcmk_cpg_membership: Left[1.0] crmd.2221254848 
Jun  5 15:29:58 vm1 crmd[4134]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node vm2[2221254848] - corosync-cpg is now offline
Jun  5 15:29:58 vm1 crmd[4134]:     info: peer_update_callback: Client vm2/peer now has status [offline] (DC=true)
Jun  5 15:29:58 vm1 crmd[4134]:  warning: match_down_event: No match for shutdown action on 2221254848
Jun  5 15:29:58 vm1 crmd[4134]:   notice: peer_update_callback: Stonith/shutdown of vm2 not matched
Jun  5 15:29:58 vm1 crmd[4134]:     info: abort_transition_graph: peer_update_callback:214 - Triggered transition abort (complete=1) : Node failure
Jun  5 15:29:58 vm1 crmd[4134]:     info: pcmk_cpg_membership: Member[1.0] crmd.2204477632 
Jun  5 15:29:58 vm1 crmd[4134]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:29:58 vm1 crmd[4134]:     info: do_dc_join_offer_one: An unknown node joined - (re-)offer to any unconfirmed nodes
Jun  5 15:29:58 vm1 crmd[4134]:     info: join_make_offer: Making join offers based on membership 376
Jun  5 15:29:58 vm1 crmd[4134]:     info: join_make_offer: Skipping vm1: already known 4
Jun  5 15:29:58 vm1 cib[4129]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/40, version=0.20.22)
Jun  5 15:29:58 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/41, version=0.20.23)
Jun  5 15:29:58 vm1 crmd[4134]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:29:58 vm1 crmd[4134]:     info: crmd_join_phase_log: join-1: vm1=confirmed
Jun  5 15:29:58 vm1 crmd[4134]:     info: crmd_join_phase_log: join-1: vm2=none
Jun  5 15:29:58 vm1 crmd[4134]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:29:58 vm1 crmd[4134]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:29:58 vm1 attrd[4132]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:29:58 vm1 attrd[4132]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:29:58 vm1 attrd[4132]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:29:58 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/42, version=0.20.23)
Jun  5 15:29:58 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/43, version=0.20.24)
Jun  5 15:29:58 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/44, version=0.20.24)
Jun  5 15:29:58 vm1 cib[4129]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/45, version=0.20.24)
Jun  5 15:29:58 vm1 cib[4129]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/10, version=0.20.24)
Jun  5 15:29:58 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/11, version=0.20.24)
Jun  5 15:29:58 vm1 cib[4129]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: OK (rc=0, origin=local/attrd/12, version=0.20.24)
Jun  5 15:29:58 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/13, version=0.20.24)
Jun  5 15:29:58 vm1 stonith-ng[4130]:   notice: log_operation: Operation 'reboot' [4365] (call 2 from stonith_admin.4364) for host 'vm2' with device 'st1' returned: 0 (OK)
Jun  5 15:29:58 vm1 stonith-ng[4130]:     info: log_operation: st1:4365 [ Parse error: Ignoring unknown option 'nodename=vm2' ]
Jun  5 15:29:58 vm1 stonith-ng[4130]:     info: log_operation: st1:4365 [ Success: Rebooted ]
Jun  5 15:29:58 vm1 stonith-ng[4130]:     info: stonith_command: Processed st_fence reply from vm1: OK (0)
Jun  5 15:29:58 vm1 stonith-ng[4130]:   notice: remote_op_done: Operation reboot of vm2 by vm1 for stonith_admin.4364@vm1.8b54c434: OK
Jun  5 15:29:58 vm1 stonith-ng[4130]:     info: stonith_command: Processed st_notify reply from vm1: OK (0)
Jun  5 15:29:58 vm1 stonith-ng[4130]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:29:59 vm1 crmd[4134]:   notice: tengine_stonith_notify: Peer vm2 was terminated (reboot) by vm1 for vm1: OK (ref=8b54c434-f4ee-4a9a-88c4-350432fe89d4) by client stonith_admin.4364
Jun  5 15:29:59 vm1 crmd[4134]:     info: crm_update_peer_proc: send_stonith_update: Node vm2[2221254848] - all processes are now offline
Jun  5 15:29:59 vm1 crmd[4134]:     info: peer_update_callback: Client vm2/peer now has status [offline] (DC=true)
Jun  5 15:29:59 vm1 crmd[4134]:     info: crm_update_peer_expected: send_stonith_update: Node vm2[2221254848] - expected state is now down
Jun  5 15:29:59 vm1 crmd[4134]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm2']/lrm
Jun  5 15:29:59 vm1 crmd[4134]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm2']/transient_attributes
Jun  5 15:29:59 vm1 crmd[4134]:     info: tengine_stonith_notify: External fencing operation from stonith_admin.4364 fenced vm2
Jun  5 15:29:59 vm1 crmd[4134]:     info: abort_transition_graph: tengine_stonith_notify:172 - Triggered transition abort (complete=1) : External Fencing Operation
Jun  5 15:29:59 vm1 cib[4129]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/46, version=0.20.24)
Jun  5 15:29:59 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/47, version=0.20.25)
Jun  5 15:29:59 vm1 cib[4129]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm2']/lrm: OK (rc=0, origin=local/crmd/48, version=0.20.26)
Jun  5 15:29:59 vm1 crmd[4134]:     info: abort_transition_graph: te_update_diff:258 - Triggered transition abort (complete=1, node=vm2, tag=lrm_rsc_op, id=st1_last_0, magic=0:0;12:0:0:74a601cb-307b-4fe0-a62c-d4436cdc7a48, cib=0.20.26) : Resource op removal
Jun  5 15:29:59 vm1 crmd[4134]:     info: abort_transition_graph: te_update_diff:188 - Triggered transition abort (complete=1, node=vm2, tag=transient_attributes, id=2221254848, magic=NA, cib=0.20.27) : Transient attribute: removal
Jun  5 15:29:59 vm1 cib[4129]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm2']/transient_attributes: OK (rc=0, origin=local/crmd/49, version=0.20.27)
Jun  5 15:29:59 vm1 cib[4129]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/50, version=0.20.27)
Jun  5 15:29:59 vm1 cib[4129]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/51, version=0.20.27)
Jun  5 15:29:59 vm1 cib[4129]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/52, version=0.20.27)
Jun  5 15:29:59 vm1 pengine[4133]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:29:59 vm1 pengine[4133]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:29:59 vm1 pengine[4133]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:29:59 vm1 pengine[4133]:     info: determine_online_status: Node vm1 is online
Jun  5 15:29:59 vm1 pengine[4133]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:29:59 vm1 pengine[4133]:     info: short_print:      Started: [ vm1 ]
Jun  5 15:29:59 vm1 pengine[4133]:     info: short_print:      Stopped: [ vm2 ]
Jun  5 15:29:59 vm1 pengine[4133]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Started vm1 
Jun  5 15:29:59 vm1 pengine[4133]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:29:59 vm1 pengine[4133]:     info: short_print:      Started: [ vm1 ]
Jun  5 15:29:59 vm1 pengine[4133]:     info: short_print:      Stopped: [ vm2 ]
Jun  5 15:29:59 vm1 pengine[4133]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:29:59 vm1 pengine[4133]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:29:59 vm1 pengine[4133]:     info: LogActions: Leave   st1:0#011(Started vm1)
Jun  5 15:29:59 vm1 pengine[4133]:     info: LogActions: Leave   st1:1#011(Stopped)
Jun  5 15:29:59 vm1 pengine[4133]:     info: LogActions: Leave   prmDummy#011(Started vm1)
Jun  5 15:29:59 vm1 pengine[4133]:     info: LogActions: Leave   prmPing:0#011(Started vm1)
Jun  5 15:29:59 vm1 pengine[4133]:     info: LogActions: Leave   prmPing:1#011(Stopped)
Jun  5 15:29:59 vm1 crmd[4134]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jun  5 15:29:59 vm1 pengine[4133]:   notice: process_pe_message: Calculated Transition 3: /var/lib/pacemaker/pengine/pe-input-3.bz2
Jun  5 15:29:59 vm1 crmd[4134]:     info: do_te_invoke: Processing graph 3 (ref=pe_calc-dc-1370413799-29) derived from /var/lib/pacemaker/pengine/pe-input-3.bz2
Jun  5 15:29:59 vm1 crmd[4134]:   notice: run_graph: Transition 3 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-3.bz2): Complete
Jun  5 15:29:59 vm1 crmd[4134]:     info: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Jun  5 15:29:59 vm1 crmd[4134]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jun  5 15:29:59 vm1 crmd[4134]:     info: cib_fencing_updated: Fencing update 47 for vm2: complete
Jun  5 15:30:01 vm1 Dummy(prmDummy)[4377]: DEBUG: prmDummy monitor : 0
Jun  5 15:30:08 vm1 attrd_updater[4406]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:30:08 vm1 attrd_updater[4406]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:30:08 vm1 attrd[4132]:     info: crm_client_new: Connecting 0x222c970 for uid=0 gid=0 pid=4406 id=02a8e28c-2b20-45a6-ae8b-3a7113db2863
Jun  5 15:30:08 vm1 attrd[4132]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:30:11 vm1 Dummy(prmDummy)[4407]: DEBUG: prmDummy monitor : 0
Jun  5 15:30:20 vm1 attrd_updater[4436]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:30:20 vm1 attrd_updater[4436]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:30:20 vm1 attrd[4132]:     info: crm_client_new: Connecting 0x222c970 for uid=0 gid=0 pid=4436 id=a0b8ea21-8eb2-436a-bada-12ece730cdb7
Jun  5 15:30:20 vm1 attrd[4132]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:30:21 vm1 Dummy(prmDummy)[4437]: DEBUG: prmDummy monitor : 0
Jun  5 15:30:23 vm1 pacemakerd[4127]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:30:24 vm1 pacemakerd[4127]:   notice: pcmk_shutdown_worker: Shuting down Pacemaker
Jun  5 15:30:24 vm1 pacemakerd[4127]:   notice: stop_child: Stopping crmd: Sent -15 to process 4134
Jun  5 15:30:24 vm1 crmd[4134]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:30:24 vm1 crmd[4134]:   notice: crm_shutdown: Requesting shutdown, upper limit is 1200000ms
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_log: FSA: Input I_SHUTDOWN from crm_shutdown() received in state S_IDLE
Jun  5 15:30:24 vm1 crmd[4134]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_shutdown_req: Sending shutdown request to vm1
Jun  5 15:30:24 vm1 crmd[4134]:     info: handle_shutdown_request: Creating shutdown request for vm1 (state=S_POLICY_ENGINE)
Jun  5 15:30:24 vm1 attrd[4132]:   notice: attrd_trigger_update: Sending flush op to all hosts for: shutdown (1370413824)
Jun  5 15:30:24 vm1 cib[4129]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='shutdown']: No such device or address (rc=-6, origin=local/attrd/14, version=0.20.27)
Jun  5 15:30:24 vm1 cib[4129]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/15, version=0.20.27)
Jun  5 15:30:24 vm1 attrd[4132]:   notice: attrd_perform_update: Sent update 16: shutdown=1370413824
Jun  5 15:30:24 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/16, version=0.20.28)
Jun  5 15:30:24 vm1 crmd[4134]:     info: abort_transition_graph: te_update_diff:172 - Triggered transition abort (complete=1, node=vm1, tag=nvpair, id=status-2204477632-shutdown, name=shutdown, value=1370413824, magic=NA, cib=0.20.28) : Transient attribute: update
Jun  5 15:30:24 vm1 cib[4129]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/53, version=0.20.28)
Jun  5 15:30:24 vm1 pengine[4133]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:30:24 vm1 pengine[4133]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:30:24 vm1 pengine[4133]:     info: determine_online_status: Node vm1 is shutting down
Jun  5 15:30:24 vm1 pengine[4133]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:30:24 vm1 pengine[4133]:     info: short_print:      Started: [ vm1 ]
Jun  5 15:30:24 vm1 pengine[4133]:     info: short_print:      Stopped: [ vm2 ]
Jun  5 15:30:24 vm1 pengine[4133]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Started vm1 
Jun  5 15:30:24 vm1 pengine[4133]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:30:24 vm1 pengine[4133]:     info: short_print:      Started: [ vm1 ]
Jun  5 15:30:24 vm1 pengine[4133]:     info: short_print:      Stopped: [ vm2 ]
Jun  5 15:30:24 vm1 pengine[4133]:     info: native_color: Resource st1:0 cannot run anywhere
Jun  5 15:30:24 vm1 pengine[4133]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:30:24 vm1 pengine[4133]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:30:24 vm1 pengine[4133]:     info: native_color: Resource prmPing:0 cannot run anywhere
Jun  5 15:30:24 vm1 pengine[4133]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:30:24 vm1 pengine[4133]:   notice: stage6: Scheduling Node vm1 for shutdown
Jun  5 15:30:24 vm1 pengine[4133]:   notice: LogActions: Stop    st1:0#011(vm1)
Jun  5 15:30:24 vm1 pengine[4133]:     info: LogActions: Leave   st1:1#011(Stopped)
Jun  5 15:30:24 vm1 pengine[4133]:   notice: LogActions: Stop    prmDummy#011(vm1)
Jun  5 15:30:24 vm1 pengine[4133]:   notice: LogActions: Stop    prmPing:0#011(vm1)
Jun  5 15:30:24 vm1 pengine[4133]:     info: LogActions: Leave   prmPing:1#011(Stopped)
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_te_invoke: Processing graph 4 (ref=pe_calc-dc-1370413824-31) derived from /var/lib/pacemaker/pengine/pe-input-4.bz2
Jun  5 15:30:24 vm1 crmd[4134]:   notice: te_rsc_command: Initiating action 11: stop prmDummy_stop_0 on vm1 (local)
Jun  5 15:30:24 vm1 lrmd[4131]:     info: cancel_recurring_action: Cancelling operation prmDummy_monitor_10000
Jun  5 15:30:24 vm1 pengine[4133]:   notice: process_pe_message: Calculated Transition 4: /var/lib/pacemaker/pengine/pe-input-4.bz2
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_lrm_rsc_op: Performing key=11:4:0:74a601cb-307b-4fe0-a62c-d4436cdc7a48 op=prmDummy_stop_0
Jun  5 15:30:24 vm1 lrmd[4131]:     info: log_execute: executing - rsc:prmDummy action:stop call_id:36
Jun  5 15:30:24 vm1 crmd[4134]:     info: process_lrm_event: LRM operation prmDummy_monitor_10000 (call=32, status=1, cib-update=0, confirmed=false) Cancelled
Jun  5 15:30:24 vm1 crmd[4134]:   notice: te_rsc_command: Initiating action 6: stop st1_stop_0 on vm1 (local)
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_lrm_rsc_op: Performing key=6:4:0:74a601cb-307b-4fe0-a62c-d4436cdc7a48 op=st1_stop_0
Jun  5 15:30:24 vm1 lrmd[4131]:     info: log_execute: executing - rsc:st1 action:stop call_id:39
Jun  5 15:30:24 vm1 stonith-ng[4130]:     info: stonith_device_remove: Removed 'st1' from the device list (1 active devices)
Jun  5 15:30:24 vm1 stonith-ng[4130]:     info: stonith_command: Processed st_device_remove from lrmd.4131: OK (0)
Jun  5 15:30:24 vm1 lrmd[4131]:     info: log_finished: finished - rsc:st1 action:stop call_id:39  exit-code:0 exec-time:1ms queue-time:0ms
Jun  5 15:30:24 vm1 crmd[4134]:   notice: te_rsc_command: Initiating action 12: stop prmPing_stop_0 on vm1 (local)
Jun  5 15:30:24 vm1 lrmd[4131]:     info: cancel_recurring_action: Cancelling operation prmPing_monitor_10000
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_lrm_rsc_op: Performing key=12:4:0:74a601cb-307b-4fe0-a62c-d4436cdc7a48 op=prmPing_stop_0
Jun  5 15:30:24 vm1 lrmd[4131]:     info: log_execute: executing - rsc:prmPing action:stop call_id:42
Jun  5 15:30:24 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/54, version=0.20.29)
Jun  5 15:30:24 vm1 crmd[4134]:   notice: process_lrm_event: LRM operation st1_stop_0 (call=39, rc=0, cib-update=54, confirmed=true) ok
Jun  5 15:30:24 vm1 crmd[4134]:     info: process_lrm_event: LRM operation prmPing_monitor_10000 (call=26, status=1, cib-update=0, confirmed=false) Cancelled
Jun  5 15:30:24 vm1 crmd[4134]:     info: match_graph_event: Action st1_stop_0 (6) confirmed on vm1 (rc=0)
Jun  5 15:30:24 vm1 Dummy(prmDummy)[4464]: DEBUG: prmDummy stop : 0
Jun  5 15:30:24 vm1 lrmd[4131]:     info: log_finished: finished - rsc:prmDummy action:stop call_id:36 pid:4464 exit-code:0 exec-time:34ms queue-time:0ms
Jun  5 15:30:24 vm1 crmd[4134]:   notice: process_lrm_event: LRM operation prmDummy_stop_0 (call=36, rc=0, cib-update=55, confirmed=true) ok
Jun  5 15:30:24 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/55, version=0.20.30)
Jun  5 15:30:24 vm1 crmd[4134]:     info: match_graph_event: Action prmDummy_stop_0 (11) confirmed on vm1 (rc=0)
Jun  5 15:30:24 vm1 attrd_updater[4485]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:30:24 vm1 attrd_updater[4485]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:30:24 vm1 attrd[4132]:     info: crm_client_new: Connecting 0x222c970 for uid=0 gid=0 pid=4485 id=4a92eb58-3806-457c-8f7d-d83f26950837
Jun  5 15:30:24 vm1 lrmd[4131]:     info: log_finished: finished - rsc:prmPing action:stop call_id:42 pid:4465 exit-code:0 exec-time:44ms queue-time:0ms
Jun  5 15:30:24 vm1 crmd[4134]:   notice: process_lrm_event: LRM operation prmPing_stop_0 (call=42, rc=0, cib-update=56, confirmed=true) ok
Jun  5 15:30:24 vm1 cib[4129]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/56, version=0.20.31)
Jun  5 15:30:24 vm1 crmd[4134]:     info: match_graph_event: Action prmPing_stop_0 (12) confirmed on vm1 (rc=0)
Jun  5 15:30:24 vm1 crmd[4134]:     info: te_crm_command: Executing crm-event (18): do_shutdown on vm1
Jun  5 15:30:24 vm1 crmd[4134]:     info: te_crm_command: crm-event (18) is a local shutdown
Jun  5 15:30:24 vm1 crmd[4134]:   notice: run_graph: Transition 4 (Complete=9, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-4.bz2): Complete
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_log: FSA: Input I_STOP from notify_crmd() received in state S_TRANSITION_ENGINE
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_STOPPING [ input=I_STOP cause=C_FSA_INTERNAL origin=notify_crmd ]
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_dc_release: DC role released
Jun  5 15:30:24 vm1 crmd[4134]:     info: pe_ipc_destroy: Connection to the Policy Engine released
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_te_control: Transitioner is now inactive
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_shutdown: Disconnecting STONITH...
Jun  5 15:30:24 vm1 crmd[4134]:     info: tengine_stonith_connection_destroy: Fencing daemon disconnected
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_lrm_control: Disconnecting from the LRM
Jun  5 15:30:24 vm1 crmd[4134]:     info: lrmd_api_disconnect: Disconnecting from lrmd service
Jun  5 15:30:24 vm1 lrmd[4131]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:30:24 vm1 crmd[4134]:     info: lrmd_ipc_connection_destroy: IPC connection destroyed
Jun  5 15:30:24 vm1 crmd[4134]:     info: lrm_connection_destroy: LRM Connection disconnected
Jun  5 15:30:24 vm1 crmd[4134]:     info: lrmd_api_disconnect: Disconnecting from lrmd service
Jun  5 15:30:24 vm1 crmd[4134]:   notice: do_lrm_control: Disconnected from the LRM
Jun  5 15:30:24 vm1 crmd[4134]:     info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
Jun  5 15:30:24 vm1 crmd[4134]:   notice: terminate_cs_connection: Disconnecting from Corosync
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] message_handler_req_lib_cpg_leave got leave request on 0x7f1bac9957c0
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] message_handler_req_lib_cpg_finalize cpg finalize for conn=0x7f1bac9957c0
Jun  5 15:30:24 vm1 crmd[4134]:     info: crm_cluster_disconnect: Disconnected from corosync
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4108-4134-29)
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4108-4134-29) state:2
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7f1bac9957c0
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4108-4134-29-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4108-4134-29-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4108-4134-29-header
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_ha_control: Disconnected from the cluster
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_cib_control: Waiting for resource update 56 to complete
Jun  5 15:30:24 vm1 crmd[4134]:  warning: do_log: FSA: Input I_RELEASE_SUCCESS from do_dc_release() received in state S_STOPPING
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4108-4134-30)
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_cib_control: Waiting for resource update 56 to complete
Jun  5 15:30:24 vm1 crmd[4134]:     info: crmd_quorum_destroy: connection closed
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_cib_control: Disconnecting CIB
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4108-4134-30) state:2
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7f1bac999840
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4108-4134-30-header
Jun  5 15:30:24 vm1 cib[4129]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:30:24 vm1 cib[4129]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/57, version=0.20.31)
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4108-4134-30-header
Jun  5 15:30:24 vm1 cib[4129]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4108-4134-30-header
Jun  5 15:30:24 vm1 crmd[4134]:     info: crmd_cib_connection_destroy: Connection to the CIB terminated...
Jun  5 15:30:24 vm1 pengine[4133]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:30:24 vm1 stonith-ng[4130]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:30:24 vm1 attrd[4132]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:30:24 vm1 crmd[4134]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
Jun  5 15:30:24 vm1 crmd[4134]:     info: do_exit: [crmd] stopped (0)
Jun  5 15:30:24 vm1 crmd[4134]:     info: crmd_exit: Dropping I_TERMINATE: [ state=S_STOPPING cause=C_FSA_INTERNAL origin=do_stop ]
Jun  5 15:30:24 vm1 attrd[4132]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:30:24 vm1 crmd[4134]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:30:24 vm1 pacemakerd[4127]:     info: pcmk_child_exit: Child process crmd exited (pid=4134, rc=0)
Jun  5 15:30:24 vm1 pacemakerd[4127]:   notice: stop_child: Stopping pengine: Sent -15 to process 4133
Jun  5 15:30:24 vm1 pengine[4133]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:30:24 vm1 pengine[4133]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:30:24 vm1 pengine[4133]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:30:24 vm1 pacemakerd[4127]:     info: pcmk_child_exit: Child process pengine exited (pid=4133, rc=0)
Jun  5 15:30:24 vm1 pacemakerd[4127]:   notice: stop_child: Stopping attrd: Sent -15 to process 4132
Jun  5 15:30:24 vm1 attrd[4132]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:30:24 vm1 attrd[4132]:     info: attrd_shutdown: Exiting
Jun  5 15:30:24 vm1 attrd[4132]:   notice: main: Exiting...
Jun  5 15:30:24 vm1 attrd[4132]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:30:24 vm1 cib[4129]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:30:24 vm1 attrd[4132]:     info: attrd_cib_connection_destroy: Connection to the CIB terminated...
Jun  5 15:30:24 vm1 attrd[4132]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:30:24 vm1 pacemakerd[4127]:     info: pcmk_child_exit: Child process attrd exited (pid=4132, rc=0)
Jun  5 15:30:24 vm1 pacemakerd[4127]:   notice: stop_child: Stopping lrmd: Sent -15 to process 4131
Jun  5 15:30:24 vm1 lrmd[4131]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:30:24 vm1 lrmd[4131]:     info: lrmd_shutdown: Terminating with  0 clients
Jun  5 15:30:24 vm1 lrmd[4131]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:30:24 vm1 lrmd[4131]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:30:24 vm1 stonith-ng[4130]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:30:24 vm1 pacemakerd[4127]:     info: pcmk_child_exit: Child process lrmd exited (pid=4131, rc=0)
Jun  5 15:30:24 vm1 pacemakerd[4127]:   notice: stop_child: Stopping stonith-ng: Sent -15 to process 4130
Jun  5 15:30:24 vm1 stonith-ng[4130]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:30:24 vm1 stonith-ng[4130]:     info: stonith_shutdown: Terminating with  0 clients
Jun  5 15:30:24 vm1 stonith-ng[4130]:     info: cib_connection_destroy: Connection to the CIB closed.
Jun  5 15:30:24 vm1 stonith-ng[4130]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:30:24 vm1 stonith-ng[4130]:     info: main: Done
Jun  5 15:30:24 vm1 stonith-ng[4130]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:30:24 vm1 cib[4129]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:30:24 vm1 pacemakerd[4127]:     info: pcmk_child_exit: Child process stonith-ng exited (pid=4130, rc=0)
Jun  5 15:30:24 vm1 pacemakerd[4127]:   notice: stop_child: Stopping cib: Sent -15 to process 4129
Jun  5 15:30:24 vm1 cib[4129]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:30:24 vm1 cib[4129]:     info: cib_shutdown: Disconnected 0 clients
Jun  5 15:30:24 vm1 cib[4129]:     info: cib_shutdown: All clients disconnected (0)
Jun  5 15:30:24 vm1 cib[4129]:     info: terminate_cib: initiate_exit: Disconnecting from cluster infrastructure
Jun  5 15:30:24 vm1 cib[4129]:     info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
Jun  5 15:30:24 vm1 cib[4129]:   notice: terminate_cs_connection: Disconnecting from Corosync
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4108-4132-26)
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4108-4132-26) state:2
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7f1bac98c240
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4108-4132-26-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4108-4132-26-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4108-4132-26-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4108-4130-27)
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4108-4130-27) state:2
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7f1bac98d2b0
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4108-4130-27-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4108-4130-27-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4108-4130-27-header
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] message_handler_req_lib_cpg_leave got leave request on 0x7f1bac993850
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] message_handler_req_lib_cpg_finalize cpg finalize for conn=0x7f1bac993850
Jun  5 15:30:24 vm1 cib[4129]:     info: terminate_cs_connection: No Quorum connection
Jun  5 15:30:24 vm1 cib[4129]:     info: crm_cluster_disconnect: Disconnected from corosync
Jun  5 15:30:24 vm1 cib[4129]:     info: terminate_cib: initiate_exit: Exiting from mainloop...
Jun  5 15:30:24 vm1 cib[4129]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:30:24 vm1 cib[4129]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:30:24 vm1 cib[4129]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:30:24 vm1 cib[4129]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:30:24 vm1 pacemakerd[4127]:     info: pcmk_child_exit: Child process cib exited (pid=4129, rc=0)
Jun  5 15:30:24 vm1 pacemakerd[4127]:   notice: pcmk_shutdown_worker: Shutdown complete
Jun  5 15:30:24 vm1 pacemakerd[4127]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4108-4129-28)
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4108-4129-28) state:2
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7f1bac993850
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4108-4129-28-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4108-4129-28-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4108-4129-28-header
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] message_handler_req_lib_cpg_finalize cpg finalize for conn=0x7f1bac789c40
Jun  5 15:30:24 vm1 pacemakerd[4127]:     info: main: Exiting pacemakerd
Jun  5 15:30:24 vm1 pacemakerd[4127]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4108-4127-25)
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4108-4127-25) state:2
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7f1bac789c40
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4108-4127-25-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4108-4127-25-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4108-4127-25-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4108-4127-24)
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4108-4127-24) state:2
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cfg-response-4108-4127-24-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cfg-event-4108-4127-24-header
Jun  5 15:30:24 vm1 corosync[4108]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cfg-request-4108-4127-24-header
Jun  5 15:30:24 vm1 corosync[4108]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:30:25 vm1 corosync[4108]:   [SERV  ] service_exit_schedwrk_handler Unloading all Corosync service engines.
Jun  5 15:30:25 vm1 corosync[4108]:   [QB    ] qb_ipcs_unref qb_ipcs_unref() - destroying
Jun  5 15:30:25 vm1 corosync[4108]:   [QB    ] qb_ipcs_us_withdraw withdrawing server sockets
Jun  5 15:30:25 vm1 corosync[4108]:   [SERV  ] corosync_service_unlink_and_exit_priority Service engine unloaded: corosync vote quorum service v1.0
Jun  5 15:30:26 vm1 corosync[4108]:   [QB    ] qb_ipcs_unref qb_ipcs_unref() - destroying
Jun  5 15:30:26 vm1 corosync[4108]:   [QB    ] qb_ipcs_us_withdraw withdrawing server sockets
Jun  5 15:30:26 vm1 corosync[4108]:   [SERV  ] corosync_service_unlink_and_exit_priority Service engine unloaded: corosync configuration map access
Jun  5 15:30:26 vm1 corosync[4108]:   [QB    ] qb_ipcs_unref qb_ipcs_unref() - destroying
Jun  5 15:30:26 vm1 corosync[4108]:   [QB    ] qb_ipcs_us_withdraw withdrawing server sockets
Jun  5 15:30:26 vm1 corosync[4108]:   [SERV  ] corosync_service_unlink_and_exit_priority Service engine unloaded: corosync configuration service
Jun  5 15:30:26 vm1 corosync[4108]:   [QB    ] qb_ipcs_unref qb_ipcs_unref() - destroying
Jun  5 15:30:26 vm1 corosync[4108]:   [QB    ] qb_ipcs_us_withdraw withdrawing server sockets
Jun  5 15:30:26 vm1 corosync[4108]:   [SERV  ] corosync_service_unlink_and_exit_priority Service engine unloaded: corosync cluster closed process group service v1.01
Jun  5 15:30:26 vm1 corosync[4108]:   [QB    ] qb_ipcs_unref qb_ipcs_unref() - destroying
Jun  5 15:30:26 vm1 corosync[4108]:   [QB    ] qb_ipcs_us_withdraw withdrawing server sockets
Jun  5 15:30:26 vm1 corosync[4108]:   [SERV  ] corosync_service_unlink_and_exit_priority Service engine unloaded: corosync cluster quorum service v0.1
Jun  5 15:30:26 vm1 corosync[4108]:   [SERV  ] corosync_service_unlink_and_exit_priority Service engine unloaded: corosync profile loading service
Jun  5 15:30:26 vm1 corosync[4108]:   [SERV  ] corosync_service_unlink_and_exit_priority Service engine unloaded: corosync watchdog service
Jun  5 15:30:26 vm1 corosync[4108]:   [TOTEM ] memb_leave_message_send sending join/leave message
Jun  5 15:30:26 vm1 corosync[4108]:   [MAIN  ] _corosync_exit_error Corosync Cluster Engine exiting normally
Jun  5 15:30:53 vm1 corosync[4554]:   [MAIN  ] main Corosync Cluster Engine ('2.3.0'): started and ready to provide service.
Jun  5 15:30:53 vm1 corosync[4554]:   [MAIN  ] main Corosync built-in features: debug watchdog pie relro bindnow
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totempg_waiting_trans_ack_cb waiting_trans_ack changed to 1
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize Token Timeout (1000 ms) retransmit timeout (238 ms)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize token hold (180 ms) retransmits before loss (4 retrans)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize join (50 ms) send_join (0 ms) consensus (1200 ms) merge (200 ms)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize downcheck (1000 ms) fail to recv const (2500 msgs)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize seqno unchanged const (30 rotations) Maximum network MTU 1401
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize window size per rotation (50 messages) maximum messages per rotation (17 messages)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize missed count const (5 messages)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize send threads (0 threads)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize RRP token expired timeout (238 ms)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize RRP token problem counter (2000 ms)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize RRP threshold (10 problem count)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize RRP multicast threshold (100 problem count)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize RRP automatic recovery check timeout (1000 ms)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize RRP mode set to active.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize heartbeat_failures_allowed (0)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize max_network_delay (50 ms)
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemsrp_initialize HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemnet_instance_initialize Initializing transport (UDP/IP Multicast).
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] init_nss Initializing transmit/receive security (NSS) crypto: none hash: none
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemnet_instance_initialize Initializing transport (UDP/IP Multicast).
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] init_nss Initializing transmit/receive security (NSS) crypto: none hash: none
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Receive multicast socket recv buffer size (320000 bytes).
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Transmit multicast socket send buffer size (320000 bytes).
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Local receive multicast loop socket recv buffer size (320000 bytes).
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Local transmit multicast loop socket send buffer size (320000 bytes).
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] timer_function_netif_check_timeout The network interface [192.168.101.131] is now up.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] main_iface_change_fn Created or loaded sequence id 178.192.168.101.131 for this ring.
Jun  5 15:30:53 vm1 corosync[4555]:   [SERV  ] corosync_service_link_and_init Service engine loaded: corosync configuration map access [0]
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_service_init Initializing IPC on cmap [0]
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_get_ipc_type No configured qb.ipc_type. Using native ipc
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_ipcs_us_publish server name: cmap
Jun  5 15:30:53 vm1 corosync[4555]:   [SERV  ] corosync_service_link_and_init Service engine loaded: corosync configuration service [1]
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_service_init Initializing IPC on cfg [1]
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_get_ipc_type No configured qb.ipc_type. Using native ipc
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_ipcs_us_publish server name: cfg
Jun  5 15:30:53 vm1 corosync[4555]:   [SERV  ] corosync_service_link_and_init Service engine loaded: corosync cluster closed process group service v1.01 [2]
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_service_init Initializing IPC on cpg [2]
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_get_ipc_type No configured qb.ipc_type. Using native ipc
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_ipcs_us_publish server name: cpg
Jun  5 15:30:53 vm1 corosync[4555]:   [SERV  ] corosync_service_link_and_init Service engine loaded: corosync profile loading service [4]
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_service_init NOT Initializing IPC on pload [4]
Jun  5 15:30:53 vm1 corosync[4555]:   [WD    ] setup_watchdog No Watchdog, try modprobe <a watchdog>
Jun  5 15:30:53 vm1 corosync[4555]:   [WD    ] wd_scan_resources no resources configured.
Jun  5 15:30:53 vm1 corosync[4555]:   [SERV  ] corosync_service_link_and_init Service engine loaded: corosync watchdog service [7]
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_service_init NOT Initializing IPC on wd [7]
Jun  5 15:30:53 vm1 corosync[4555]:   [QUORUM] quorum_exec_init_fn Using quorum provider corosync_votequorum
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] votequorum_readconfig Reading configuration (runtime: 0)
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] votequorum_read_nodelist_configuration No nodelist defined or our node is not in the nodelist
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] recalculate_quorum total_votes=1, expected_votes=2
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] calculate_quorum node 2204477632 state=1, votes=1, expected=2
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] are_we_quorate Waiting for all cluster members. Current votes: 1 expected_votes: 2
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] decode_flags flags: quorate: No Leaving: No WFA Status: Yes First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:30:53 vm1 corosync[4555]:   [SERV  ] corosync_service_link_and_init Service engine loaded: corosync vote quorum service v1.0 [5]
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_service_init Initializing IPC on votequorum [5]
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_get_ipc_type No configured qb.ipc_type. Using native ipc
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_ipcs_us_publish server name: votequorum
Jun  5 15:30:53 vm1 corosync[4555]:   [SERV  ] corosync_service_link_and_init Service engine loaded: corosync cluster quorum service v0.1 [3]
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_service_init Initializing IPC on quorum [3]
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_get_ipc_type No configured qb.ipc_type. Using native ipc
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_ipcs_us_publish server name: quorum
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Receive multicast socket recv buffer size (320000 bytes).
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Transmit multicast socket send buffer size (320000 bytes).
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Local receive multicast loop socket recv buffer size (320000 bytes).
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Local transmit multicast loop socket send buffer size (320000 bytes).
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] timer_function_netif_check_timeout The network interface [192.168.102.131] is now up.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_gather_enter entering GATHER state from 15.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_commit_token_create Creating commit token because I am the rep.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] old_ring_state_save Saving state aru 0 high seq received 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_ring_id_set_and_store Storing new sequence id for ring 17c
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_commit_enter entering COMMIT state.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter entering RECOVERY state.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter position [0] member 192.168.101.131:
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter previous ring seq 178 rep 192.168.101.131
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter aru 0 high delivered 0 received flag 1
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter Did not need to originate any messages in recovery.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] timer_function_active_token_expired Incrementing problem counter for seqid 1 iface 192.168.102.131 to [1 of 10]
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token retrans flag count 4 token aru 0 install seq 0 aru 0 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] old_ring_state_reset Resetting old ring state
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] deliver_messages_from_recovery_to_regular recovery to regular 1-0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totempg_waiting_trans_ack_cb waiting_trans_ack changed to 1
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] member_object_joined Member joined: r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) 
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] decode_flags flags: quorate: No Leaving: No WFA Status: Yes First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:30:53 vm1 corosync[4555]:   [QUORUM] log_view_list Members[1]: -2090489664
Jun  5 15:30:53 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to (nil), length = 52
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_operational_enter entering OPERATIONAL state.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_operational_enter A processor joined or left the membership and a new membership (192.168.101.131:380) was formed.
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2204477632
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[2204477632]: votes: 1, expected: 2 flags: 12
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] decode_flags flags: quorate: No Leaving: No WFA Status: Yes First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] recalculate_quorum total_votes=1, expected_votes=2
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] calculate_quorum node 2204477632 state=1, votes=1, expected=2
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] are_we_quorate Waiting for all cluster members. Current votes: 1 expected_votes: 2
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2204477632
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[2204477632]: votes: 1, expected: 2 flags: 12
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] decode_flags flags: quorate: No Leaving: No WFA Status: Yes First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] recalculate_quorum total_votes=1, expected_votes=2
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] calculate_quorum node 2204477632 state=1, votes=1, expected=2
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] are_we_quorate Waiting for all cluster members. Current votes: 1 expected_votes: 2
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2204477632
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Jun  5 15:30:53 vm1 corosync[4555]:   [SYNC  ] sync_barrier_handler Committing synchronization for corosync configuration map access
Jun  5 15:30:53 vm1 corosync[4555]:   [CMAP  ] cmap_sync_activate Single node sync -> no action
Jun  5 15:30:53 vm1 corosync[4555]:   [CPG   ] downlist_log comparing: sender r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ; members(old:0 left:0)
Jun  5 15:30:53 vm1 corosync[4555]:   [CPG   ] downlist_log chosen downlist: sender r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ; members(old:0 left:0)
Jun  5 15:30:53 vm1 corosync[4555]:   [SYNC  ] sync_barrier_handler Committing synchronization for corosync cluster closed process group service v1.01
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] corosync_sync_completed Completed service synchronization, ready to provide service.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totempg_waiting_trans_ack_cb waiting_trans_ack changed to 0
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4559-24)
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4559]
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4559-24)
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4559-24) state:2
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cfg-response-4555-4559-24-header
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cfg-event-4555-4559-24-header
Jun  5 15:30:53 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cfg-request-4555-4559-24-header
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_gather_enter entering GATHER state from 9.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_commit_token_create Creating commit token because I am the rep.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] old_ring_state_save Saving state aru 5 high seq received 5
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_ring_id_set_and_store Storing new sequence id for ring 180
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_commit_enter entering COMMIT state.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter entering RECOVERY state.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter TRANS [0] member 192.168.101.131:
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter position [0] member 192.168.101.131:
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter previous ring seq 17c rep 192.168.101.131
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter aru 5 high delivered 5 received flag 1
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter position [1] member 192.168.101.132:
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter previous ring seq 178 rep 192.168.101.132
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter aru 5 high delivered 5 received flag 1
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter Did not need to originate any messages in recovery.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token retrans flag count 4 token aru 0 install seq 0 aru 0 0
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] old_ring_state_reset Resetting old ring state
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] deliver_messages_from_recovery_to_regular recovery to regular 1-0
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] decode_flags flags: quorate: No Leaving: No WFA Status: Yes First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] totempg_waiting_trans_ack_cb waiting_trans_ack changed to 1
Jun  5 15:30:53 vm1 corosync[4555]:   [MAIN  ] member_object_joined Member joined: r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) 
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] decode_flags flags: quorate: No Leaving: No WFA Status: Yes First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:30:53 vm1 corosync[4555]:   [QUORUM] log_view_list Members[2]: -2090489664 -2073712448
Jun  5 15:30:53 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to (nil), length = 56
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_operational_enter entering OPERATIONAL state.
Jun  5 15:30:53 vm1 corosync[4555]:   [TOTEM ] memb_state_operational_enter A processor joined or left the membership and a new membership (192.168.101.131:384) was formed.
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2221254848
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[2221254848]: votes: 1, expected: 2 flags: 12
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] decode_flags flags: quorate: No Leaving: No WFA Status: Yes First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] recalculate_quorum total_votes=2, expected_votes=2
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] calculate_quorum node 2204477632 state=1, votes=1, expected=2
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] calculate_quorum node 2221254848 state=1, votes=1, expected=2
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] get_lowest_node_id lowest node id: -2090489664 us: -2090489664
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] are_we_quorate quorum regained, resuming activity
Jun  5 15:30:53 vm1 corosync[4555]:   [QUORUM] quorum_api_set_quorum This node is within the primary component and will provide service.
Jun  5 15:30:53 vm1 corosync[4555]:   [QUORUM] log_view_list Members[2]: -2090489664 -2073712448
Jun  5 15:30:53 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to (nil), length = 56
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2221254848
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2221254848
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[2221254848]: votes: 1, expected: 2 flags: 4
Jun  5 15:30:53 vm1 corosync[4555]:   [VOTEQ ] decode_flags flags: quorate: No Leaving: No WFA Status: Yes First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:30:54 vm1 pacemakerd[4574]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: crm_ipc_connect: Could not establish pacemakerd connection: Connection refused (111)
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: get_cluster_type: Detected an active 'corosync' cluster
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: read_config: Reading configure for stack: corosync
Jun  5 15:30:54 vm1 pacemakerd[4574]:   notice: read_config: Configured corosync to accept connections from group 492: OK (1)
Jun  5 15:30:54 vm1 pacemakerd[4574]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:30:54 vm1 pacemakerd[4574]:   notice: main: Starting Pacemaker 1.1.9 (Build: 7209c02):  generated-manpages agent-manpages ascii-docs ncurses libqb-logging libqb-ipc lha-fencing nagios  corosync-native snmp
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: main: Maximum core file size is: 18446744073709551615
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: qb_ipcs_us_publish: server name: pacemakerd
Jun  5 15:30:54 vm1 pacemakerd[4574]:   notice: corosync_node_name: Unable to get node name for nodeid 0
Jun  5 15:30:54 vm1 pacemakerd[4574]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:30:54 vm1 pacemakerd[4574]:   notice: update_node_processes: 0x2266e50 Node 2204477632 now known as vm1, was: 
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process cib
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: start_child: Forked child 4576 for process cib
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: start_child: Forked child 4577 for process stonith-ng
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: start_child: Forked child 4578 for process lrmd
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process attrd
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: start_child: Forked child 4579 for process attrd
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process pengine
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: start_child: Forked child 4580 for process pengine
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: start_child: Forked child 4581 for process crmd
Jun  5 15:30:54 vm1 pacemakerd[4574]:     info: main: Starting mainloop
Jun  5 15:30:54 vm1 pacemakerd[4574]:   notice: update_node_processes: 0x22695b0 Node 2221254848 now known as vm2, was: 
Jun  5 15:30:54 vm1 lrmd[4578]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:30:54 vm1 attrd[4579]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:30:54 vm1 attrd[4579]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:30:54 vm1 cib[4576]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:30:54 vm1 cib[4576]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:30:54 vm1 cib[4576]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:30:54 vm1 lrmd[4578]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:30:54 vm1 lrmd[4578]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Jun  5 15:30:54 vm1 lrmd[4578]:     info: qb_ipcs_us_publish: server name: lrmd
Jun  5 15:30:54 vm1 lrmd[4578]:     info: main: Starting
Jun  5 15:30:54 vm1 pengine[4580]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:30:54 vm1 pengine[4580]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:30:54 vm1 pengine[4580]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:30:54 vm1 pengine[4580]:     info: qb_ipcs_us_publish: server name: pengine
Jun  5 15:30:54 vm1 pengine[4580]:     info: main: Starting pengine
Jun  5 15:30:54 vm1 attrd[4579]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:30:54 vm1 attrd[4579]:     info: main: Starting up
Jun  5 15:30:54 vm1 attrd[4579]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:30:54 vm1 attrd[4579]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:30:54 vm1 attrd[4579]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:30:54 vm1 stonith-ng[4577]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:30:54 vm1 stonith-ng[4577]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:30:54 vm1 stonith-ng[4577]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/root
Jun  5 15:30:54 vm1 stonith-ng[4577]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:30:54 vm1 stonith-ng[4577]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:30:54 vm1 stonith-ng[4577]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:30:54 vm1 cib[4576]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:30:54 vm1 cib[4576]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:30:54 vm1 cib[4576]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
Jun  5 15:30:54 vm1 cib[4576]:     info: validate_with_relaxng: Creating RNG parser context
Jun  5 15:30:54 vm1 crmd[4581]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:30:54 vm1 crmd[4581]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:30:54 vm1 crmd[4581]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:30:54 vm1 crmd[4581]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:30:54 vm1 crmd[4581]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:30:54 vm1 crmd[4581]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:30:54 vm1 crmd[4581]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:30:54 vm1 crmd[4581]:     info: crm_ipc_connect: Could not establish cib_shm connection: Connection refused (111)
Jun  5 15:30:54 vm1 attrd[4579]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:30:54 vm1 attrd[4579]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:30:54 vm1 stonith-ng[4577]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:30:54 vm1 stonith-ng[4577]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:30:54 vm1 attrd[4579]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:30:54 vm1 attrd[4579]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:30:54 vm1 attrd[4579]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:30:54 vm1 attrd[4579]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:30:54 vm1 attrd[4579]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:30:54 vm1 attrd[4579]:     info: main: Cluster connection active
Jun  5 15:30:54 vm1 attrd[4579]:     info: qb_ipcs_us_publish: server name: attrd
Jun  5 15:30:54 vm1 attrd[4579]:     info: main: Accepting attribute updates
Jun  5 15:30:54 vm1 attrd[4579]:   notice: main: Starting mainloop...
Jun  5 15:30:54 vm1 cib[4576]:     info: startCib: CIB Initialization completed successfully
Jun  5 15:30:54 vm1 cib[4576]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:30:54 vm1 stonith-ng[4577]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:30:54 vm1 stonith-ng[4577]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:30:54 vm1 stonith-ng[4577]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:30:54 vm1 stonith-ng[4577]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:30:54 vm1 stonith-ng[4577]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:30:54 vm1 stonith-ng[4577]:     info: crm_ipc_connect: Could not establish cib_rw connection: Connection refused (111)
Jun  5 15:30:54 vm1 cib[4576]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:30:54 vm1 cib[4576]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:30:54 vm1 cib[4576]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:30:54 vm1 cib[4576]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:30:54 vm1 cib[4576]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:30:54 vm1 cib[4576]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:30:54 vm1 cib[4576]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:30:54 vm1 cib[4576]:     info: qb_ipcs_us_publish: server name: cib_ro
Jun  5 15:30:54 vm1 cib[4576]:     info: qb_ipcs_us_publish: server name: cib_rw
Jun  5 15:30:54 vm1 cib[4576]:     info: qb_ipcs_us_publish: server name: cib_shm
Jun  5 15:30:54 vm1 cib[4576]:     info: cib_init: Starting cib mainloop
Jun  5 15:30:54 vm1 attrd[4579]:     info: pcmk_cpg_membership: Joined[0.0] attrd.2204477632 
Jun  5 15:30:54 vm1 attrd[4579]:     info: pcmk_cpg_membership: Member[0.0] attrd.2204477632 
Jun  5 15:30:54 vm1 attrd[4579]:     info: crm_get_peer: Node <null> now has id: 2221254848
Jun  5 15:30:54 vm1 attrd[4579]:     info: pcmk_cpg_membership: Member[0.1] attrd.2221254848 
Jun  5 15:30:54 vm1 attrd[4579]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[2221254848] - corosync-cpg is now online
Jun  5 15:30:54 vm1 cib[4576]:     info: pcmk_cpg_membership: Joined[0.0] cib.2204477632 
Jun  5 15:30:54 vm1 cib[4576]:     info: pcmk_cpg_membership: Member[0.0] cib.2204477632 
Jun  5 15:30:55 vm1 cib[4576]:     info: pcmk_cpg_membership: Joined[1.0] cib.2221254848 
Jun  5 15:30:55 vm1 cib[4576]:     info: pcmk_cpg_membership: Member[1.0] cib.2204477632 
Jun  5 15:30:55 vm1 cib[4576]:     info: crm_get_peer: Node <null> now has id: 2221254848
Jun  5 15:30:55 vm1 cib[4576]:     info: pcmk_cpg_membership: Member[1.1] cib.2221254848 
Jun  5 15:30:55 vm1 cib[4576]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[2221254848] - corosync-cpg is now online
Jun  5 15:30:55 vm1 cib[4582]:     info: write_cib_contents: Archived previous version as /var/lib/pacemaker/cib/cib-12.raw
Jun  5 15:30:55 vm1 cib[4576]:     info: crm_client_new: Connecting 0x1141e00 for uid=0 gid=0 pid=4583 id=a3da89bc-caf4-4487-8a99-4ce49dcf416f
Jun  5 15:30:55 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_mon/2, version=0.20.0)
Jun  5 15:30:55 vm1 cib[4582]:     info: write_cib_contents: Wrote version 0.20.0 of the CIB to disk (digest: a9c454e95c6abea858af00f63b830b15)
Jun  5 15:30:55 vm1 cib[4582]:     info: retrieveCib: Reading cluster configuration from: /var/lib/pacemaker/cib/cib.4bBMON (digest: /var/lib/pacemaker/cib/cib.2V9lJi)
Jun  5 15:30:55 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11c2d40 for uid=496 gid=492 pid=4581 id=5bb878d4-65e3-4a3d-a334-791219a11ff1
Jun  5 15:30:55 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11c3280 for uid=0 gid=0 pid=4577 id=9cd6ef5b-9a4b-498a-a949-c057f06bbeb9
Jun  5 15:30:55 vm1 crmd[4581]:     info: do_cib_control: CIB connection established
Jun  5 15:30:55 vm1 crmd[4581]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:30:55 vm1 stonith-ng[4577]:   notice: setup_cib: Watching for stonith topology changes
Jun  5 15:30:55 vm1 stonith-ng[4577]:     info: qb_ipcs_us_publish: server name: stonith-ng
Jun  5 15:30:55 vm1 stonith-ng[4577]:     info: main: Starting stonith-ng mainloop
Jun  5 15:30:55 vm1 stonith-ng[4577]:     info: pcmk_cpg_membership: Joined[0.0] stonith-ng.2204477632 
Jun  5 15:30:55 vm1 stonith-ng[4577]:     info: pcmk_cpg_membership: Member[0.0] stonith-ng.2204477632 
Jun  5 15:30:55 vm1 stonith-ng[4577]:     info: pcmk_cpg_membership: Joined[1.0] stonith-ng.2221254848 
Jun  5 15:30:55 vm1 stonith-ng[4577]:     info: pcmk_cpg_membership: Member[1.0] stonith-ng.2204477632 
Jun  5 15:30:55 vm1 stonith-ng[4577]:     info: crm_get_peer: Node <null> now has id: 2221254848
Jun  5 15:30:55 vm1 stonith-ng[4577]:     info: pcmk_cpg_membership: Member[1.1] stonith-ng.2221254848 
Jun  5 15:30:55 vm1 stonith-ng[4577]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[2221254848] - corosync-cpg is now online
Jun  5 15:30:55 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.0)
Jun  5 15:30:55 vm1 stonith-ng[4577]:     info: init_cib_cache_cb: Updating device list from the cib: init
Jun  5 15:30:55 vm1 stonith-ng[4577]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:30:55 vm1 stonith-ng[4577]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:30:55 vm1 stonith-ng[4577]:     info: cib_device_update: Device st1:0 is allowed on vm1: score=0
Jun  5 15:30:55 vm1 stonith-ng[4577]:     info: stonith_action_create: Initiating action metadata for agent fence_rhevm (target=(null))
Jun  5 15:30:55 vm1 crmd[4581]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:30:55 vm1 crmd[4581]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:30:55 vm1 crmd[4581]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:30:55 vm1 crmd[4581]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:30:55 vm1 crmd[4581]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:30:55 vm1 crmd[4581]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:30:55 vm1 crmd[4581]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:30:55 vm1 crmd[4581]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:30:55 vm1 crmd[4581]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:30:55 vm1 crmd[4581]:     info: do_ha_control: Connected to the cluster
Jun  5 15:30:55 vm1 crmd[4581]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:30:55 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.0)
Jun  5 15:30:55 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x21138c0 for uid=496 gid=492 pid=4581 id=60e37d11-d61c-4555-9514-decd1c943ad2
Jun  5 15:30:55 vm1 crmd[4581]:     info: do_lrm_control: LRM connection established
Jun  5 15:30:55 vm1 crmd[4581]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:30:55 vm1 crmd[4581]:     info: pcmk_quorum_notification: Membership 384: quorum retained (2)
Jun  5 15:30:55 vm1 crmd[4581]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:30:55 vm1 crmd[4581]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:30:55 vm1 crmd[4581]:     info: crm_get_peer: Node <null> now has id: 2221254848
Jun  5 15:30:55 vm1 crmd[4581]:     info: pcmk_quorum_notification: Obtaining name for new node 2221254848
Jun  5 15:30:55 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.0)
Jun  5 15:30:55 vm1 crmd[4581]:   notice: corosync_node_name: Unable to get node name for nodeid 2221254848
Jun  5 15:30:55 vm1 crmd[4581]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node (null)[2221254848] - state is now member (was (null))
Jun  5 15:30:55 vm1 crmd[4581]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:30:55 vm1 crmd[4581]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:30:55 vm1 crmd[4581]:   notice: do_started: The local CRM is operational
Jun  5 15:30:55 vm1 crmd[4581]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:30:55 vm1 crmd[4581]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:30:55 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.0)
Jun  5 15:30:56 vm1 stonith-ng[4577]:   notice: stonith_device_register: Added 'st1:0' to the device list (1 active devices)
Jun  5 15:30:56 vm1 stonith-ng[4577]:     info: crm_get_peer: Node 2221254848 is now known as vm2
Jun  5 15:30:56 vm1 stonith-ng[4577]:     info: crm_get_peer: Node 2221254848 has uuid 2221254848
Jun  5 15:30:56 vm1 crmd[4581]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:30:56 vm1 crmd[4581]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:30:56 vm1 crmd[4581]:     info: pcmk_cpg_membership: Member[0.1] crmd.2221254848 
Jun  5 15:30:56 vm1 crmd[4581]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node (null)[2221254848] - corosync-cpg is now online
Jun  5 15:30:56 vm1 crmd[4581]:     info: crm_get_peer: Node 2221254848 is now known as vm2
Jun  5 15:30:56 vm1 crmd[4581]:     info: peer_update_callback: vm2 is now member
Jun  5 15:30:56 vm1 crmd[4581]:     info: crm_get_peer: Node 2221254848 has uuid 2221254848
Jun  5 15:30:57 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x186ed50 for uid=496 gid=492 pid=4581 id=ab209bc2-4e9a-486a-80f6-40bc82167f45
Jun  5 15:30:57 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4581: OK (0)
Jun  5 15:30:57 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4581: OK (0)
Jun  5 15:30:57 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4581: OK (0)
Jun  5 15:30:59 vm1 cib[4576]:     info: crm_client_new: Connecting 0xf87950 for uid=496 gid=492 pid=4579 id=fdc12cb7-53ca-4369-87af-6297519a90fd
Jun  5 15:30:59 vm1 attrd[4579]:     info: cib_connect: Connected to the CIB after 1 signon attempts
Jun  5 15:30:59 vm1 attrd[4579]:     info: cib_connect: Sending full refresh now that we're connected to the cib
Jun  5 15:31:16 vm1 crmd[4581]:     info: do_election_count_vote: Election 2 (owner: 2221254848) pass: vote from vm2 (Host name)
Jun  5 15:31:16 vm1 crmd[4581]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Jun  5 15:31:16 vm1 crmd[4581]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:31:16 vm1 crmd[4581]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:31:16 vm1 crmd[4581]:     info: do_te_control: Registering TE UUID: 307badcf-a5e3-4581-9cdd-2dd8b4b237df
Jun  5 15:31:16 vm1 crmd[4581]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:31:16 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4581 id=e4d8ecf6-3207-4d8a-8b9e-a46fa54c5dec
Jun  5 15:31:16 vm1 crmd[4581]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.0)
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.0)
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.0)
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.0)
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.0)
Jun  5 15:31:16 vm1 crmd[4581]:     info: join_make_offer: Making join offers based on membership 384
Jun  5 15:31:16 vm1 crmd[4581]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:31:16 vm1 crmd[4581]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:31:16 vm1 crmd[4581]:     info: join_make_offer: join-1: Sending offer to vm2
Jun  5 15:31:16 vm1 crmd[4581]:     info: crm_update_peer_join: join_make_offer: Node vm2[2221254848] - join-1 phase 0 -> 1
Jun  5 15:31:16 vm1 crmd[4581]:     info: do_dc_join_offer_all: join-1: Waiting on 2 outstanding join acks
Jun  5 15:31:16 vm1 crmd[4581]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.0)
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.0)
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.0)
Jun  5 15:31:16 vm1 crmd[4581]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm2[2221254848] - join-1 phase 1 -> 2
Jun  5 15:31:16 vm1 crmd[4581]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm2[2221254848] - expected state is now member
Jun  5 15:31:16 vm1 crmd[4581]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:31:16 vm1 crmd[4581]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:31:16 vm1 crmd[4581]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:31:16 vm1 crmd[4581]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:31:16 vm1 crmd[4581]:     info: crmd_join_phase_log: join-1: vm2=integrated
Jun  5 15:31:16 vm1 crmd[4581]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.0)
Jun  5 15:31:16 vm1 crmd[4581]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.0)
Jun  5 15:31:16 vm1 crmd[4581]:     info: crm_update_peer_join: finalize_join_for: Node vm2[2221254848] - join-1 phase 2 -> 3
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/16, version=0.20.0)
Jun  5 15:31:16 vm1 crmd[4581]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:31:16 vm1 crmd[4581]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/17, version=0.20.0)
Jun  5 15:31:16 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4581 id=2ab5c4dd-dd29-4acf-a229-864fff7e3842
Jun  5 15:31:16 vm1 attrd[4579]:     info: find_hash_entry: Creating hash entry for terminate
Jun  5 15:31:16 vm1 attrd[4579]:     info: find_hash_entry: Creating hash entry for shutdown
Jun  5 15:31:16 vm1 crmd[4581]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:31:16 vm1 crmd[4581]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:31:16 vm1 crmd[4581]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/18, version=0.20.0)
Jun  5 15:31:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/19, version=0.20.1)
Jun  5 15:31:17 vm1 crmd[4581]:     info: crm_update_peer_join: do_dc_join_ack: Node vm2[2221254848] - join-1 phase 3 -> 4
Jun  5 15:31:17 vm1 crmd[4581]:     info: do_dc_join_ack: join-1: Updating node state to member for vm2
Jun  5 15:31:17 vm1 crmd[4581]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm2']/lrm
Jun  5 15:31:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm2']/lrm: OK (rc=0, origin=local/crmd/20, version=0.20.1)
Jun  5 15:31:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/21, version=0.20.2)
Jun  5 15:31:17 vm1 cib[4576]:     info: crm_get_peer: Node 2221254848 is now known as vm2
Jun  5 15:31:17 vm1 cib[4576]:     info: crm_get_peer: Node 2221254848 has uuid 2221254848
Jun  5 15:31:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm2']/transient_attributes: OK (rc=0, origin=vm2/crmd/8, version=0.20.2)
Jun  5 15:31:17 vm1 crmd[4581]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:31:17 vm1 crmd[4581]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:31:17 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:31:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/22, version=0.20.2)
Jun  5 15:31:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/23, version=0.20.2)
Jun  5 15:31:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/24, version=0.20.3)
Jun  5 15:31:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/25, version=0.20.3)
Jun  5 15:31:17 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:31:17 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:31:17 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:31:17 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:31:17 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm2 is active
Jun  5 15:31:17 vm1 pengine[4580]:     info: determine_online_status: Node vm2 is online
Jun  5 15:31:17 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:31:17 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 vm2 ]
Jun  5 15:31:17 vm1 pengine[4580]:     info: native_print: prmDummy	(ocf::pacemaker:Dummy):	Stopped
Jun  5 15:31:17 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:31:17 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 vm2 ]
Jun  5 15:31:17 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:31:17 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:31:17 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:1 on vm2
Jun  5 15:31:17 vm1 pengine[4580]:   notice: LogActions: Start   st1:0	(vm1)
Jun  5 15:31:17 vm1 pengine[4580]:   notice: LogActions: Start   st1:1	(vm2)
Jun  5 15:31:17 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy	(Stopped)
Jun  5 15:31:17 vm1 pengine[4580]:   notice: LogActions: Start   prmPing:0	(vm1)
Jun  5 15:31:17 vm1 pengine[4580]:   notice: LogActions: Start   prmPing:1	(vm2)
Jun  5 15:31:17 vm1 crmd[4581]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jun  5 15:31:17 vm1 crmd[4581]:     info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1370413877-9) derived from /var/lib/pacemaker/pengine/pe-input-0.bz2
Jun  5 15:31:17 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 4: monitor st1:0_monitor_0 on vm1 (local)
Jun  5 15:31:17 vm1 lrmd[4578]:     info: process_lrmd_get_rsc_info: Resource 'st1' not found (0 active resources)
Jun  5 15:31:17 vm1 lrmd[4578]:     info: process_lrmd_get_rsc_info: Resource 'st1:0' not found (0 active resources)
Jun  5 15:31:17 vm1 lrmd[4578]:     info: process_lrmd_rsc_register: Added 'st1' to the rsc list (1 active resources)
Jun  5 15:31:17 vm1 crmd[4581]:     info: do_lrm_rsc_op: Performing key=4:0:7:307badcf-a5e3-4581-9cdd-2dd8b4b237df op=st1_monitor_0
Jun  5 15:31:17 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x18677e0 for uid=0 gid=0 pid=4578 id=9a0a0843-b168-4c09-9ca3-55571f619e0a
Jun  5 15:31:17 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from lrmd.4578: OK (0)
Jun  5 15:31:17 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from lrmd.4578: OK (0)
Jun  5 15:31:17 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 8: monitor st1:1_monitor_0 on vm2
Jun  5 15:31:17 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 9: monitor prmDummy_monitor_0 on vm2
Jun  5 15:31:17 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 5: monitor prmDummy_monitor_0 on vm1 (local)
Jun  5 15:31:17 vm1 lrmd[4578]:     info: process_lrmd_get_rsc_info: Resource 'prmDummy' not found (1 active resources)
Jun  5 15:31:17 vm1 lrmd[4578]:     info: process_lrmd_rsc_register: Added 'prmDummy' to the rsc list (2 active resources)
Jun  5 15:31:17 vm1 crmd[4581]:     info: do_lrm_rsc_op: Performing key=5:0:7:307badcf-a5e3-4581-9cdd-2dd8b4b237df op=prmDummy_monitor_0
Jun  5 15:31:17 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 6: monitor prmPing:0_monitor_0 on vm1 (local)
Jun  5 15:31:17 vm1 lrmd[4578]:     info: process_lrmd_get_rsc_info: Resource 'prmPing' not found (2 active resources)
Jun  5 15:31:17 vm1 lrmd[4578]:     info: process_lrmd_get_rsc_info: Resource 'prmPing:0' not found (2 active resources)
Jun  5 15:31:17 vm1 lrmd[4578]:     info: process_lrmd_rsc_register: Added 'prmPing' to the rsc list (3 active resources)
Jun  5 15:31:17 vm1 crmd[4581]:     info: do_lrm_rsc_op: Performing key=6:0:7:307badcf-a5e3-4581-9cdd-2dd8b4b237df op=prmPing_monitor_0
Jun  5 15:31:17 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 10: monitor prmPing:1_monitor_0 on vm2
Jun  5 15:31:17 vm1 pengine[4580]:   notice: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-0.bz2
Jun  5 15:31:17 vm1 crmd[4581]:     info: stonith_action_create: Initiating action metadata for agent fence_rhevm (target=(null))
Jun  5 15:31:17 vm1 Dummy(prmDummy)[4597]: DEBUG: prmDummy monitor : 7
Jun  5 15:31:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/26, version=0.20.4)
Jun  5 15:31:18 vm1 crmd[4581]:     info: process_lrm_event: LRM operation st1_monitor_0 (call=6, rc=7, cib-update=26, confirmed=true) not running
Jun  5 15:31:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=vm2/crmd/9, version=0.20.5)
Jun  5 15:31:18 vm1 crmd[4581]:     info: services_os_action_execute: Managed Dummy_meta-data_0 process 4617 exited with rc=0
Jun  5 15:31:18 vm1 crmd[4581]:   notice: process_lrm_event: LRM operation prmDummy_monitor_0 (call=10, rc=7, cib-update=27, confirmed=true) not running
Jun  5 15:31:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/27, version=0.20.6)
Jun  5 15:31:18 vm1 crmd[4581]:     info: services_os_action_execute: Managed ping_meta-data_0 process 4621 exited with rc=0
Jun  5 15:31:18 vm1 crmd[4581]:   notice: process_lrm_event: LRM operation prmPing_monitor_0 (call=15, rc=7, cib-update=28, confirmed=true) not running
Jun  5 15:31:18 vm1 crmd[4581]:     info: match_graph_event: Action st1_monitor_0 (4) confirmed on vm1 (rc=0)
Jun  5 15:31:18 vm1 crmd[4581]:     info: match_graph_event: Action st1_monitor_0 (8) confirmed on vm2 (rc=0)
Jun  5 15:31:18 vm1 crmd[4581]:     info: match_graph_event: Action prmDummy_monitor_0 (5) confirmed on vm1 (rc=0)
Jun  5 15:31:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/28, version=0.20.7)
Jun  5 15:31:18 vm1 crmd[4581]:     info: match_graph_event: Action prmPing_monitor_0 (6) confirmed on vm1 (rc=0)
Jun  5 15:31:18 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 3: probe_complete probe_complete on vm1 (local) - no waiting
Jun  5 15:31:18 vm1 attrd[4579]:     info: find_hash_entry: Creating hash entry for probe_complete
Jun  5 15:31:18 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:31:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/2, version=0.20.7)
Jun  5 15:31:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/3, version=0.20.7)
Jun  5 15:31:18 vm1 attrd[4579]:   notice: attrd_perform_update: Sent update 4: probe_complete=true
Jun  5 15:31:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/4, version=0.20.8)
Jun  5 15:31:18 vm1 crmd[4581]:     info: te_rsc_command: Action 3 confirmed - no wait
Jun  5 15:31:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=vm2/crmd/10, version=0.20.9)
Jun  5 15:31:18 vm1 crmd[4581]:     info: match_graph_event: Action prmDummy_monitor_0 (9) confirmed on vm2 (rc=0)
Jun  5 15:31:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=vm2/crmd/11, version=0.20.10)
Jun  5 15:31:18 vm1 crmd[4581]:     info: match_graph_event: Action prmPing_monitor_0 (10) confirmed on vm2 (rc=0)
Jun  5 15:31:18 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 7: probe_complete probe_complete on vm2 - no waiting
Jun  5 15:31:18 vm1 crmd[4581]:     info: te_rsc_command: Action 7 confirmed - no wait
Jun  5 15:31:18 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 11: start st1:0_start_0 on vm1 (local)
Jun  5 15:31:18 vm1 crmd[4581]:     info: do_lrm_rsc_op: Performing key=11:0:0:307badcf-a5e3-4581-9cdd-2dd8b4b237df op=st1_start_0
Jun  5 15:31:18 vm1 lrmd[4578]:     info: log_execute: executing - rsc:st1 action:start call_id:20
Jun  5 15:31:18 vm1 stonith-ng[4577]:     info: stonith_action_create: Initiating action metadata for agent fence_rhevm (target=(null))
Jun  5 15:31:18 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 12: start st1:1_start_0 on vm2
Jun  5 15:31:18 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 17: start prmPing:0_start_0 on vm1 (local)
Jun  5 15:31:18 vm1 attrd[4579]:     info: crm_get_peer: Node 2221254848 is now known as vm2
Jun  5 15:31:18 vm1 attrd[4579]:     info: crm_get_peer: Node 2221254848 has uuid 2221254848
Jun  5 15:31:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=vm2/attrd/5, version=0.20.11)
Jun  5 15:31:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/5, version=0.20.11)
Jun  5 15:31:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/6, version=0.20.11)
Jun  5 15:31:19 vm1 stonith-ng[4577]:   notice: stonith_device_register: Added 'st1' to the device list (2 active devices)
Jun  5 15:31:19 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_device_register from lrmd.4578: OK (0)
Jun  5 15:31:19 vm1 crmd[4581]:     info: do_lrm_rsc_op: Performing key=17:0:0:307badcf-a5e3-4581-9cdd-2dd8b4b237df op=prmPing_start_0
Jun  5 15:31:19 vm1 lrmd[4578]:     info: log_execute: executing - rsc:prmPing action:start call_id:22
Jun  5 15:31:19 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 19: start prmPing:1_start_0 on vm2
Jun  5 15:31:19 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_execute from lrmd.4578: Operation now in progress (-115)
Jun  5 15:31:19 vm1 stonith-ng[4577]:     info: stonith_action_create: Initiating action monitor for agent fence_rhevm (target=(null))
Jun  5 15:31:19 vm1 lrmd[4578]:     info: log_finished: finished - rsc:st1 action:start call_id:20  exit-code:0 exec-time:1262ms queue-time:0ms
Jun  5 15:31:19 vm1 crmd[4581]:   notice: process_lrm_event: LRM operation st1_start_0 (call=20, rc=0, cib-update=29, confirmed=true) ok
Jun  5 15:31:19 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/29, version=0.20.12)
Jun  5 15:31:19 vm1 crmd[4581]:     info: match_graph_event: Action st1_start_0 (11) confirmed on vm1 (rc=0)
Jun  5 15:31:19 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=vm2/crmd/12, version=0.20.13)
Jun  5 15:31:19 vm1 crmd[4581]:     info: match_graph_event: Action st1_start_0 (12) confirmed on vm2 (rc=0)
Jun  5 15:31:21 vm1 attrd_updater[4652]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:31:21 vm1 attrd_updater[4652]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:31:21 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x9267f0 for uid=0 gid=0 pid=4652 id=31cec868-70d9-49b5-a8d2-25a3e0f869bb
Jun  5 15:31:21 vm1 attrd[4579]:     info: find_hash_entry: Creating hash entry for default_ping_set(1)
Jun  5 15:31:21 vm1 lrmd[4578]:     info: log_finished: finished - rsc:prmPing action:start call_id:22 pid:4629 exit-code:0 exec-time:2054ms queue-time:0ms
Jun  5 15:31:21 vm1 crmd[4581]:   notice: process_lrm_event: LRM operation prmPing_start_0 (call=22, rc=0, cib-update=30, confirmed=true) ok
Jun  5 15:31:21 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/30, version=0.20.14)
Jun  5 15:31:21 vm1 crmd[4581]:     info: match_graph_event: Action prmPing_start_0 (17) confirmed on vm1 (rc=0)
Jun  5 15:31:21 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 18: monitor prmPing:0_monitor_10000 on vm1 (local)
Jun  5 15:31:21 vm1 crmd[4581]:     info: do_lrm_rsc_op: Performing key=18:0:0:307badcf-a5e3-4581-9cdd-2dd8b4b237df op=prmPing_monitor_10000
Jun  5 15:31:21 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:31:21 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=vm2/crmd/13, version=0.20.15)
Jun  5 15:31:21 vm1 crmd[4581]:     info: match_graph_event: Action prmPing_start_0 (19) confirmed on vm2 (rc=0)
Jun  5 15:31:21 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 20: monitor prmPing:1_monitor_10000 on vm2
Jun  5 15:31:23 vm1 attrd_updater[4674]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:31:23 vm1 attrd_updater[4674]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:31:23 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x9267f0 for uid=0 gid=0 pid=4674 id=ee8f9989-0d6b-4c47-b031-92932c8b3cd2
Jun  5 15:31:23 vm1 crmd[4581]:   notice: process_lrm_event: LRM operation prmPing_monitor_10000 (call=26, rc=0, cib-update=31, confirmed=false) ok
Jun  5 15:31:23 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/31, version=0.20.16)
Jun  5 15:31:23 vm1 crmd[4581]:     info: match_graph_event: Action prmPing_monitor_10000 (18) confirmed on vm1 (rc=0)
Jun  5 15:31:23 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:31:23 vm1 crmd[4581]:     info: match_graph_event: Action prmPing_monitor_10000 (20) confirmed on vm2 (rc=0)
Jun  5 15:31:23 vm1 crmd[4581]:   notice: run_graph: Transition 0 (Complete=19, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Complete
Jun  5 15:31:23 vm1 crmd[4581]:     info: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Jun  5 15:31:23 vm1 crmd[4581]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jun  5 15:31:23 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=vm2/crmd/14, version=0.20.17)
Jun  5 15:31:26 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:31:26 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/7, version=0.20.17)
Jun  5 15:31:26 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/8, version=0.20.17)
Jun  5 15:31:26 vm1 attrd[4579]:   notice: attrd_perform_update: Sent update 9: default_ping_set(1)=100
Jun  5 15:31:26 vm1 crmd[4581]:     info: abort_transition_graph: te_update_diff:172 - Triggered transition abort (complete=1, node=vm1, tag=nvpair, id=status-2204477632-default_ping_set(1), name=default_ping_set(1), value=100, magic=NA, cib=0.20.18) : Transient attribute: update
Jun  5 15:31:26 vm1 crmd[4581]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jun  5 15:31:26 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/9, version=0.20.18)
Jun  5 15:31:26 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/32, version=0.20.18)
Jun  5 15:31:26 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:31:26 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:31:26 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:31:26 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:31:26 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm2 is active
Jun  5 15:31:26 vm1 pengine[4580]:     info: determine_online_status: Node vm2 is online
Jun  5 15:31:26 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:31:26 vm1 pengine[4580]:     info: short_print:      Started: [ vm1 vm2 ]
Jun  5 15:31:26 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Stopped 
Jun  5 15:31:26 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:31:26 vm1 pengine[4580]:     info: short_print:      Started: [ vm1 vm2 ]
Jun  5 15:31:26 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmDummy on vm1
Jun  5 15:31:26 vm1 pengine[4580]:     info: LogActions: Leave   st1:0#011(Started vm1)
Jun  5 15:31:26 vm1 pengine[4580]:     info: LogActions: Leave   st1:1#011(Started vm2)
Jun  5 15:31:26 vm1 pengine[4580]:   notice: LogActions: Start   prmDummy#011(vm1)
Jun  5 15:31:26 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:0#011(Started vm1)
Jun  5 15:31:26 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1#011(Started vm2)
Jun  5 15:31:26 vm1 crmd[4581]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jun  5 15:31:26 vm1 crmd[4581]:     info: do_te_invoke: Processing graph 1 (ref=pe_calc-dc-1370413886-24) derived from /var/lib/pacemaker/pengine/pe-input-1.bz2
Jun  5 15:31:26 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 15: start prmDummy_start_0 on vm1 (local)
Jun  5 15:31:26 vm1 crmd[4581]:     info: do_lrm_rsc_op: Performing key=15:1:0:307badcf-a5e3-4581-9cdd-2dd8b4b237df op=prmDummy_start_0
Jun  5 15:31:26 vm1 lrmd[4578]:     info: log_execute: executing - rsc:prmDummy action:start call_id:29
Jun  5 15:31:26 vm1 pengine[4580]:   notice: process_pe_message: Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-1.bz2
Jun  5 15:31:26 vm1 crmd[4581]:     info: abort_transition_graph: te_update_diff:172 - Triggered transition abort (complete=0, node=vm2, tag=nvpair, id=status-2221254848-default_ping_set(1), name=default_ping_set(1), value=100, magic=NA, cib=0.20.19) : Transient attribute: update
Jun  5 15:31:26 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=vm2/attrd/8, version=0.20.19)
Jun  5 15:31:26 vm1 Dummy(prmDummy)[4675]: DEBUG: prmDummy start : 0
Jun  5 15:31:26 vm1 lrmd[4578]:     info: log_finished: finished - rsc:prmDummy action:start call_id:29 pid:4675 exit-code:0 exec-time:27ms queue-time:0ms
Jun  5 15:31:26 vm1 crmd[4581]:   notice: process_lrm_event: LRM operation prmDummy_start_0 (call=29, rc=0, cib-update=33, confirmed=true) ok
Jun  5 15:31:26 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/33, version=0.20.20)
Jun  5 15:31:26 vm1 crmd[4581]:     info: match_graph_event: Action prmDummy_start_0 (15) confirmed on vm1 (rc=0)
Jun  5 15:31:26 vm1 crmd[4581]:   notice: run_graph: Transition 1 (Complete=1, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Stopped
Jun  5 15:31:26 vm1 crmd[4581]:     info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
Jun  5 15:31:26 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/34, version=0.20.20)
Jun  5 15:31:26 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:31:26 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:31:26 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:31:26 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:31:26 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm2 is active
Jun  5 15:31:26 vm1 pengine[4580]:     info: determine_online_status: Node vm2 is online
Jun  5 15:31:26 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:31:26 vm1 pengine[4580]:     info: short_print:      Started: [ vm1 vm2 ]
Jun  5 15:31:26 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Started vm1 
Jun  5 15:31:26 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:31:26 vm1 pengine[4580]:     info: short_print:      Started: [ vm1 vm2 ]
Jun  5 15:31:26 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmDummy on vm1
Jun  5 15:31:26 vm1 pengine[4580]:     info: LogActions: Leave   st1:0#011(Started vm1)
Jun  5 15:31:26 vm1 pengine[4580]:     info: LogActions: Leave   st1:1#011(Started vm2)
Jun  5 15:31:26 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy#011(Started vm1)
Jun  5 15:31:26 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:0#011(Started vm1)
Jun  5 15:31:26 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1#011(Started vm2)
Jun  5 15:31:26 vm1 crmd[4581]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Jun  5 15:31:26 vm1 crmd[4581]:     info: do_te_invoke: Processing graph 2 (ref=pe_calc-dc-1370413886-26) derived from /var/lib/pacemaker/pengine/pe-input-2.bz2
Jun  5 15:31:26 vm1 crmd[4581]:   notice: te_rsc_command: Initiating action 17: monitor prmDummy_monitor_10000 on vm1 (local)
Jun  5 15:31:26 vm1 crmd[4581]:     info: do_lrm_rsc_op: Performing key=17:2:0:307badcf-a5e3-4581-9cdd-2dd8b4b237df op=prmDummy_monitor_10000
Jun  5 15:31:26 vm1 pengine[4580]:   notice: process_pe_message: Calculated Transition 2: /var/lib/pacemaker/pengine/pe-input-2.bz2
Jun  5 15:31:26 vm1 Dummy(prmDummy)[4684]: DEBUG: prmDummy monitor : 0
Jun  5 15:31:26 vm1 crmd[4581]:   notice: process_lrm_event: LRM operation prmDummy_monitor_10000 (call=32, rc=0, cib-update=35, confirmed=false) ok
Jun  5 15:31:26 vm1 crmd[4581]:     info: match_graph_event: Action prmDummy_monitor_10000 (17) confirmed on vm1 (rc=0)
Jun  5 15:31:26 vm1 crmd[4581]:   notice: run_graph: Transition 2 (Complete=1, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-2.bz2): Complete
Jun  5 15:31:26 vm1 crmd[4581]:     info: do_log: FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Jun  5 15:31:26 vm1 crmd[4581]:   notice: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Jun  5 15:31:26 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/35, version=0.20.21)
Jun  5 15:31:35 vm1 attrd_updater[4713]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:31:35 vm1 attrd_updater[4713]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:31:35 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x925970 for uid=0 gid=0 pid=4713 id=23661182-88b7-4041-b3df-0fe74d796353
Jun  5 15:31:35 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:31:36 vm1 Dummy(prmDummy)[4714]: DEBUG: prmDummy monitor : 0
Jun  5 15:31:42 vm1 corosync[4555]:   [TOTEM ] timer_function_orf_token_timeout The token was lost in the OPERATIONAL state.
Jun  5 15:31:42 vm1 corosync[4555]:   [TOTEM ] timer_function_orf_token_timeout A processor failed, forming new configuration.
Jun  5 15:31:42 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Receive multicast socket recv buffer size (320000 bytes).
Jun  5 15:31:42 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Transmit multicast socket send buffer size (320000 bytes).
Jun  5 15:31:42 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Local receive multicast loop socket recv buffer size (320000 bytes).
Jun  5 15:31:42 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Local transmit multicast loop socket send buffer size (320000 bytes).
Jun  5 15:31:42 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Receive multicast socket recv buffer size (320000 bytes).
Jun  5 15:31:42 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Transmit multicast socket send buffer size (320000 bytes).
Jun  5 15:31:42 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Local receive multicast loop socket recv buffer size (320000 bytes).
Jun  5 15:31:42 vm1 corosync[4555]:   [TOTEM ] totemudp_build_sockets_ip Local transmit multicast loop socket send buffer size (320000 bytes).
Jun  5 15:31:42 vm1 corosync[4555]:   [TOTEM ] memb_state_gather_enter entering GATHER state from 2.
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] memb_state_gather_enter entering GATHER state from 0.
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] memb_state_commit_token_create Creating commit token because I am the rep.
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] old_ring_state_save Saving state aru 89 high seq received 89
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] memb_ring_id_set_and_store Storing new sequence id for ring 184
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] memb_state_commit_enter entering COMMIT state.
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter entering RECOVERY state.
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter TRANS [0] member 192.168.101.131:
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter position [0] member 192.168.101.131:
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter previous ring seq 180 rep 192.168.101.131
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter aru 89 high delivered 89 received flag 1
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] memb_state_recovery_enter Did not need to originate any messages in recovery.
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token got commit token
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_memb_commit_token Sending initial ORF token
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token install seq 0 aru 0 high seq received 0
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] message_handler_orf_token retrans flag count 4 token aru 0 install seq 0 aru 0 0
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] old_ring_state_reset Resetting old ring state
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] deliver_messages_from_recovery_to_regular recovery to regular 1-0
Jun  5 15:31:43 vm1 corosync[4555]:   [MAIN  ] member_object_left Member left: r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) 
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] decode_flags flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] recalculate_quorum total_votes=1, expected_votes=2
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] calculate_quorum node 2204477632 state=1, votes=1, expected=2
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] calculate_quorum node 2221254848 state=2, votes=1, expected=2
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] get_lowest_node_id lowest node id: -2090489664 us: -2090489664
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] totempg_waiting_trans_ack_cb waiting_trans_ack changed to 1
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] decode_flags flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:31:43 vm1 corosync[4555]:   [QUORUM] log_view_list Members[1]: -2090489664
Jun  5 15:31:43 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to (nil), length = 52
Jun  5 15:31:43 vm1 crmd[4581]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:31:43 vm1 crmd[4581]:   notice: corosync_mark_unseen_peer_dead: Node -2073712448/vm2 was not seen in the previous transition
Jun  5 15:31:43 vm1 crmd[4581]:   notice: crm_update_peer_state: corosync_mark_unseen_peer_dead: Node vm2[2221254848] - state is now lost (was member)
Jun  5 15:31:43 vm1 crmd[4581]:     info: peer_update_callback: vm2 is now lost (was member)
Jun  5 15:31:43 vm1 crmd[4581]:  warning: match_down_event: No match for shutdown action on 2221254848
Jun  5 15:31:43 vm1 crmd[4581]:   notice: peer_update_callback: Stonith/shutdown of vm2 not matched
Jun  5 15:31:43 vm1 crmd[4581]:     info: crm_update_peer_join: erase_node_from_join: Node vm2[2221254848] - join-1 phase 4 -> 0
Jun  5 15:31:43 vm1 crmd[4581]:     info: abort_transition_graph: peer_update_callback:214 - Triggered transition abort (complete=1) : Node failure
Jun  5 15:31:43 vm1 crmd[4581]:     info: crm_cs_flush: Sent 0 CPG messages  (1 remaining, last=15): Try again
Jun  5 15:31:43 vm1 crmd[4581]:   notice: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Jun  5 15:31:43 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/36, version=0.20.21)
Jun  5 15:31:43 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/37, version=0.20.21)
Jun  5 15:31:43 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/38, version=0.20.22)
Jun  5 15:31:43 vm1 cib[4576]:     info: crm_cs_flush: Sent 0 CPG messages  (1 remaining, last=22): Try again
Jun  5 15:31:43 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/39, version=0.20.22)
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] memb_state_operational_enter entering OPERATIONAL state.
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] memb_state_operational_enter A processor joined or left the membership and a new membership (192.168.101.131:388) was formed.
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2204477632
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[2204477632]: votes: 1, expected: 2 flags: 1
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] decode_flags flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] recalculate_quorum total_votes=1, expected_votes=2
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] calculate_quorum node 2204477632 state=1, votes=1, expected=2
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] calculate_quorum node 2221254848 state=2, votes=1, expected=2
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] get_lowest_node_id lowest node id: -2090489664 us: -2090489664
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2204477632
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2204477632
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[2204477632]: votes: 1, expected: 2 flags: 1
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] decode_flags flags: quorate: Yes Leaving: No WFA Status: No First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] recalculate_quorum total_votes=1, expected_votes=2
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] calculate_quorum node 2204477632 state=1, votes=1, expected=2
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] calculate_quorum node 2221254848 state=2, votes=1, expected=2
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] get_lowest_node_id lowest node id: -2090489664 us: -2090489664
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo got nodeinfo message from cluster node 2204477632
Jun  5 15:31:43 vm1 corosync[4555]:   [VOTEQ ] message_handler_req_exec_votequorum_nodeinfo nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Jun  5 15:31:43 vm1 corosync[4555]:   [SYNC  ] sync_barrier_handler Committing synchronization for corosync configuration map access
Jun  5 15:31:43 vm1 corosync[4555]:   [CMAP  ] cmap_sync_activate Not first sync -> no action
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] downlist_log comparing: sender r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ; members(old:2 left:1)
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] downlist_log chosen downlist: sender r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ; members(old:2 left:1)
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] downlist_master_choose_and_send left_list_entries:1
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] downlist_master_choose_and_send left_list[0] group:attrd\x00, ip:r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) , pid:2472
Jun  5 15:31:43 vm1 attrd[4579]:     info: pcmk_cpg_membership: Left[1.0] attrd.2221254848 
Jun  5 15:31:43 vm1 attrd[4579]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node vm2[2221254848] - corosync-cpg is now offline
Jun  5 15:31:43 vm1 attrd[4579]:     info: pcmk_cpg_membership: Member[1.0] attrd.2204477632 
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] downlist_master_choose_and_send left_list_entries:1
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] downlist_master_choose_and_send left_list[0] group:cib\x00, ip:r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) , pid:2469
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] downlist_master_choose_and_send left_list_entries:1
Jun  5 15:31:43 vm1 cib[4576]:     info: pcmk_cpg_membership: Left[2.0] cib.2221254848 
Jun  5 15:31:43 vm1 cib[4576]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node vm2[2221254848] - corosync-cpg is now offline
Jun  5 15:31:43 vm1 cib[4576]:     info: pcmk_cpg_membership: Member[2.0] cib.2204477632 
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] downlist_master_choose_and_send left_list[0] group:crmd\x00, ip:r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) , pid:2474
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] downlist_master_choose_and_send left_list_entries:1
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] downlist_master_choose_and_send left_list[0] group:pcmk\x00, ip:r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) , pid:2467
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] downlist_master_choose_and_send left_list_entries:1
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] downlist_master_choose_and_send left_list[0] group:stonith-ng\x00, ip:r(0) ip(192.168.101.132) r(1) ip(192.168.102.132) , pid:2470
Jun  5 15:31:43 vm1 stonith-ng[4577]:     info: pcmk_cpg_membership: Left[2.0] stonith-ng.2221254848 
Jun  5 15:31:43 vm1 stonith-ng[4577]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node vm2[2221254848] - corosync-cpg is now offline
Jun  5 15:31:43 vm1 stonith-ng[4577]:     info: crm_cs_flush: Sent 0 CPG messages  (1 remaining, last=1): Try again
Jun  5 15:31:43 vm1 stonith-ng[4577]:     info: pcmk_cpg_membership: Member[2.0] stonith-ng.2204477632 
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_joinlist got joinlist message from node 8365a8c0
Jun  5 15:31:43 vm1 corosync[4555]:   [SYNC  ] sync_barrier_handler Committing synchronization for corosync cluster closed process group service v1.01
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] joinlist_inform_clients joinlist_messages[0] group:crmd\x00, ip:r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) , pid:4581
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] joinlist_inform_clients joinlist_messages[1] group:attrd\x00, ip:r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) , pid:4579
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] joinlist_inform_clients joinlist_messages[2] group:stonith-ng\x00, ip:r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) , pid:4577
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] joinlist_inform_clients joinlist_messages[3] group:cib\x00, ip:r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) , pid:4576
Jun  5 15:31:43 vm1 corosync[4555]:   [CPG   ] joinlist_inform_clients joinlist_messages[4] group:pcmk\x00, ip:r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) , pid:4574
Jun  5 15:31:43 vm1 corosync[4555]:   [MAIN  ] corosync_sync_completed Completed service synchronization, ready to provide service.
Jun  5 15:31:43 vm1 corosync[4555]:   [TOTEM ] totempg_waiting_trans_ack_cb waiting_trans_ack changed to 0
Jun  5 15:31:44 vm1 crmd[4581]:     info: pcmk_cpg_membership: Left[1.0] crmd.2221254848 
Jun  5 15:31:44 vm1 crmd[4581]:     info: crm_update_peer_proc: pcmk_cpg_membership: Node vm2[2221254848] - corosync-cpg is now offline
Jun  5 15:31:44 vm1 crmd[4581]:     info: peer_update_callback: Client vm2/peer now has status [offline] (DC=true)
Jun  5 15:31:44 vm1 crmd[4581]:  warning: match_down_event: No match for shutdown action on 2221254848
Jun  5 15:31:44 vm1 crmd[4581]:   notice: peer_update_callback: Stonith/shutdown of vm2 not matched
Jun  5 15:31:44 vm1 crmd[4581]:     info: abort_transition_graph: peer_update_callback:214 - Triggered transition abort (complete=1) : Node failure
Jun  5 15:31:44 vm1 crmd[4581]:     info: pcmk_cpg_membership: Member[1.0] crmd.2204477632 
Jun  5 15:31:44 vm1 crmd[4581]:     info: do_state_transition: State transition S_POLICY_ENGINE -> S_INTEGRATION [ input=I_NODE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:31:44 vm1 crmd[4581]:     info: do_dc_join_offer_one: An unknown node joined - (re-)offer to any unconfirmed nodes
Jun  5 15:31:44 vm1 crmd[4581]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:31:44 vm1 crmd[4581]:     info: join_make_offer: Skipping vm1: already known 4
Jun  5 15:31:44 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/40, version=0.20.22)
Jun  5 15:31:44 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/41, version=0.20.23)
Jun  5 15:31:44 vm1 crmd[4581]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:31:44 vm1 crmd[4581]:     info: crmd_join_phase_log: join-1: vm1=confirmed
Jun  5 15:31:44 vm1 crmd[4581]:     info: crmd_join_phase_log: join-1: vm2=none
Jun  5 15:31:44 vm1 crmd[4581]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:31:44 vm1 crmd[4581]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:31:44 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:31:44 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:31:44 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:31:44 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/42, version=0.20.23)
Jun  5 15:31:44 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/43, version=0.20.24)
Jun  5 15:31:44 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/44, version=0.20.24)
Jun  5 15:31:44 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/45, version=0.20.24)
Jun  5 15:31:44 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: OK (rc=0, origin=local/attrd/10, version=0.20.24)
Jun  5 15:31:44 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/11, version=0.20.24)
Jun  5 15:31:44 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: OK (rc=0, origin=local/attrd/12, version=0.20.24)
Jun  5 15:31:44 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/13, version=0.20.24)
Jun  5 15:31:45 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/46, version=0.20.24)
Jun  5 15:31:45 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:31:45 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:31:45 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:31:45 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:31:45 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:31:45 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:31:45 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:31:45 vm1 pengine[4580]:     info: short_print:      Started: [ vm1 vm2 ]
Jun  5 15:31:45 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Started vm1 
Jun  5 15:31:45 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:31:45 vm1 pengine[4580]:     info: short_print:      Started: [ vm1 vm2 ]
Jun  5 15:31:45 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:31:45 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:31:45 vm1 pengine[4580]:  warning: custom_action: Action st1:1_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:31:45 vm1 pengine[4580]:  warning: custom_action: Action st1:1_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:31:45 vm1 pengine[4580]:  warning: custom_action: Action prmPing:1_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:31:45 vm1 pengine[4580]:  warning: custom_action: Action prmPing:1_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:31:45 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:31:45 vm1 pengine[4580]:     info: native_stop_constraints: st1:1_stop_0 is implicit after vm2 is fenced
Jun  5 15:31:45 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:1_stop_0 is implicit after vm2 is fenced
Jun  5 15:31:45 vm1 pengine[4580]:     info: LogActions: Leave   st1:0#011(Started vm1)
Jun  5 15:31:45 vm1 pengine[4580]:   notice: LogActions: Stop    st1:1#011(vm2)
Jun  5 15:31:45 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy#011(Started vm1)
Jun  5 15:31:45 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:0#011(Started vm1)
Jun  5 15:31:45 vm1 pengine[4580]:   notice: LogActions: Stop    prmPing:1#011(vm2)
Jun  5 15:31:45 vm1 crmd[4581]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:31:45 vm1 crmd[4581]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:31:45 vm1 crmd[4581]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:31:45 vm1 crmd[4581]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:31:45 vm1 crmd[4581]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:31:45 vm1 crmd[4581]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:31:45 vm1 crmd[4581]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:31:45 vm1 crmd[4581]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:31:45 vm1 crmd[4581]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:31:45 vm1 crmd[4581]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
Jun  5 15:31:45 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 3: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Jun  5 15:31:46 vm1 Dummy(prmDummy)[4742]: DEBUG: prmDummy monitor : 0
Jun  5 15:31:47 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:31:47 vm1 lrmd[4578]:     info: cancel_recurring_action: Cancelling operation prmDummy_monitor_10000
Jun  5 15:31:47 vm1 lrmd[4578]:  warning: qb_ipcs_event_sendv: new_event_notification (4578-4581-6): Bad file descriptor (9)
Jun  5 15:31:47 vm1 lrmd[4578]:  warning: send_client_notify: Notification of client crmd/60e37d11-d61c-4555-9514-decd1c943ad2 failed
Jun  5 15:31:47 vm1 lrmd[4578]:     info: services_action_cancel: Cancelling op: prmPing_monitor_10000 will occur once operation completes
Jun  5 15:31:47 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 1 events
Jun  5 15:31:47 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:31:47 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:31:47 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:31:47 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4581 (crmd) dumped core
Jun  5 15:31:47 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4581, core=1)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4581-29)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4581-29) state:2
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:31:47 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fe78e40
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4581-29-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4581-29-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4581-29-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4581-30)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4581-30) state:2
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:31:47 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fe7a700
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4581-30-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4581-30-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4581-30-header
Jun  5 15:31:47 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:31:47 vm1 pacemakerd[4574]:   notice: pcmk_process_exit: Respawning failed child process: crmd
Jun  5 15:31:47 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:31:47 vm1 pacemakerd[4574]:     info: start_child: Forked child 4753 for process crmd
Jun  5 15:31:47 vm1 attrd_updater[4754]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:31:47 vm1 attrd_updater[4754]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:31:47 vm1 crmd[4753]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:31:47 vm1 crmd[4753]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:31:47 vm1 crmd[4753]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:31:47 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=0 gid=0 pid=4754 id=074f15d8-2ddb-4099-95f7-db99a1864b8b
Jun  5 15:31:47 vm1 crmd[4753]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:31:47 vm1 crmd[4753]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:31:47 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:31:47 vm1 crmd[4753]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:31:47 vm1 crmd[4753]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:31:47 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11c2d40 for uid=496 gid=492 pid=4753 id=6e78c00b-1d46-4f45-87d5-073eeccbbc5f
Jun  5 15:31:47 vm1 crmd[4753]:     info: do_cib_control: CIB connection established
Jun  5 15:31:47 vm1 crmd[4753]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:31:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.24)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4753-29)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4753]
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:31:47 vm1 corosync[4555]:   [CPG   ] cpg_lib_init_fn lib_init_fn: conn=0x7fab5fe79ab0, cpd=0x7fab5fd796b4
Jun  5 15:31:47 vm1 crmd[4753]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:31:47 vm1 crmd[4753]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4753-30)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4753]
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:31:47 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:31:47 vm1 crmd[4753]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:31:47 vm1 crmd[4753]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:31:47 vm1 crmd[4753]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:31:47 vm1 crmd[4753]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:31:47 vm1 crmd[4753]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:31:47 vm1 crmd[4753]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:31:47 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procjoin got procjoin message from cluster node -2090489664 (r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ) for pid 4753
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4753-30)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4753-30) state:2
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:31:47 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78d10
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4753-30-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4753-30-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4753-30-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4753-30)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4753]
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:31:47 vm1 corosync[4555]:   [QUORUM] quorum_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:31:47 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_gettype got quorum_type request on 0x7fab5fe78d10
Jun  5 15:31:47 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_getquorate got quorate request on 0x7fab5fe78d10
Jun  5 15:31:47 vm1 crmd[4753]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:31:47 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart got trackstart request on 0x7fab5fe78d10
Jun  5 15:31:47 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart sending initial status to 0x7fab5fe78d10
Jun  5 15:31:47 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to 0x7fab5fe78d10, length = 52
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4753-31)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4753]
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:31:47 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7b6d0
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4753-31)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4753-31) state:2
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:31:47 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7b6d0
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4753-31-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4753-31-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4753-31-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4753-31)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4753]
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:31:47 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7cce0
Jun  5 15:31:47 vm1 crmd[4753]:     info: do_ha_control: Connected to the cluster
Jun  5 15:31:47 vm1 crmd[4753]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:31:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.24)
Jun  5 15:31:47 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x2125d50 for uid=496 gid=492 pid=4753 id=cdc36ca0-0c08-458f-8ab1-4b0c74bb2c9c
Jun  5 15:31:47 vm1 crmd[4753]:     info: do_lrm_control: LRM connection established
Jun  5 15:31:47 vm1 crmd[4753]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:31:47 vm1 crmd[4753]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:31:47 vm1 crmd[4753]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:31:47 vm1 crmd[4753]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:31:47 vm1 crmd[4753]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:31:47 vm1 crmd[4753]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:31:47 vm1 crmd[4753]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:31:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.24)
Jun  5 15:31:47 vm1 crmd[4753]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:31:47 vm1 crmd[4753]:   notice: do_started: The local CRM is operational
Jun  5 15:31:47 vm1 crmd[4753]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:31:47 vm1 crmd[4753]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:31:47 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:31:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.24)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4753-31)
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4753-31) state:2
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:31:47 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7cce0
Jun  5 15:31:47 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4753-31-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4753-31-header
Jun  5 15:31:47 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4753-31-header
Jun  5 15:31:49 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x1867d30 for uid=496 gid=492 pid=4753 id=acb69de2-b64f-4628-83ea-ddcd0426d715
Jun  5 15:31:49 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4753: OK (0)
Jun  5 15:31:49 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4753: OK (0)
Jun  5 15:31:49 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4753: OK (0)
Jun  5 15:32:08 vm1 crmd[4753]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jun  5 15:32:08 vm1 crmd[4753]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun  5 15:32:08 vm1 crmd[4753]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jun  5 15:32:08 vm1 crmd[4753]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:32:08 vm1 crmd[4753]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:32:08 vm1 crmd[4753]:     info: do_te_control: Registering TE UUID: bd2d039e-b794-4db0-b9fd-01b45754ecc0
Jun  5 15:32:08 vm1 crmd[4753]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:32:08 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4753 id=dd5e39cc-c31a-41d8-9a11-e10a04a602d9
Jun  5 15:32:08 vm1 crmd[4753]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.24)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.24)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.24)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.24)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.24)
Jun  5 15:32:08 vm1 crmd[4753]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:32:08 vm1 crmd[4753]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:32:08 vm1 crmd[4753]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:32:08 vm1 crmd[4753]:     info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun  5 15:32:08 vm1 crmd[4753]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.24)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.24)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.24)
Jun  5 15:32:08 vm1 crmd[4753]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:32:08 vm1 crmd[4753]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:32:08 vm1 crmd[4753]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:32:08 vm1 crmd[4753]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:32:08 vm1 crmd[4753]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.24)
Jun  5 15:32:08 vm1 crmd[4753]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:32:08 vm1 crmd[4753]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:32:08 vm1 crmd[4753]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:32:08 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4753 id=aa39ede4-8337-4d8f-820c-b396503cc83e
Jun  5 15:32:08 vm1 crmd[4753]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:32:08 vm1 crmd[4753]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:32:08 vm1 crmd[4753]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.24)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.20.25)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/17, version=0.20.26)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.20.27)
Jun  5 15:32:08 vm1 crmd[4753]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:32:08 vm1 crmd[4753]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:32:08 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:32:08 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:32:08 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/14, version=0.20.27)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.20.27)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/20, version=0.20.27)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/21, version=0.20.27)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/22, version=0.20.27)
Jun  5 15:32:08 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:32:08 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:32:08 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:32:08 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:32:08 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:32:08 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:32:08 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:32:08 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:32:08 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:32:08 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Stopped 
Jun  5 15:32:08 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:32:08 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:32:08 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:32:08 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:32:08 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:32:08 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:32:08 vm1 pengine[4580]:  warning: custom_action: Action st1:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:32:08 vm1 pengine[4580]:  warning: custom_action: Action prmPing:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:32:08 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:32:08 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:32:08 vm1 pengine[4580]:     info: native_stop_constraints: st1:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:32:08 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:32:08 vm1 pengine[4580]:   notice: LogActions: Move    st1:0#011(Started vm2 -> vm1)
Jun  5 15:32:08 vm1 pengine[4580]:     info: LogActions: Leave   st1:1#011(Stopped)
Jun  5 15:32:08 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy#011(Stopped)
Jun  5 15:32:08 vm1 pengine[4580]:   notice: LogActions: Move    prmPing:0#011(Started vm2 -> vm1)
Jun  5 15:32:08 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1#011(Stopped)
Jun  5 15:32:08 vm1 crmd[4753]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:32:08 vm1 crmd[4753]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:32:08 vm1 crmd[4753]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:32:08 vm1 crmd[4753]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:32:08 vm1 crmd[4753]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:32:08 vm1 crmd[4753]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:32:08 vm1 crmd[4753]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:32:08 vm1 crmd[4753]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:32:08 vm1 crmd[4753]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:32:08 vm1 crmd[4753]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/15, version=0.20.27)
Jun  5 15:32:08 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 4: /var/lib/pacemaker/pengine/pe-warn-1.bz2
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/16, version=0.20.28)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/17, version=0.20.28)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/18, version=0.20.28)
Jun  5 15:32:08 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/19, version=0.20.29)
Jun  5 15:32:09 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:09 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:09 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4753 (crmd) dumped core
Jun  5 15:32:09 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4753, core=1)
Jun  5 15:32:09 vm1 pacemakerd[4574]:   notice: pcmk_process_exit: Respawning failed child process: crmd
Jun  5 15:32:09 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:32:09 vm1 pacemakerd[4574]:     info: start_child: Forked child 4764 for process crmd
Jun  5 15:32:09 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:09 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:09 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4753-29)
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4753-29) state:2
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:09 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fe79ab0
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4753-29-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4753-29-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4753-29-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4753-30)
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4753-30) state:2
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:09 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fe78d10
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4753-30-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4753-30-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4753-30-header
Jun  5 15:32:09 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:32:09 vm1 crmd[4764]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:32:09 vm1 crmd[4764]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:32:09 vm1 crmd[4764]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:32:09 vm1 crmd[4764]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:32:09 vm1 crmd[4764]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:32:09 vm1 crmd[4764]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:32:09 vm1 crmd[4764]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:32:09 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11c2d40 for uid=496 gid=492 pid=4764 id=fc835884-baac-4b7b-856b-ae7ed5208000
Jun  5 15:32:09 vm1 crmd[4764]:     info: do_cib_control: CIB connection established
Jun  5 15:32:09 vm1 crmd[4764]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4764-29)
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4764]
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:09 vm1 corosync[4555]:   [CPG   ] cpg_lib_init_fn lib_init_fn: conn=0x7fab5fe79ab0, cpd=0x7fab5fd78584
Jun  5 15:32:09 vm1 crmd[4764]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:32:09 vm1 crmd[4764]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4764-30)
Jun  5 15:32:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.29)
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4764]
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:09 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:32:09 vm1 crmd[4764]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:32:09 vm1 crmd[4764]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:32:09 vm1 crmd[4764]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:32:09 vm1 crmd[4764]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:32:09 vm1 crmd[4764]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:32:09 vm1 crmd[4764]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:32:09 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procjoin got procjoin message from cluster node -2090489664 (r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ) for pid 4764
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4764-30)
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4764-30) state:2
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:09 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78d10
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4764-30-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4764-30-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4764-30-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4764-30)
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4764]
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:09 vm1 corosync[4555]:   [QUORUM] quorum_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:32:09 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_gettype got quorum_type request on 0x7fab5fe78d10
Jun  5 15:32:09 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_getquorate got quorate request on 0x7fab5fe78d10
Jun  5 15:32:09 vm1 crmd[4764]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:32:09 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart got trackstart request on 0x7fab5fe78d10
Jun  5 15:32:09 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart sending initial status to 0x7fab5fe78d10
Jun  5 15:32:09 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to 0x7fab5fe78d10, length = 52
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4764-31)
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4764]
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:09 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7b420
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4764-31)
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4764-31) state:2
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:09 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7b420
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4764-31-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4764-31-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4764-31-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4764-31)
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4764]
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:09 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7cc20
Jun  5 15:32:09 vm1 crmd[4764]:     info: do_ha_control: Connected to the cluster
Jun  5 15:32:09 vm1 crmd[4764]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:32:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.29)
Jun  5 15:32:09 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x2125d50 for uid=496 gid=492 pid=4764 id=36ef47b5-a1bf-4e03-a664-095f1bdaa116
Jun  5 15:32:09 vm1 crmd[4764]:     info: do_lrm_control: LRM connection established
Jun  5 15:32:09 vm1 crmd[4764]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:32:09 vm1 crmd[4764]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:32:09 vm1 crmd[4764]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:32:09 vm1 crmd[4764]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:32:09 vm1 crmd[4764]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:32:09 vm1 crmd[4764]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:32:09 vm1 crmd[4764]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:32:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.29)
Jun  5 15:32:09 vm1 crmd[4764]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:32:09 vm1 crmd[4764]:   notice: do_started: The local CRM is operational
Jun  5 15:32:09 vm1 crmd[4764]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:32:09 vm1 crmd[4764]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:32:09 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:32:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.29)
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4764-31)
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4764-31) state:2
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:09 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7cc20
Jun  5 15:32:09 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4764-31-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4764-31-header
Jun  5 15:32:09 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4764-31-header
Jun  5 15:32:11 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x1867d30 for uid=496 gid=492 pid=4764 id=f9a1d133-5ae9-4823-9f19-e5790298b198
Jun  5 15:32:11 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4764: OK (0)
Jun  5 15:32:11 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4764: OK (0)
Jun  5 15:32:11 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4764: OK (0)
Jun  5 15:32:30 vm1 crmd[4764]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jun  5 15:32:30 vm1 crmd[4764]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun  5 15:32:30 vm1 crmd[4764]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jun  5 15:32:30 vm1 crmd[4764]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:32:30 vm1 crmd[4764]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:32:30 vm1 crmd[4764]:     info: do_te_control: Registering TE UUID: 306126ec-56ff-4022-8c61-dad9f4b5dfbf
Jun  5 15:32:30 vm1 crmd[4764]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:32:30 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4764 id=6f88a08a-cef5-4f18-9ec4-e526b64ac00c
Jun  5 15:32:30 vm1 crmd[4764]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.29)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.29)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.29)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.29)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.29)
Jun  5 15:32:30 vm1 crmd[4764]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:32:30 vm1 crmd[4764]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:32:30 vm1 crmd[4764]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:32:30 vm1 crmd[4764]:     info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun  5 15:32:30 vm1 crmd[4764]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.29)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.29)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.29)
Jun  5 15:32:30 vm1 crmd[4764]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:32:30 vm1 crmd[4764]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:32:30 vm1 crmd[4764]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:32:30 vm1 crmd[4764]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:32:30 vm1 crmd[4764]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.29)
Jun  5 15:32:30 vm1 crmd[4764]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:32:30 vm1 crmd[4764]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:32:30 vm1 crmd[4764]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:32:30 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4764 id=41c1029f-04a7-437d-b71d-e490a9fd005b
Jun  5 15:32:30 vm1 crmd[4764]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:32:30 vm1 crmd[4764]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:32:30 vm1 crmd[4764]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.29)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.20.30)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/17, version=0.20.31)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.20.32)
Jun  5 15:32:30 vm1 crmd[4764]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:32:30 vm1 crmd[4764]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:32:30 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:32:30 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:32:30 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/20, version=0.20.32)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.20.32)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/20, version=0.20.32)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/21, version=0.20.32)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/22, version=0.20.32)
Jun  5 15:32:30 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:32:30 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:32:30 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:32:30 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:32:30 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:32:30 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:32:30 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:32:30 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:32:30 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:32:30 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Stopped 
Jun  5 15:32:30 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:32:30 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:32:30 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:32:30 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:32:30 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:32:30 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:32:30 vm1 pengine[4580]:  warning: custom_action: Action st1:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:32:30 vm1 pengine[4580]:  warning: custom_action: Action prmPing:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:32:30 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:32:30 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:32:30 vm1 pengine[4580]:     info: native_stop_constraints: st1:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:32:30 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:32:30 vm1 pengine[4580]:   notice: LogActions: Move    st1:0#011(Started vm2 -> vm1)
Jun  5 15:32:30 vm1 pengine[4580]:     info: LogActions: Leave   st1:1#011(Stopped)
Jun  5 15:32:30 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy#011(Stopped)
Jun  5 15:32:30 vm1 pengine[4580]:   notice: LogActions: Move    prmPing:0#011(Started vm2 -> vm1)
Jun  5 15:32:30 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1#011(Stopped)
Jun  5 15:32:30 vm1 crmd[4764]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:32:30 vm1 crmd[4764]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:32:30 vm1 crmd[4764]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:32:30 vm1 crmd[4764]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:32:30 vm1 crmd[4764]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:32:30 vm1 crmd[4764]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:32:30 vm1 crmd[4764]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:32:30 vm1 crmd[4764]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:32:30 vm1 crmd[4764]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:32:30 vm1 crmd[4764]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/21, version=0.20.32)
Jun  5 15:32:30 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 5: /var/lib/pacemaker/pengine/pe-warn-2.bz2
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/22, version=0.20.33)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/23, version=0.20.33)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/24, version=0.20.33)
Jun  5 15:32:30 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/25, version=0.20.34)
Jun  5 15:32:32 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:32 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:32 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:32 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:32 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4764 (crmd) dumped core
Jun  5 15:32:32 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4764, core=1)
Jun  5 15:32:32 vm1 pacemakerd[4574]:   notice: pcmk_process_exit: Respawning failed child process: crmd
Jun  5 15:32:32 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:32:32 vm1 pacemakerd[4574]:     info: start_child: Forked child 4770 for process crmd
Jun  5 15:32:32 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4764-29)
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4764-29) state:2
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:32 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fe79ab0
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4764-29-header
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4764-29-header
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4764-29-header
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4764-30)
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4764-30) state:2
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:32 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fe78d10
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4764-30-header
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4764-30-header
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4764-30-header
Jun  5 15:32:32 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:32:32 vm1 crmd[4770]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:32:32 vm1 crmd[4770]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:32:32 vm1 crmd[4770]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:32:32 vm1 crmd[4770]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:32:32 vm1 crmd[4770]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:32:32 vm1 crmd[4770]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:32:32 vm1 crmd[4770]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:32:32 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11c2d40 for uid=496 gid=492 pid=4770 id=df266572-f00f-4fee-bc72-68cbbafa7d64
Jun  5 15:32:32 vm1 crmd[4770]:     info: do_cib_control: CIB connection established
Jun  5 15:32:32 vm1 crmd[4770]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4770-29)
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4770]
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:32 vm1 corosync[4555]:   [CPG   ] cpg_lib_init_fn lib_init_fn: conn=0x7fab5fe79ab0, cpd=0x7fab5fe79844
Jun  5 15:32:32 vm1 crmd[4770]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:32:32 vm1 crmd[4770]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:32:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.34)
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4770-30)
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4770]
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:32 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7a530
Jun  5 15:32:32 vm1 crmd[4770]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:32:32 vm1 crmd[4770]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:32:32 vm1 crmd[4770]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:32:32 vm1 crmd[4770]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:32:32 vm1 crmd[4770]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:32:32 vm1 crmd[4770]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4770-30)
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4770-30) state:2
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:32 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7a530
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4770-30-header
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4770-30-header
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4770-30-header
Jun  5 15:32:32 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procjoin got procjoin message from cluster node -2090489664 (r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ) for pid 4770
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4770-30)
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4770]
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:32 vm1 corosync[4555]:   [QUORUM] quorum_lib_init_fn lib_init_fn: conn=0x7fab5fe79210
Jun  5 15:32:32 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_gettype got quorum_type request on 0x7fab5fe79210
Jun  5 15:32:32 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_getquorate got quorate request on 0x7fab5fe79210
Jun  5 15:32:32 vm1 crmd[4770]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:32:32 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart got trackstart request on 0x7fab5fe79210
Jun  5 15:32:32 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart sending initial status to 0x7fab5fe79210
Jun  5 15:32:32 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to 0x7fab5fe79210, length = 52
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4770-31)
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4770]
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:32 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7b730
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4770-31)
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4770-31) state:2
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:32 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7b730
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4770-31-header
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4770-31-header
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4770-31-header
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4770-31)
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4770]
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:32 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7d060
Jun  5 15:32:32 vm1 crmd[4770]:     info: do_ha_control: Connected to the cluster
Jun  5 15:32:32 vm1 crmd[4770]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:32:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.34)
Jun  5 15:32:32 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x2125d50 for uid=496 gid=492 pid=4770 id=6aaf45fb-b37c-4d4c-a164-3e60432041d8
Jun  5 15:32:32 vm1 crmd[4770]:     info: do_lrm_control: LRM connection established
Jun  5 15:32:32 vm1 crmd[4770]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:32:32 vm1 crmd[4770]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:32:32 vm1 crmd[4770]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:32:32 vm1 crmd[4770]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:32:32 vm1 crmd[4770]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:32:32 vm1 crmd[4770]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:32:32 vm1 crmd[4770]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:32:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.34)
Jun  5 15:32:32 vm1 crmd[4770]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:32:32 vm1 crmd[4770]:   notice: do_started: The local CRM is operational
Jun  5 15:32:32 vm1 crmd[4770]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:32:32 vm1 crmd[4770]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:32:32 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:32:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.34)
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4770-31)
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4770-31) state:2
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:32 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7d060
Jun  5 15:32:32 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4770-31-header
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4770-31-header
Jun  5 15:32:32 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4770-31-header
Jun  5 15:32:34 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x1867d30 for uid=496 gid=492 pid=4770 id=de5bae2a-f656-4654-ade7-c7e8ae82dabd
Jun  5 15:32:34 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4770: OK (0)
Jun  5 15:32:34 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4770: OK (0)
Jun  5 15:32:34 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4770: OK (0)
Jun  5 15:32:53 vm1 crmd[4770]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jun  5 15:32:53 vm1 crmd[4770]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun  5 15:32:53 vm1 crmd[4770]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jun  5 15:32:53 vm1 crmd[4770]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:32:53 vm1 crmd[4770]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:32:53 vm1 crmd[4770]:     info: do_te_control: Registering TE UUID: 7f5f9503-4dc3-4c2a-bad4-b14d336b379f
Jun  5 15:32:53 vm1 crmd[4770]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:32:53 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4770 id=c5c92ba7-ff66-4c6c-a619-960101500b8e
Jun  5 15:32:53 vm1 crmd[4770]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.34)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.34)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.34)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.34)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.34)
Jun  5 15:32:53 vm1 crmd[4770]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:32:53 vm1 crmd[4770]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:32:53 vm1 crmd[4770]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:32:53 vm1 crmd[4770]:     info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun  5 15:32:53 vm1 crmd[4770]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.34)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.34)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.34)
Jun  5 15:32:53 vm1 crmd[4770]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:32:53 vm1 crmd[4770]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:32:53 vm1 crmd[4770]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:32:53 vm1 crmd[4770]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:32:53 vm1 crmd[4770]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.34)
Jun  5 15:32:53 vm1 crmd[4770]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:32:53 vm1 crmd[4770]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:32:53 vm1 crmd[4770]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.34)
Jun  5 15:32:53 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4770 id=655d52d7-3e7b-4b58-aba0-6a1a1de8fc86
Jun  5 15:32:53 vm1 crmd[4770]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:32:53 vm1 crmd[4770]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:32:53 vm1 crmd[4770]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.20.35)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/17, version=0.20.36)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.20.37)
Jun  5 15:32:53 vm1 crmd[4770]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:32:53 vm1 crmd[4770]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:32:53 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:32:53 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:32:53 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/26, version=0.20.37)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.20.37)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/20, version=0.20.37)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/21, version=0.20.37)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/22, version=0.20.37)
Jun  5 15:32:53 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:32:53 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:32:53 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:32:53 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:32:53 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:32:53 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:32:53 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:32:53 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:32:53 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:32:53 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Stopped 
Jun  5 15:32:53 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:32:53 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:32:53 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:32:53 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:32:53 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:32:53 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:32:53 vm1 pengine[4580]:  warning: custom_action: Action st1:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:32:53 vm1 pengine[4580]:  warning: custom_action: Action prmPing:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:32:53 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:32:53 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:32:53 vm1 pengine[4580]:     info: native_stop_constraints: st1:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:32:53 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:32:53 vm1 pengine[4580]:   notice: LogActions: Move    st1:0#011(Started vm2 -> vm1)
Jun  5 15:32:53 vm1 pengine[4580]:     info: LogActions: Leave   st1:1#011(Stopped)
Jun  5 15:32:53 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy#011(Stopped)
Jun  5 15:32:53 vm1 pengine[4580]:   notice: LogActions: Move    prmPing:0#011(Started vm2 -> vm1)
Jun  5 15:32:53 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1#011(Stopped)
Jun  5 15:32:53 vm1 crmd[4770]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:32:53 vm1 crmd[4770]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:32:53 vm1 crmd[4770]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:32:53 vm1 crmd[4770]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:32:53 vm1 crmd[4770]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:32:53 vm1 crmd[4770]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:32:53 vm1 crmd[4770]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:32:53 vm1 crmd[4770]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:32:53 vm1 crmd[4770]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:32:53 vm1 crmd[4770]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
Jun  5 15:32:53 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 6: /var/lib/pacemaker/pengine/pe-warn-3.bz2
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/27, version=0.20.37)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/28, version=0.20.38)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/29, version=0.20.38)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/30, version=0.20.38)
Jun  5 15:32:53 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/31, version=0.20.39)
Jun  5 15:32:55 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:55 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:55 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4770 (crmd) dumped core
Jun  5 15:32:55 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4770, core=1)
Jun  5 15:32:55 vm1 pacemakerd[4574]:   notice: pcmk_process_exit: Respawning failed child process: crmd
Jun  5 15:32:55 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:32:55 vm1 pacemakerd[4574]:     info: start_child: Forked child 4774 for process crmd
Jun  5 15:32:55 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:55 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:55 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4770-29)
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4770-29) state:2
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:55 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fe79ab0
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4770-29-header
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4770-29-header
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4770-29-header
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4770-30)
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4770-30) state:2
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:55 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fe79210
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4770-30-header
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4770-30-header
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4770-30-header
Jun  5 15:32:55 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:32:55 vm1 crmd[4774]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:32:55 vm1 crmd[4774]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:32:55 vm1 crmd[4774]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:32:55 vm1 crmd[4774]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:32:55 vm1 crmd[4774]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:32:55 vm1 crmd[4774]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:32:55 vm1 crmd[4774]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:32:55 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11c2d40 for uid=496 gid=492 pid=4774 id=3eb3c6ef-1b91-4931-8808-602902f8cafd
Jun  5 15:32:55 vm1 crmd[4774]:     info: do_cib_control: CIB connection established
Jun  5 15:32:55 vm1 crmd[4774]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4774-29)
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4774]
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:55 vm1 corosync[4555]:   [CPG   ] cpg_lib_init_fn lib_init_fn: conn=0x7fab5fe79ab0, cpd=0x7fab5fe7aac4
Jun  5 15:32:55 vm1 crmd[4774]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:32:55 vm1 crmd[4774]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4774-30)
Jun  5 15:32:55 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.39)
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4774]
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:55 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7d4d0
Jun  5 15:32:55 vm1 crmd[4774]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:32:55 vm1 crmd[4774]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:32:55 vm1 crmd[4774]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:32:55 vm1 crmd[4774]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:32:55 vm1 crmd[4774]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:32:55 vm1 crmd[4774]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4774-30)
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4774-30) state:2
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:55 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7d4d0
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4774-30-header
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4774-30-header
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4774-30-header
Jun  5 15:32:55 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procjoin got procjoin message from cluster node -2090489664 (r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ) for pid 4774
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4774-30)
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4774]
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:55 vm1 corosync[4555]:   [QUORUM] quorum_lib_init_fn lib_init_fn: conn=0x7fab5fe7d4d0
Jun  5 15:32:55 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_gettype got quorum_type request on 0x7fab5fe7d4d0
Jun  5 15:32:55 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_getquorate got quorate request on 0x7fab5fe7d4d0
Jun  5 15:32:55 vm1 crmd[4774]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:32:55 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart got trackstart request on 0x7fab5fe7d4d0
Jun  5 15:32:55 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart sending initial status to 0x7fab5fe7d4d0
Jun  5 15:32:55 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to 0x7fab5fe7d4d0, length = 52
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4774-31)
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4774]
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:55 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe79050
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4774-31)
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4774-31) state:2
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:55 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe79050
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4774-31-header
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4774-31-header
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4774-31-header
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4774-31)
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4774]
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:32:55 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe79050
Jun  5 15:32:55 vm1 crmd[4774]:     info: do_ha_control: Connected to the cluster
Jun  5 15:32:55 vm1 crmd[4774]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:32:55 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.39)
Jun  5 15:32:55 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x2125d50 for uid=496 gid=492 pid=4774 id=c8df3f4d-eb3d-4345-b66c-813149f93c60
Jun  5 15:32:55 vm1 crmd[4774]:     info: do_lrm_control: LRM connection established
Jun  5 15:32:55 vm1 crmd[4774]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:32:55 vm1 crmd[4774]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:32:55 vm1 crmd[4774]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:32:55 vm1 crmd[4774]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:32:55 vm1 crmd[4774]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:32:55 vm1 crmd[4774]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:32:55 vm1 crmd[4774]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:32:55 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.39)
Jun  5 15:32:55 vm1 crmd[4774]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:32:55 vm1 crmd[4774]:   notice: do_started: The local CRM is operational
Jun  5 15:32:55 vm1 crmd[4774]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:32:55 vm1 crmd[4774]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:32:55 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:32:55 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.39)
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4774-31)
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4774-31) state:2
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:32:55 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe79050
Jun  5 15:32:55 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4774-31-header
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4774-31-header
Jun  5 15:32:55 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4774-31-header
Jun  5 15:32:57 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x1867d30 for uid=496 gid=492 pid=4774 id=1067fc62-321f-42cb-aad6-54b98e9c7a80
Jun  5 15:32:57 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4774: OK (0)
Jun  5 15:32:57 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4774: OK (0)
Jun  5 15:32:57 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4774: OK (0)
Jun  5 15:33:16 vm1 crmd[4774]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jun  5 15:33:16 vm1 crmd[4774]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun  5 15:33:16 vm1 crmd[4774]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jun  5 15:33:16 vm1 crmd[4774]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:33:16 vm1 crmd[4774]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:33:16 vm1 crmd[4774]:     info: do_te_control: Registering TE UUID: f2868110-1b81-48ae-9ca8-3c44ba1db76a
Jun  5 15:33:16 vm1 crmd[4774]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:33:16 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4774 id=878e07b6-bdb3-4b1b-b0de-3d07ce93839b
Jun  5 15:33:16 vm1 crmd[4774]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.39)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.39)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.39)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.39)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.39)
Jun  5 15:33:16 vm1 crmd[4774]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:33:16 vm1 crmd[4774]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:33:16 vm1 crmd[4774]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:33:16 vm1 crmd[4774]:     info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun  5 15:33:16 vm1 crmd[4774]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.39)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.39)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.39)
Jun  5 15:33:16 vm1 crmd[4774]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:33:16 vm1 crmd[4774]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:33:16 vm1 crmd[4774]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:33:16 vm1 crmd[4774]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:33:16 vm1 crmd[4774]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.39)
Jun  5 15:33:16 vm1 crmd[4774]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:33:16 vm1 crmd[4774]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:33:16 vm1 crmd[4774]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:33:16 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4774 id=8ff3bb9d-453a-4e41-98da-231566f458c9
Jun  5 15:33:16 vm1 crmd[4774]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:33:16 vm1 crmd[4774]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:33:16 vm1 crmd[4774]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.39)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.20.40)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/17, version=0.20.41)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.20.42)
Jun  5 15:33:16 vm1 crmd[4774]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:33:16 vm1 crmd[4774]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:33:16 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:33:16 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:33:16 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/32, version=0.20.42)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.20.42)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/20, version=0.20.42)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/21, version=0.20.42)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/22, version=0.20.42)
Jun  5 15:33:16 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:33:16 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:33:16 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:33:16 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:33:16 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:33:16 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:33:16 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:33:16 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:33:16 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:33:16 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Stopped 
Jun  5 15:33:16 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:33:16 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:33:16 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:33:16 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:33:16 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:33:16 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:33:16 vm1 pengine[4580]:  warning: custom_action: Action st1:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:33:16 vm1 pengine[4580]:  warning: custom_action: Action prmPing:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:33:16 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:33:16 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:33:16 vm1 pengine[4580]:     info: native_stop_constraints: st1:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:33:16 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:33:16 vm1 pengine[4580]:   notice: LogActions: Move    st1:0#011(Started vm2 -> vm1)
Jun  5 15:33:16 vm1 pengine[4580]:     info: LogActions: Leave   st1:1#011(Stopped)
Jun  5 15:33:16 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy#011(Stopped)
Jun  5 15:33:16 vm1 pengine[4580]:   notice: LogActions: Move    prmPing:0#011(Started vm2 -> vm1)
Jun  5 15:33:16 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1#011(Stopped)
Jun  5 15:33:16 vm1 crmd[4774]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:33:16 vm1 crmd[4774]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:33:16 vm1 crmd[4774]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:33:16 vm1 crmd[4774]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:33:16 vm1 crmd[4774]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:33:16 vm1 crmd[4774]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:33:16 vm1 crmd[4774]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:33:16 vm1 crmd[4774]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:33:16 vm1 crmd[4774]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:33:16 vm1 crmd[4774]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
Jun  5 15:33:16 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 7: /var/lib/pacemaker/pengine/pe-warn-4.bz2
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/33, version=0.20.42)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/34, version=0.20.43)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/35, version=0.20.43)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/36, version=0.20.43)
Jun  5 15:33:16 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/37, version=0.20.44)
Jun  5 15:33:18 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:33:18 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:33:18 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:33:18 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:33:18 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:33:18 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4774 (crmd) dumped core
Jun  5 15:33:18 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4774, core=1)
Jun  5 15:33:18 vm1 pacemakerd[4574]:   notice: pcmk_process_exit: Respawning failed child process: crmd
Jun  5 15:33:18 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:33:18 vm1 pacemakerd[4574]:     info: start_child: Forked child 4784 for process crmd
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4774-29)
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4774-29) state:2
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:33:18 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fe79ab0
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4774-29-header
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4774-29-header
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4774-29-header
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4774-30)
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4774-30) state:2
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:33:18 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fe7d4d0
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4774-30-header
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4774-30-header
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4774-30-header
Jun  5 15:33:18 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:33:18 vm1 crmd[4784]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:33:18 vm1 crmd[4784]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:33:18 vm1 crmd[4784]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:33:18 vm1 crmd[4784]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:33:18 vm1 crmd[4784]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:33:18 vm1 crmd[4784]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:33:18 vm1 crmd[4784]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:33:18 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11c2d40 for uid=496 gid=492 pid=4784 id=6dab483b-a211-466a-b949-37d7235ccac5
Jun  5 15:33:18 vm1 crmd[4784]:     info: do_cib_control: CIB connection established
Jun  5 15:33:18 vm1 crmd[4784]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4784-29)
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4784]
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:33:18 vm1 corosync[4555]:   [CPG   ] cpg_lib_init_fn lib_init_fn: conn=0x7fab5fe7b730, cpd=0x7fab5fe7b394
Jun  5 15:33:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.44)
Jun  5 15:33:18 vm1 crmd[4784]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:33:18 vm1 crmd[4784]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4784-30)
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4784]
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:33:18 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fd78a30
Jun  5 15:33:18 vm1 crmd[4784]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:33:18 vm1 crmd[4784]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:33:18 vm1 crmd[4784]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:33:18 vm1 crmd[4784]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:33:18 vm1 crmd[4784]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:33:18 vm1 crmd[4784]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4784-30)
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4784-30) state:2
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:33:18 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fd78a30
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4784-30-header
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4784-30-header
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4784-30-header
Jun  5 15:33:18 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procjoin got procjoin message from cluster node -2090489664 (r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ) for pid 4784
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4784-30)
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4784]
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:33:18 vm1 corosync[4555]:   [QUORUM] quorum_lib_init_fn lib_init_fn: conn=0x7fab5fd78a30
Jun  5 15:33:18 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_gettype got quorum_type request on 0x7fab5fd78a30
Jun  5 15:33:18 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_getquorate got quorate request on 0x7fab5fd78a30
Jun  5 15:33:18 vm1 crmd[4784]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:33:18 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart got trackstart request on 0x7fab5fd78a30
Jun  5 15:33:18 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart sending initial status to 0x7fab5fd78a30
Jun  5 15:33:18 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to 0x7fab5fd78a30, length = 52
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4784-31)
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4784]
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:33:18 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78ea0
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4784-31)
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4784-31) state:2
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:33:18 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78ea0
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4784-31-header
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4784-31-header
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4784-31-header
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4784-31)
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4784]
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:33:18 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78ea0
Jun  5 15:33:18 vm1 crmd[4784]:     info: do_ha_control: Connected to the cluster
Jun  5 15:33:18 vm1 crmd[4784]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:33:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.44)
Jun  5 15:33:18 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x2125d50 for uid=496 gid=492 pid=4784 id=57c63005-23af-43d1-90f7-5c53764f3dc0
Jun  5 15:33:18 vm1 crmd[4784]:     info: do_lrm_control: LRM connection established
Jun  5 15:33:18 vm1 crmd[4784]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:33:18 vm1 crmd[4784]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:33:18 vm1 crmd[4784]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:33:18 vm1 crmd[4784]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:33:18 vm1 crmd[4784]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:33:18 vm1 crmd[4784]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:33:18 vm1 crmd[4784]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:33:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.44)
Jun  5 15:33:18 vm1 crmd[4784]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:33:18 vm1 crmd[4784]:   notice: do_started: The local CRM is operational
Jun  5 15:33:18 vm1 crmd[4784]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:33:18 vm1 crmd[4784]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:33:18 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:33:18 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.44)
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4784-31)
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4784-31) state:2
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:33:18 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78ea0
Jun  5 15:33:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4784-31-header
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4784-31-header
Jun  5 15:33:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4784-31-header
Jun  5 15:33:20 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x1867d30 for uid=496 gid=492 pid=4784 id=635e7bff-08d7-4795-b639-48302580b75b
Jun  5 15:33:20 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4784: OK (0)
Jun  5 15:33:20 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4784: OK (0)
Jun  5 15:33:20 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4784: OK (0)
Jun  5 15:33:39 vm1 crmd[4784]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jun  5 15:33:39 vm1 crmd[4784]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun  5 15:33:39 vm1 crmd[4784]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jun  5 15:33:39 vm1 crmd[4784]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:33:39 vm1 crmd[4784]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:33:39 vm1 crmd[4784]:     info: do_te_control: Registering TE UUID: c09561e7-72f3-433c-97d6-0c67d1d6788c
Jun  5 15:33:39 vm1 crmd[4784]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:33:39 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4784 id=91179dfb-79e6-49a1-96cc-c7471b36450d
Jun  5 15:33:39 vm1 crmd[4784]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.44)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.44)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.44)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.44)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.44)
Jun  5 15:33:39 vm1 crmd[4784]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:33:39 vm1 crmd[4784]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:33:39 vm1 crmd[4784]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:33:39 vm1 crmd[4784]:     info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun  5 15:33:39 vm1 crmd[4784]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.44)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.44)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.44)
Jun  5 15:33:39 vm1 crmd[4784]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:33:39 vm1 crmd[4784]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:33:39 vm1 crmd[4784]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:33:39 vm1 crmd[4784]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:33:39 vm1 crmd[4784]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.44)
Jun  5 15:33:39 vm1 crmd[4784]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:33:39 vm1 crmd[4784]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:33:39 vm1 crmd[4784]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.44)
Jun  5 15:33:39 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4784 id=2ac5d35b-8d21-414a-9eda-1201cf1216ec
Jun  5 15:33:39 vm1 crmd[4784]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:33:39 vm1 crmd[4784]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:33:39 vm1 crmd[4784]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.20.45)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/17, version=0.20.46)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.20.47)
Jun  5 15:33:39 vm1 crmd[4784]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:33:39 vm1 crmd[4784]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:33:39 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:33:39 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:33:39 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/38, version=0.20.47)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.20.47)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/20, version=0.20.47)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/21, version=0.20.47)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/22, version=0.20.47)
Jun  5 15:33:39 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:33:39 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:33:39 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:33:39 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:33:39 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:33:39 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:33:39 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:33:39 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:33:39 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:33:39 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Stopped 
Jun  5 15:33:39 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:33:39 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:33:39 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:33:39 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:33:39 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:33:39 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:33:39 vm1 pengine[4580]:  warning: custom_action: Action st1:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:33:39 vm1 pengine[4580]:  warning: custom_action: Action prmPing:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:33:39 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:33:39 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:33:39 vm1 pengine[4580]:     info: native_stop_constraints: st1:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:33:39 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:33:39 vm1 pengine[4580]:   notice: LogActions: Move    st1:0#011(Started vm2 -> vm1)
Jun  5 15:33:39 vm1 pengine[4580]:     info: LogActions: Leave   st1:1#011(Stopped)
Jun  5 15:33:39 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy#011(Stopped)
Jun  5 15:33:39 vm1 pengine[4580]:   notice: LogActions: Move    prmPing:0#011(Started vm2 -> vm1)
Jun  5 15:33:39 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1#011(Stopped)
Jun  5 15:33:39 vm1 crmd[4784]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:33:39 vm1 crmd[4784]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:33:39 vm1 crmd[4784]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:33:39 vm1 crmd[4784]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:33:39 vm1 crmd[4784]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:33:39 vm1 crmd[4784]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:33:39 vm1 crmd[4784]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:33:39 vm1 crmd[4784]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:33:39 vm1 crmd[4784]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:33:39 vm1 crmd[4784]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
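The parser errors above show crmd failing to re-read an XML fragment whose <attributes> element carries an attribute literally named CRM_meta_default_ping_set(1). '(' and ')' are not legal characters in an XML attribute name, so the parser stops at the '(' character, reports that CRM_meta_default_ping_set has no value, and string2xml abandons the whole document. Below is a minimal sketch of the same failure, assuming only the fragment quoted in the error context and the value 100 reported by attrd above; it uses Python's expat-based parser, so the exact wording differs from the libxml2 messages logged by crm_xml_err.

    # Hypothetical reconstruction of the offending fragment; the real graph
    # is much larger.  The parenthesised attribute name is what makes this
    # ill-formed XML.
    import xml.etree.ElementTree as ET

    fragment = '<attributes CRM_meta_default_ping_set(1)="100"/>'
    try:
        ET.fromstring(fragment)
    except ET.ParseError as err:
        # expat reports "not well-formed (invalid token)"; libxml2, which
        # backs crmd's string2xml, logs "Specification mandate value for
        # attribute" as seen in the lines above.
        print("parse failed:", err)
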
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/39, version=0.20.47)
Jun  5 15:33:39 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 8: /var/lib/pacemaker/pengine/pe-warn-5.bz2
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/40, version=0.20.48)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/41, version=0.20.48)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/42, version=0.20.48)
Jun  5 15:33:39 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/43, version=0.20.49)
Jun  5 15:33:40 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:33:40 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:33:40 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:33:40 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:33:40 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:33:40 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4784 (crmd) dumped core
Jun  5 15:33:40 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4784, core=1)
Jun  5 15:33:40 vm1 pacemakerd[4574]:   notice: pcmk_process_exit: Respawning failed child process: crmd
Jun  5 15:33:40 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:33:40 vm1 pacemakerd[4574]:     info: start_child: Forked child 4790 for process crmd
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4784-29)
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4784-29) state:2
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:33:40 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fe7b730
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4784-29-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4784-29-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4784-29-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4784-30)
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4784-30) state:2
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:33:40 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fd78a30
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4784-30-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4784-30-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4784-30-header
Jun  5 15:33:40 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:33:40 vm1 crmd[4790]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:33:40 vm1 crmd[4790]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:33:40 vm1 crmd[4790]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:33:40 vm1 crmd[4790]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:33:40 vm1 crmd[4790]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:33:40 vm1 crmd[4790]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:33:40 vm1 crmd[4790]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:33:40 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11c2d40 for uid=496 gid=492 pid=4790 id=1ad9df4c-3ec0-4339-9f37-8c5e5c830232
Jun  5 15:33:40 vm1 crmd[4790]:     info: do_cib_control: CIB connection established
Jun  5 15:33:40 vm1 crmd[4790]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4790-29)
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4790]
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:33:40 vm1 corosync[4555]:   [CPG   ] cpg_lib_init_fn lib_init_fn: conn=0x7fab5fe7b360, cpd=0x7fab5fe79ec4
Jun  5 15:33:40 vm1 crmd[4790]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:33:40 vm1 crmd[4790]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:33:40 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.49)
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4790-30)
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4790]
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:33:40 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fd78a30
Jun  5 15:33:40 vm1 crmd[4790]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:33:40 vm1 crmd[4790]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:33:40 vm1 crmd[4790]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:33:40 vm1 crmd[4790]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:33:40 vm1 crmd[4790]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:33:40 vm1 crmd[4790]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:33:40 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procjoin got procjoin message from cluster node -2090489664 (r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ) for pid 4790
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4790-30)
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4790-30) state:2
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:33:40 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fd78a30
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4790-30-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4790-30-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4790-30-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4790-30)
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4790]
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:33:40 vm1 corosync[4555]:   [QUORUM] quorum_lib_init_fn lib_init_fn: conn=0x7fab5fd78a30
Jun  5 15:33:40 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_gettype got quorum_type request on 0x7fab5fd78a30
Jun  5 15:33:40 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_getquorate got quorate request on 0x7fab5fd78a30
Jun  5 15:33:40 vm1 crmd[4790]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:33:40 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart got trackstart request on 0x7fab5fd78a30
Jun  5 15:33:40 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart sending initial status to 0x7fab5fd78a30
Jun  5 15:33:40 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to 0x7fab5fd78a30, length = 52
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4790-31)
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4790]
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:33:40 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78e80
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4790-31)
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4790-31) state:2
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:33:40 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78e80
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4790-31-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4790-31-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4790-31-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4790-31)
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4790]
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:33:40 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78e80
Jun  5 15:33:40 vm1 crmd[4790]:     info: do_ha_control: Connected to the cluster
Jun  5 15:33:40 vm1 crmd[4790]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:33:40 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.49)
Jun  5 15:33:40 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x2125d50 for uid=496 gid=492 pid=4790 id=9d929c05-7485-41f2-a718-2a3c1588b05e
Jun  5 15:33:40 vm1 crmd[4790]:     info: do_lrm_control: LRM connection established
Jun  5 15:33:40 vm1 crmd[4790]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:33:40 vm1 crmd[4790]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:33:40 vm1 crmd[4790]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:33:40 vm1 crmd[4790]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:33:40 vm1 crmd[4790]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:33:40 vm1 crmd[4790]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:33:40 vm1 crmd[4790]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:33:40 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.49)
Jun  5 15:33:40 vm1 crmd[4790]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:33:40 vm1 crmd[4790]:   notice: do_started: The local CRM is operational
Jun  5 15:33:40 vm1 crmd[4790]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:33:40 vm1 crmd[4790]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:33:40 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:33:40 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.49)
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4790-31)
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4790-31) state:2
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:33:40 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78e80
Jun  5 15:33:40 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4790-31-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4790-31-header
Jun  5 15:33:40 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4790-31-header
Jun  5 15:33:42 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x1867d30 for uid=496 gid=492 pid=4790 id=2c128be4-14c2-4511-ad71-e40487898de8
Jun  5 15:33:42 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4790: OK (0)
Jun  5 15:33:42 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4790: OK (0)
Jun  5 15:33:42 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4790: OK (0)
Jun  5 15:34:01 vm1 crmd[4790]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jun  5 15:34:01 vm1 crmd[4790]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun  5 15:34:01 vm1 crmd[4790]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jun  5 15:34:01 vm1 crmd[4790]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:34:01 vm1 crmd[4790]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:34:01 vm1 crmd[4790]:     info: do_te_control: Registering TE UUID: 9c2c2a09-d898-4c6a-928a-61590411820b
Jun  5 15:34:01 vm1 crmd[4790]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:34:01 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4790 id=4b631ea3-35aa-468b-8b22-e6534f5e354a
Jun  5 15:34:01 vm1 crmd[4790]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.49)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.49)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.49)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.49)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.49)
Jun  5 15:34:01 vm1 crmd[4790]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:34:01 vm1 crmd[4790]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:34:01 vm1 crmd[4790]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:34:01 vm1 crmd[4790]:     info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun  5 15:34:01 vm1 crmd[4790]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.49)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.49)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.49)
Jun  5 15:34:01 vm1 crmd[4790]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:34:01 vm1 crmd[4790]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:34:01 vm1 crmd[4790]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:34:01 vm1 crmd[4790]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:34:01 vm1 crmd[4790]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.49)
Jun  5 15:34:01 vm1 crmd[4790]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:34:01 vm1 crmd[4790]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:34:01 vm1 crmd[4790]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.49)
Jun  5 15:34:01 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4790 id=3e915a25-3293-443d-a602-68d976da572a
Jun  5 15:34:01 vm1 crmd[4790]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:34:01 vm1 crmd[4790]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:34:01 vm1 crmd[4790]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.20.50)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/17, version=0.20.51)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.20.52)
Jun  5 15:34:01 vm1 crmd[4790]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:34:01 vm1 crmd[4790]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:34:01 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:34:01 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:34:01 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/44, version=0.20.52)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.20.52)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/20, version=0.20.52)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/21, version=0.20.52)
Jun  5 15:34:01 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/22, version=0.20.52)
Jun  5 15:34:01 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:34:01 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:34:01 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:34:01 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:34:01 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:34:01 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:34:01 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:34:01 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:34:01 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:34:01 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Stopped 
Jun  5 15:34:01 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:34:01 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:34:01 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:34:01 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:34:01 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:34:01 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:34:01 vm1 pengine[4580]:  warning: custom_action: Action st1:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:34:01 vm1 pengine[4580]:  warning: custom_action: Action prmPing:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:34:01 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:34:01 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:34:01 vm1 pengine[4580]:     info: native_stop_constraints: st1:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:34:01 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:34:01 vm1 pengine[4580]:   notice: LogActions: Move    st1:0#011(Started vm2 -> vm1)
Jun  5 15:34:01 vm1 pengine[4580]:     info: LogActions: Leave   st1:1#011(Stopped)
Jun  5 15:34:01 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy#011(Stopped)
Jun  5 15:34:01 vm1 pengine[4580]:   notice: LogActions: Move    prmPing:0#011(Started vm2 -> vm1)
Jun  5 15:34:01 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1#011(Stopped)
Jun  5 15:34:02 vm1 crmd[4790]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:34:02 vm1 crmd[4790]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:34:02 vm1 crmd[4790]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:34:02 vm1 crmd[4790]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:34:02 vm1 crmd[4790]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:34:02 vm1 crmd[4790]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:34:02 vm1 crmd[4790]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:34:02 vm1 crmd[4790]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:34:02 vm1 crmd[4790]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:34:02 vm1 crmd[4790]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
Jun  5 15:34:02 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/45, version=0.20.52)
Jun  5 15:34:02 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 9: /var/lib/pacemaker/pengine/pe-warn-6.bz2
Jun  5 15:34:02 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/46, version=0.20.53)
Jun  5 15:34:02 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/47, version=0.20.53)
Jun  5 15:34:02 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/48, version=0.20.53)
Jun  5 15:34:02 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/49, version=0.20.54)
Jun  5 15:34:03 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:03 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:03 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:03 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:03 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:03 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4790 (crmd) dumped core
Jun  5 15:34:03 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4790, core=1)
Jun  5 15:34:03 vm1 pacemakerd[4574]:   notice: pcmk_process_exit: Respawning failed child process: crmd
Jun  5 15:34:03 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:34:03 vm1 pacemakerd[4574]:     info: start_child: Forked child 4800 for process crmd
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4790-29)
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4790-29) state:2
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:03 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fe7b360
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4790-29-header
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4790-29-header
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4790-29-header
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4790-30)
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4790-30) state:2
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:03 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fd78a30
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4790-30-header
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4790-30-header
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4790-30-header
Jun  5 15:34:03 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:34:03 vm1 crmd[4800]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:34:03 vm1 crmd[4800]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:34:03 vm1 crmd[4800]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:34:03 vm1 crmd[4800]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:34:03 vm1 crmd[4800]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:34:03 vm1 crmd[4800]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:34:03 vm1 crmd[4800]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:34:03 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11c2d40 for uid=496 gid=492 pid=4800 id=1845dac5-f27b-49a0-bcfa-1867273ac87e
Jun  5 15:34:03 vm1 crmd[4800]:     info: do_cib_control: CIB connection established
Jun  5 15:34:03 vm1 crmd[4800]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4800-29)
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4800]
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:03 vm1 corosync[4555]:   [CPG   ] cpg_lib_init_fn lib_init_fn: conn=0x7fab5fe7b360, cpd=0x7fab5fe79ec4
Jun  5 15:34:03 vm1 crmd[4800]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:34:03 vm1 crmd[4800]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4800-30)
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4800]
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.54)
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:03 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fd78a30
Jun  5 15:34:03 vm1 crmd[4800]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:34:03 vm1 crmd[4800]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:34:03 vm1 crmd[4800]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:34:03 vm1 crmd[4800]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:34:03 vm1 crmd[4800]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:34:03 vm1 crmd[4800]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4800-30)
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4800-30) state:2
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:03 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fd78a30
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4800-30-header
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4800-30-header
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4800-30-header
Jun  5 15:34:03 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procjoin got procjoin message from cluster node -2090489664 (r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ) for pid 4800
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4800-30)
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4800]
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:03 vm1 corosync[4555]:   [QUORUM] quorum_lib_init_fn lib_init_fn: conn=0x7fab5fd78a30
Jun  5 15:34:03 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_gettype got quorum_type request on 0x7fab5fd78a30
Jun  5 15:34:03 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_getquorate got quorate request on 0x7fab5fd78a30
Jun  5 15:34:03 vm1 crmd[4800]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:34:03 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart got trackstart request on 0x7fab5fd78a30
Jun  5 15:34:03 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart sending initial status to 0x7fab5fd78a30
Jun  5 15:34:03 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to 0x7fab5fd78a30, length = 52
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4800-31)
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4800]
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:03 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7a350
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4800-31)
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4800-31) state:2
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:03 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7a350
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4800-31-header
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4800-31-header
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4800-31-header
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4800-31)
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4800]
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:03 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7aca0
Jun  5 15:34:03 vm1 crmd[4800]:     info: do_ha_control: Connected to the cluster
Jun  5 15:34:03 vm1 crmd[4800]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:34:03 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.54)
Jun  5 15:34:03 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x2125d50 for uid=496 gid=492 pid=4800 id=a10599fc-76b8-4579-a978-b08ab7f902e3
Jun  5 15:34:03 vm1 crmd[4800]:     info: do_lrm_control: LRM connection established
Jun  5 15:34:03 vm1 crmd[4800]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:34:03 vm1 crmd[4800]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:34:03 vm1 crmd[4800]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:34:03 vm1 crmd[4800]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:34:03 vm1 crmd[4800]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:34:03 vm1 crmd[4800]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:34:03 vm1 crmd[4800]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:34:03 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.54)
Jun  5 15:34:03 vm1 crmd[4800]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:34:03 vm1 crmd[4800]:   notice: do_started: The local CRM is operational
Jun  5 15:34:03 vm1 crmd[4800]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:34:03 vm1 crmd[4800]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:34:03 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:34:03 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.54)
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4800-31)
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4800-31) state:2
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:03 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7aca0
Jun  5 15:34:03 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4800-31-header
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4800-31-header
Jun  5 15:34:03 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4800-31-header
Jun  5 15:34:05 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x1867d30 for uid=496 gid=492 pid=4800 id=081a9ca6-5197-4157-8d06-c395f45039b5
Jun  5 15:34:05 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4800: OK (0)
Jun  5 15:34:05 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4800: OK (0)
Jun  5 15:34:05 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4800: OK (0)
Jun  5 15:34:24 vm1 crmd[4800]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jun  5 15:34:24 vm1 crmd[4800]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun  5 15:34:24 vm1 crmd[4800]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jun  5 15:34:24 vm1 crmd[4800]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:34:24 vm1 crmd[4800]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:34:24 vm1 crmd[4800]:     info: do_te_control: Registering TE UUID: 7bbc9b2f-0c54-45b2-85a8-03f5d6b76044
Jun  5 15:34:24 vm1 crmd[4800]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:34:24 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4800 id=e26220d6-30e6-46eb-8c9c-7bebeea52b28
Jun  5 15:34:24 vm1 crmd[4800]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.54)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.54)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.54)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.54)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.54)
Jun  5 15:34:24 vm1 crmd[4800]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:34:24 vm1 crmd[4800]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:34:24 vm1 crmd[4800]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:34:24 vm1 crmd[4800]:     info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun  5 15:34:24 vm1 crmd[4800]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.54)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.54)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.54)
Jun  5 15:34:24 vm1 crmd[4800]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:34:24 vm1 crmd[4800]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:34:24 vm1 crmd[4800]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:34:24 vm1 crmd[4800]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:34:24 vm1 crmd[4800]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.54)
Jun  5 15:34:24 vm1 crmd[4800]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:34:24 vm1 crmd[4800]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:34:24 vm1 crmd[4800]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.54)
Jun  5 15:34:24 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4800 id=b3a2b4ab-5b72-42f2-9f39-f1fe665e8b2d
Jun  5 15:34:24 vm1 crmd[4800]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:34:24 vm1 crmd[4800]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:34:24 vm1 crmd[4800]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.20.55)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/17, version=0.20.56)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.20.57)
Jun  5 15:34:24 vm1 crmd[4800]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:34:24 vm1 crmd[4800]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:34:24 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:34:24 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:34:24 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/50, version=0.20.57)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.20.57)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/20, version=0.20.57)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/21, version=0.20.57)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/22, version=0.20.57)
Jun  5 15:34:24 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:34:24 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:34:24 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:34:24 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:34:24 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:34:24 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:34:24 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:34:24 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:34:24 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:34:24 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Stopped 
Jun  5 15:34:24 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:34:24 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:34:24 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:34:24 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:34:24 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:34:24 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:34:24 vm1 pengine[4580]:  warning: custom_action: Action st1:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:34:24 vm1 pengine[4580]:  warning: custom_action: Action prmPing:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:34:24 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:34:24 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:34:24 vm1 pengine[4580]:     info: native_stop_constraints: st1:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:34:24 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:34:24 vm1 pengine[4580]:   notice: LogActions: Move    st1:0#011(Started vm2 -> vm1)
Jun  5 15:34:24 vm1 pengine[4580]:     info: LogActions: Leave   st1:1#011(Stopped)
Jun  5 15:34:24 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy#011(Stopped)
Jun  5 15:34:24 vm1 pengine[4580]:   notice: LogActions: Move    prmPing:0#011(Started vm2 -> vm1)
Jun  5 15:34:24 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1#011(Stopped)
Jun  5 15:34:24 vm1 crmd[4800]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:34:24 vm1 crmd[4800]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:34:24 vm1 crmd[4800]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:34:24 vm1 crmd[4800]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:34:24 vm1 crmd[4800]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:34:24 vm1 crmd[4800]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:34:24 vm1 crmd[4800]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:34:24 vm1 crmd[4800]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:34:24 vm1 crmd[4800]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:34:24 vm1 crmd[4800]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
Jun  5 15:34:24 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 10: /var/lib/pacemaker/pengine/pe-warn-7.bz2
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/51, version=0.20.57)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/52, version=0.20.58)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/53, version=0.20.58)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/54, version=0.20.58)
Jun  5 15:34:24 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/55, version=0.20.59)
Jun  5 15:34:26 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:26 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:26 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:26 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:26 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:26 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4800 (crmd) dumped core
Jun  5 15:34:26 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4800, core=1)
Jun  5 15:34:26 vm1 pacemakerd[4574]:   notice: pcmk_process_exit: Respawning failed child process: crmd
Jun  5 15:34:26 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:34:26 vm1 pacemakerd[4574]:     info: start_child: Forked child 4804 for process crmd
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4800-29)
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4800-29) state:2
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:26 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fe7b360
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4800-29-header
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4800-29-header
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4800-29-header
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4800-30)
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4800-30) state:2
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:26 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fd78a30
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4800-30-header
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4800-30-header
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4800-30-header
Jun  5 15:34:26 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:34:26 vm1 crmd[4804]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:34:26 vm1 crmd[4804]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:34:26 vm1 crmd[4804]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:34:26 vm1 crmd[4804]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:34:26 vm1 crmd[4804]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:34:26 vm1 crmd[4804]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:34:26 vm1 crmd[4804]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:34:26 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11c2d40 for uid=496 gid=492 pid=4804 id=3bd59b5f-9d7a-4406-a6e5-044c946f72d4
Jun  5 15:34:26 vm1 crmd[4804]:     info: do_cib_control: CIB connection established
Jun  5 15:34:26 vm1 crmd[4804]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4804-29)
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4804]
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:26 vm1 corosync[4555]:   [CPG   ] cpg_lib_init_fn lib_init_fn: conn=0x7fab5fd78a30, cpd=0x7fab5fe79ec4
Jun  5 15:34:26 vm1 crmd[4804]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:34:26 vm1 crmd[4804]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4804-30)
Jun  5 15:34:26 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.59)
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4804]
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:26 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7b360
Jun  5 15:34:26 vm1 crmd[4804]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:34:26 vm1 crmd[4804]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:34:26 vm1 crmd[4804]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:34:26 vm1 crmd[4804]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:34:26 vm1 crmd[4804]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:34:26 vm1 crmd[4804]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4804-30)
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4804-30) state:2
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:26 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7b360
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4804-30-header
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4804-30-header
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4804-30-header
Jun  5 15:34:26 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procjoin got procjoin message from cluster node -2090489664 (r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ) for pid 4804
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4804-30)
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4804]
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:26 vm1 corosync[4555]:   [QUORUM] quorum_lib_init_fn lib_init_fn: conn=0x7fab5fe7b360
Jun  5 15:34:26 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_gettype got quorum_type request on 0x7fab5fe7b360
Jun  5 15:34:26 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_getquorate got quorate request on 0x7fab5fe7b360
Jun  5 15:34:26 vm1 crmd[4804]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:34:26 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart got trackstart request on 0x7fab5fe7b360
Jun  5 15:34:26 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart sending initial status to 0x7fab5fe7b360
Jun  5 15:34:26 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to 0x7fab5fe7b360, length = 52
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4804-31)
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4804]
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:26 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7a370
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4804-31)
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4804-31) state:2
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:26 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7a370
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4804-31-header
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4804-31-header
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4804-31-header
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4804-31)
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4804]
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:26 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7ac60
Jun  5 15:34:26 vm1 crmd[4804]:     info: do_ha_control: Connected to the cluster
Jun  5 15:34:26 vm1 crmd[4804]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:34:26 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.59)
Jun  5 15:34:26 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x2125d50 for uid=496 gid=492 pid=4804 id=2b6bf1d3-b105-4962-ba09-0617a5b69f56
Jun  5 15:34:26 vm1 crmd[4804]:     info: do_lrm_control: LRM connection established
Jun  5 15:34:26 vm1 crmd[4804]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:34:26 vm1 crmd[4804]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:34:26 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.59)
Jun  5 15:34:26 vm1 crmd[4804]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:34:26 vm1 crmd[4804]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:34:26 vm1 crmd[4804]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:34:26 vm1 crmd[4804]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:34:26 vm1 crmd[4804]:   notice: do_started: The local CRM is operational
Jun  5 15:34:26 vm1 crmd[4804]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:34:26 vm1 crmd[4804]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:34:26 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:34:26 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.59)
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4804-31)
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4804-31) state:2
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:26 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7ac60
Jun  5 15:34:26 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4804-31-header
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4804-31-header
Jun  5 15:34:26 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4804-31-header
Jun  5 15:34:27 vm1 crmd[4804]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:34:27 vm1 crmd[4804]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:34:28 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x1867d30 for uid=496 gid=492 pid=4804 id=3f09e89d-eb6a-457e-915d-845ab5a76cc6
Jun  5 15:34:28 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4804: OK (0)
Jun  5 15:34:28 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4804: OK (0)
Jun  5 15:34:28 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4804: OK (0)
Jun  5 15:34:47 vm1 crmd[4804]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jun  5 15:34:47 vm1 crmd[4804]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun  5 15:34:47 vm1 crmd[4804]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jun  5 15:34:47 vm1 crmd[4804]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:34:47 vm1 crmd[4804]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:34:47 vm1 crmd[4804]:     info: do_te_control: Registering TE UUID: a65eae5c-9275-4eb6-b4f8-f5f95078bad4
Jun  5 15:34:47 vm1 crmd[4804]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:34:47 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4804 id=7fb6ee1f-d572-4c73-ac58-e5529ad55841
Jun  5 15:34:47 vm1 crmd[4804]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.59)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.59)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.59)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.59)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.59)
Jun  5 15:34:47 vm1 crmd[4804]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:34:47 vm1 crmd[4804]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:34:47 vm1 crmd[4804]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:34:47 vm1 crmd[4804]:     info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun  5 15:34:47 vm1 crmd[4804]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.59)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.59)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.59)
Jun  5 15:34:47 vm1 crmd[4804]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:34:47 vm1 crmd[4804]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:34:47 vm1 crmd[4804]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:34:47 vm1 crmd[4804]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:34:47 vm1 crmd[4804]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.59)
Jun  5 15:34:47 vm1 crmd[4804]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:34:47 vm1 crmd[4804]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:34:47 vm1 crmd[4804]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.59)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.20.60)
Jun  5 15:34:47 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4804 id=4293376a-5769-4ec9-8b68-3769c5cb3e94
Jun  5 15:34:47 vm1 crmd[4804]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:34:47 vm1 crmd[4804]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:34:47 vm1 crmd[4804]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/17, version=0.20.61)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.20.62)
Jun  5 15:34:47 vm1 crmd[4804]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:34:47 vm1 crmd[4804]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:34:47 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:34:47 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:34:47 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/56, version=0.20.62)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.20.62)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/20, version=0.20.62)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/21, version=0.20.62)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/22, version=0.20.62)
Jun  5 15:34:47 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:34:47 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:34:47 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:34:47 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:34:47 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:34:47 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:34:47 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:34:47 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:34:47 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:34:47 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Stopped 
Jun  5 15:34:47 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:34:47 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:34:47 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:34:47 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:34:47 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:34:47 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:34:47 vm1 pengine[4580]:  warning: custom_action: Action st1:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:34:47 vm1 pengine[4580]:  warning: custom_action: Action prmPing:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:34:47 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:34:47 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:34:47 vm1 pengine[4580]:     info: native_stop_constraints: st1:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:34:47 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:34:47 vm1 pengine[4580]:   notice: LogActions: Move    st1:0#011(Started vm2 -> vm1)
Jun  5 15:34:47 vm1 pengine[4580]:     info: LogActions: Leave   st1:1#011(Stopped)
Jun  5 15:34:47 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy#011(Stopped)
Jun  5 15:34:47 vm1 pengine[4580]:   notice: LogActions: Move    prmPing:0#011(Started vm2 -> vm1)
Jun  5 15:34:47 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1#011(Stopped)
Jun  5 15:34:47 vm1 crmd[4804]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:34:47 vm1 crmd[4804]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:34:47 vm1 crmd[4804]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:34:47 vm1 crmd[4804]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:34:47 vm1 crmd[4804]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:34:47 vm1 crmd[4804]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:34:47 vm1 crmd[4804]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:34:47 vm1 crmd[4804]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:34:47 vm1 crmd[4804]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:34:47 vm1 crmd[4804]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
Jun  5 15:34:47 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 11: /var/lib/pacemaker/pengine/pe-warn-8.bz2
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/57, version=0.20.62)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/58, version=0.20.63)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/59, version=0.20.63)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/60, version=0.20.63)
Jun  5 15:34:47 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/61, version=0.20.64)
Jun  5 15:34:48 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:48 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:48 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:48 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:48 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:34:48 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4804 (crmd) dumped core
Jun  5 15:34:48 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4804, core=1)
Jun  5 15:34:48 vm1 pacemakerd[4574]:   notice: pcmk_process_exit: Respawning failed child process: crmd
Jun  5 15:34:48 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:34:48 vm1 pacemakerd[4574]:     info: start_child: Forked child 4810 for process crmd
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4804-29)
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4804-29) state:2
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:48 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fd78a30
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4804-29-header
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4804-29-header
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4804-29-header
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4804-30)
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4804-30) state:2
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:48 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fe7b360
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4804-30-header
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4804-30-header
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4804-30-header
Jun  5 15:34:48 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:34:48 vm1 crmd[4810]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:34:48 vm1 crmd[4810]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:34:48 vm1 crmd[4810]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:34:48 vm1 crmd[4810]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:34:48 vm1 crmd[4810]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:34:48 vm1 crmd[4810]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:34:48 vm1 crmd[4810]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:34:48 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11c2d40 for uid=496 gid=492 pid=4810 id=b8f52f4a-592e-42a1-adfa-6fcfeae42cb8
Jun  5 15:34:48 vm1 crmd[4810]:     info: do_cib_control: CIB connection established
Jun  5 15:34:48 vm1 crmd[4810]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4810-29)
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4810]
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.64)
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:48 vm1 corosync[4555]:   [CPG   ] cpg_lib_init_fn lib_init_fn: conn=0x7fab5fe7b360, cpd=0x7fab5fe798b4
Jun  5 15:34:48 vm1 crmd[4810]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:34:48 vm1 crmd[4810]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4810-30)
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4810]
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:48 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fd78a30
Jun  5 15:34:48 vm1 crmd[4810]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:34:48 vm1 crmd[4810]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:34:48 vm1 crmd[4810]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:34:48 vm1 crmd[4810]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:34:48 vm1 crmd[4810]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:34:48 vm1 crmd[4810]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4810-30)
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4810-30) state:2
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:48 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fd78a30
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4810-30-header
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4810-30-header
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4810-30-header
Jun  5 15:34:48 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procjoin got procjoin message from cluster node -2090489664 (r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ) for pid 4810
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4810-30)
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4810]
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:48 vm1 corosync[4555]:   [QUORUM] quorum_lib_init_fn lib_init_fn: conn=0x7fab5fd78a30
Jun  5 15:34:48 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_gettype got quorum_type request on 0x7fab5fd78a30
Jun  5 15:34:48 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_getquorate got quorate request on 0x7fab5fd78a30
Jun  5 15:34:48 vm1 crmd[4810]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:34:48 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart got trackstart request on 0x7fab5fd78a30
Jun  5 15:34:48 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart sending initial status to 0x7fab5fd78a30
Jun  5 15:34:48 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to 0x7fab5fd78a30, length = 52
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4810-31)
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4810]
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:48 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7a1f0
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4810-31)
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4810-31) state:2
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:48 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7a1f0
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4810-31-header
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4810-31-header
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4810-31-header
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4810-31)
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4810]
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:34:48 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7a970
Jun  5 15:34:48 vm1 crmd[4810]:     info: do_ha_control: Connected to the cluster
Jun  5 15:34:48 vm1 crmd[4810]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:34:48 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.64)
Jun  5 15:34:48 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x2125d50 for uid=496 gid=492 pid=4810 id=e1e1e759-9787-4904-a6f6-2707fd6ab47f
Jun  5 15:34:48 vm1 crmd[4810]:     info: do_lrm_control: LRM connection established
Jun  5 15:34:48 vm1 crmd[4810]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:34:48 vm1 crmd[4810]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:34:48 vm1 crmd[4810]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:34:48 vm1 crmd[4810]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:34:48 vm1 crmd[4810]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:34:48 vm1 crmd[4810]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:34:48 vm1 crmd[4810]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:34:48 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.64)
Jun  5 15:34:48 vm1 crmd[4810]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:34:48 vm1 crmd[4810]:   notice: do_started: The local CRM is operational
Jun  5 15:34:48 vm1 crmd[4810]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:34:48 vm1 crmd[4810]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:34:48 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:34:48 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.64)
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4810-31)
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4810-31) state:2
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:34:48 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7a970
Jun  5 15:34:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4810-31-header
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4810-31-header
Jun  5 15:34:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4810-31-header
Jun  5 15:34:50 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x1867d30 for uid=496 gid=492 pid=4810 id=f1f468ea-2e70-424e-a568-21a9fa23eff8
Jun  5 15:34:50 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4810: OK (0)
Jun  5 15:34:50 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4810: OK (0)
Jun  5 15:34:50 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4810: OK (0)
Jun  5 15:35:03 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:09 vm1 crmd[4810]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jun  5 15:35:09 vm1 crmd[4810]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun  5 15:35:09 vm1 crmd[4810]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jun  5 15:35:09 vm1 crmd[4810]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:35:09 vm1 crmd[4810]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:35:09 vm1 crmd[4810]:     info: do_te_control: Registering TE UUID: 8454eb06-27e5-4fcd-9477-c75b49241769
Jun  5 15:35:09 vm1 crmd[4810]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:35:09 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4810 id=47c20df2-1a86-48ed-972a-d22eb1b2f78f
Jun  5 15:35:09 vm1 crmd[4810]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.64)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.64)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.64)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.64)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.64)
Jun  5 15:35:09 vm1 crmd[4810]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:35:09 vm1 crmd[4810]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:35:09 vm1 crmd[4810]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:35:09 vm1 crmd[4810]:     info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun  5 15:35:09 vm1 crmd[4810]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.64)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.64)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.64)
Jun  5 15:35:09 vm1 crmd[4810]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:35:09 vm1 crmd[4810]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:35:09 vm1 crmd[4810]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:35:09 vm1 crmd[4810]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:35:09 vm1 crmd[4810]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.64)
Jun  5 15:35:09 vm1 crmd[4810]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:35:09 vm1 crmd[4810]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:35:09 vm1 crmd[4810]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:35:09 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4810 id=d1a984f6-3cb9-4fbb-9d5b-5aedb46ba68a
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.64)
Jun  5 15:35:09 vm1 crmd[4810]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:35:09 vm1 crmd[4810]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:35:09 vm1 crmd[4810]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.20.65)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/17, version=0.20.66)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.20.67)
Jun  5 15:35:09 vm1 crmd[4810]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:35:09 vm1 crmd[4810]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:35:09 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:35:09 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:35:09 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/62, version=0.20.67)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.20.67)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/20, version=0.20.67)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/21, version=0.20.67)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/22, version=0.20.67)
Jun  5 15:35:09 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:35:09 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:35:09 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:35:09 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:35:09 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:35:09 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:35:09 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:35:09 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:35:09 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:35:09 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Stopped 
Jun  5 15:35:09 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:35:09 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:35:09 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:35:09 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:35:09 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:35:09 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:35:09 vm1 pengine[4580]:  warning: custom_action: Action st1:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:35:09 vm1 pengine[4580]:  warning: custom_action: Action prmPing:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:35:09 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:35:09 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:35:09 vm1 pengine[4580]:     info: native_stop_constraints: st1:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:35:09 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:35:09 vm1 pengine[4580]:   notice: LogActions: Move    st1:0#011(Started vm2 -> vm1)
Jun  5 15:35:09 vm1 pengine[4580]:     info: LogActions: Leave   st1:1#011(Stopped)
Jun  5 15:35:09 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy#011(Stopped)
Jun  5 15:35:09 vm1 pengine[4580]:   notice: LogActions: Move    prmPing:0#011(Started vm2 -> vm1)
Jun  5 15:35:09 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1#011(Stopped)
Jun  5 15:35:09 vm1 crmd[4810]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:35:09 vm1 crmd[4810]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:35:09 vm1 crmd[4810]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:35:09 vm1 crmd[4810]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:35:09 vm1 crmd[4810]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:35:09 vm1 crmd[4810]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:35:09 vm1 crmd[4810]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:35:09 vm1 crmd[4810]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:35:09 vm1 crmd[4810]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:35:09 vm1 crmd[4810]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/63, version=0.20.67)
Jun  5 15:35:09 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 12: /var/lib/pacemaker/pengine/pe-warn-9.bz2
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/64, version=0.20.68)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/65, version=0.20.68)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/66, version=0.20.68)
Jun  5 15:35:09 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/67, version=0.20.69)
Jun  5 15:35:11 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:11 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:11 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:11 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:11 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:11 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4810 (crmd) dumped core
Jun  5 15:35:11 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4810, core=1)
Jun  5 15:35:11 vm1 pacemakerd[4574]:   notice: pcmk_process_exit: Respawning failed child process: crmd
Jun  5 15:35:11 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:35:11 vm1 pacemakerd[4574]:     info: start_child: Forked child 4822 for process crmd
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4810-29)
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4810-29) state:2
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:11 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fe7b360
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4810-29-header
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4810-29-header
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4810-29-header
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4810-30)
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4810-30) state:2
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:11 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fd78a30
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4810-30-header
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4810-30-header
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4810-30-header
Jun  5 15:35:11 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:35:11 vm1 crmd[4822]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:35:11 vm1 crmd[4822]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:35:11 vm1 crmd[4822]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:35:11 vm1 crmd[4822]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:35:11 vm1 crmd[4822]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:35:11 vm1 crmd[4822]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:35:11 vm1 crmd[4822]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:35:11 vm1 cib[4576]:     info: crm_client_new: Connecting 0x1356bb0 for uid=496 gid=492 pid=4822 id=c30b114e-653d-4728-b7b2-3edcd14155c1
Jun  5 15:35:11 vm1 crmd[4822]:     info: do_cib_control: CIB connection established
Jun  5 15:35:11 vm1 crmd[4822]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4822-29)
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4822]
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.69)
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:11 vm1 corosync[4555]:   [CPG   ] cpg_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10, cpd=0x7fab5fe7bbe4
Jun  5 15:35:11 vm1 crmd[4822]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:35:11 vm1 crmd[4822]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4822-30)
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4822]
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:11 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fd78a30
Jun  5 15:35:11 vm1 crmd[4822]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:35:11 vm1 crmd[4822]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:35:11 vm1 crmd[4822]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:35:11 vm1 crmd[4822]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:35:11 vm1 crmd[4822]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:35:11 vm1 crmd[4822]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4822-30)
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4822-30) state:2
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:11 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fd78a30
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4822-30-header
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4822-30-header
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4822-30-header
Jun  5 15:35:11 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procjoin got procjoin message from cluster node -2090489664 (r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ) for pid 4822
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4822-30)
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4822]
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:11 vm1 corosync[4555]:   [QUORUM] quorum_lib_init_fn lib_init_fn: conn=0x7fab5fe7b360
Jun  5 15:35:11 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_gettype got quorum_type request on 0x7fab5fe7b360
Jun  5 15:35:11 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_getquorate got quorate request on 0x7fab5fe7b360
Jun  5 15:35:11 vm1 crmd[4822]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:35:11 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart got trackstart request on 0x7fab5fe7b360
Jun  5 15:35:11 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart sending initial status to 0x7fab5fe7b360
Jun  5 15:35:11 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to 0x7fab5fe7b360, length = 52
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4822-31)
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4822]
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:11 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7a670
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4822-31)
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4822-31) state:2
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:11 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7a670
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4822-31-header
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4822-31-header
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4822-31-header
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4822-31)
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4822]
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:11 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7cf90
Jun  5 15:35:11 vm1 crmd[4822]:     info: do_ha_control: Connected to the cluster
Jun  5 15:35:11 vm1 crmd[4822]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:35:11 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.69)
Jun  5 15:35:11 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x2125d50 for uid=496 gid=492 pid=4822 id=56217c98-ea37-4e6a-8ad5-78eb9781f61c
Jun  5 15:35:11 vm1 crmd[4822]:     info: do_lrm_control: LRM connection established
Jun  5 15:35:11 vm1 crmd[4822]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:35:11 vm1 crmd[4822]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:35:11 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.69)
Jun  5 15:35:11 vm1 crmd[4822]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:35:11 vm1 crmd[4822]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:35:11 vm1 crmd[4822]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:35:11 vm1 crmd[4822]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:35:11 vm1 crmd[4822]:   notice: do_started: The local CRM is operational
Jun  5 15:35:11 vm1 crmd[4822]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:35:11 vm1 crmd[4822]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:35:11 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:35:11 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.69)
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4822-31)
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4822-31) state:2
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:11 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7cf90
Jun  5 15:35:11 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4822-31-header
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4822-31-header
Jun  5 15:35:11 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4822-31-header
Jun  5 15:35:12 vm1 crmd[4822]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:35:12 vm1 crmd[4822]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:35:13 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x1867d30 for uid=496 gid=492 pid=4822 id=5fe28e02-3f8c-4350-9af7-8e37205c3c8d
Jun  5 15:35:13 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4822: OK (0)
Jun  5 15:35:13 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4822: OK (0)
Jun  5 15:35:13 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4822: OK (0)
Jun  5 15:35:32 vm1 crmd[4822]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jun  5 15:35:32 vm1 crmd[4822]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun  5 15:35:32 vm1 crmd[4822]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jun  5 15:35:32 vm1 crmd[4822]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:35:32 vm1 crmd[4822]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:35:32 vm1 crmd[4822]:     info: do_te_control: Registering TE UUID: 98d251be-2bbb-4075-b109-ecbd714b74da
Jun  5 15:35:32 vm1 crmd[4822]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:35:32 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4822 id=544c2eac-69cf-4e86-ae57-63c959013e1f
Jun  5 15:35:32 vm1 crmd[4822]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.69)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.69)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.69)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.69)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.69)
Jun  5 15:35:32 vm1 crmd[4822]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:35:32 vm1 crmd[4822]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:35:32 vm1 crmd[4822]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:35:32 vm1 crmd[4822]:     info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun  5 15:35:32 vm1 crmd[4822]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.69)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.69)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.69)
Jun  5 15:35:32 vm1 crmd[4822]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:35:32 vm1 crmd[4822]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:35:32 vm1 crmd[4822]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:35:32 vm1 crmd[4822]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:35:32 vm1 crmd[4822]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.69)
Jun  5 15:35:32 vm1 crmd[4822]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:35:32 vm1 crmd[4822]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:35:32 vm1 crmd[4822]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:35:32 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4822 id=bc6ab2e4-458c-492a-ab4a-e988ae93179b
Jun  5 15:35:32 vm1 crmd[4822]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:35:32 vm1 crmd[4822]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:35:32 vm1 crmd[4822]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.69)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.20.70)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/17, version=0.20.71)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.20.72)
Jun  5 15:35:32 vm1 crmd[4822]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:35:32 vm1 crmd[4822]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:35:32 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:35:32 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:35:32 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/68, version=0.20.72)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.20.72)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/20, version=0.20.72)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/21, version=0.20.72)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/22, version=0.20.72)
Jun  5 15:35:32 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:35:32 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:35:32 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:35:32 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:35:32 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:35:32 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:35:32 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:35:32 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:35:32 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:35:32 vm1 pengine[4580]:     info: native_print: prmDummy#011(ocf::pacemaker:Dummy):#011Stopped 
Jun  5 15:35:32 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:35:32 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:35:32 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:35:32 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:35:32 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:35:32 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:35:32 vm1 pengine[4580]:  warning: custom_action: Action st1:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:35:32 vm1 pengine[4580]:  warning: custom_action: Action prmPing:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:35:32 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:35:32 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:35:32 vm1 pengine[4580]:     info: native_stop_constraints: st1:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:35:32 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:35:32 vm1 pengine[4580]:   notice: LogActions: Move    st1:0#011(Started vm2 -> vm1)
Jun  5 15:35:32 vm1 pengine[4580]:     info: LogActions: Leave   st1:1#011(Stopped)
Jun  5 15:35:32 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy#011(Stopped)
Jun  5 15:35:32 vm1 pengine[4580]:   notice: LogActions: Move    prmPing:0#011(Started vm2 -> vm1)
Jun  5 15:35:32 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1#011(Stopped)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/69, version=0.20.72)
Jun  5 15:35:32 vm1 crmd[4822]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:35:32 vm1 crmd[4822]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:35:32 vm1 crmd[4822]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:35:32 vm1 crmd[4822]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:35:32 vm1 crmd[4822]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:35:32 vm1 crmd[4822]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:35:32 vm1 crmd[4822]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:35:32 vm1 crmd[4822]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:35:32 vm1 crmd[4822]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:35:32 vm1 crmd[4822]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
Jun  5 15:35:32 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 13: /var/lib/pacemaker/pengine/pe-warn-10.bz2
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/70, version=0.20.73)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/71, version=0.20.73)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/72, version=0.20.73)
Jun  5 15:35:32 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/73, version=0.20.74)
Jun  5 15:35:33 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:33 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:33 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:33 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:33 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:33 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4822 (crmd) dumped core
Jun  5 15:35:33 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4822, core=1)
Jun  5 15:35:33 vm1 pacemakerd[4574]:   notice: pcmk_process_exit: Respawning failed child process: crmd
Jun  5 15:35:33 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:35:33 vm1 pacemakerd[4574]:     info: start_child: Forked child 4829 for process crmd
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4822-29)
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4822-29) state:2
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:33 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fe78d10
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4822-29-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4822-29-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4822-29-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4822-30)
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4822-30) state:2
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:33 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fe7b360
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4822-30-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4822-30-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4822-30-header
Jun  5 15:35:33 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:35:33 vm1 crmd[4829]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:35:33 vm1 crmd[4829]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:35:33 vm1 crmd[4829]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:35:33 vm1 crmd[4829]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:35:33 vm1 crmd[4829]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:35:33 vm1 crmd[4829]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:35:33 vm1 crmd[4829]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:35:33 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11c7700 for uid=496 gid=492 pid=4829 id=e566b122-786c-4659-a2f9-5a05db0cbe4a
Jun  5 15:35:33 vm1 crmd[4829]:     info: do_cib_control: CIB connection established
Jun  5 15:35:33 vm1 crmd[4829]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4829-29)
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4829]
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:33 vm1 corosync[4555]:   [CPG   ] cpg_lib_init_fn lib_init_fn: conn=0x7fab5fe7b360, cpd=0x7fab5fe7a224
Jun  5 15:35:33 vm1 crmd[4829]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:35:33 vm1 crmd[4829]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4829-30)
Jun  5 15:35:33 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.74)
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4829]
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:33 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:35:33 vm1 crmd[4829]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:35:33 vm1 crmd[4829]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:35:33 vm1 crmd[4829]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:35:33 vm1 crmd[4829]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:35:33 vm1 crmd[4829]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:35:33 vm1 crmd[4829]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:35:33 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procjoin got procjoin message from cluster node -2090489664 (r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ) for pid 4829
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4829-30)
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4829-30) state:2
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:33 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78d10
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4829-30-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4829-30-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4829-30-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4829-30)
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4829]
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:33 vm1 corosync[4555]:   [QUORUM] quorum_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:35:33 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_gettype got quorum_type request on 0x7fab5fe78d10
Jun  5 15:35:33 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_getquorate got quorate request on 0x7fab5fe78d10
Jun  5 15:35:33 vm1 crmd[4829]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:35:33 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart got trackstart request on 0x7fab5fe78d10
Jun  5 15:35:33 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart sending initial status to 0x7fab5fe78d10
Jun  5 15:35:33 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to 0x7fab5fe78d10, length = 52
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4829-31)
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4829]
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:33 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7a640
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4829-31)
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4829-31) state:2
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:33 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7a640
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4829-31-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4829-31-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4829-31-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4829-31)
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4829]
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:33 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7c220
Jun  5 15:35:33 vm1 crmd[4829]:     info: do_ha_control: Connected to the cluster
Jun  5 15:35:33 vm1 crmd[4829]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:35:33 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.74)
Jun  5 15:35:33 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x2125d50 for uid=496 gid=492 pid=4829 id=b6eaa0ac-df88-4285-a581-b2a1207cd8dc
Jun  5 15:35:33 vm1 crmd[4829]:     info: do_lrm_control: LRM connection established
Jun  5 15:35:33 vm1 crmd[4829]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:35:33 vm1 crmd[4829]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:35:33 vm1 crmd[4829]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:35:33 vm1 crmd[4829]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:35:33 vm1 crmd[4829]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:35:33 vm1 crmd[4829]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:35:33 vm1 crmd[4829]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:35:33 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.74)
Jun  5 15:35:33 vm1 crmd[4829]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:35:33 vm1 crmd[4829]:   notice: do_started: The local CRM is operational
Jun  5 15:35:33 vm1 crmd[4829]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:35:33 vm1 crmd[4829]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:35:33 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:35:33 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.74)
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4829-31)
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4829-31) state:2
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:33 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7c220
Jun  5 15:35:33 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4829-31-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4829-31-header
Jun  5 15:35:33 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4829-31-header
Jun  5 15:35:35 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x1867d30 for uid=496 gid=492 pid=4829 id=64e0f22b-0890-4841-961e-e43a2a260386
Jun  5 15:35:35 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4829: OK (0)
Jun  5 15:35:35 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4829: OK (0)
Jun  5 15:35:35 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4829: OK (0)
Jun  5 15:35:54 vm1 crmd[4829]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jun  5 15:35:54 vm1 crmd[4829]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun  5 15:35:54 vm1 crmd[4829]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jun  5 15:35:54 vm1 crmd[4829]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:35:54 vm1 crmd[4829]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:35:54 vm1 crmd[4829]:     info: do_te_control: Registering TE UUID: cf790079-44a3-4a03-aff3-1c990cc03b70
Jun  5 15:35:54 vm1 crmd[4829]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:35:54 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4829 id=eddc3165-38c6-473c-821a-9d341e834883
Jun  5 15:35:54 vm1 crmd[4829]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.74)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.74)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.74)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.74)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.74)
Jun  5 15:35:54 vm1 crmd[4829]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:35:54 vm1 crmd[4829]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:35:54 vm1 crmd[4829]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:35:54 vm1 crmd[4829]:     info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun  5 15:35:54 vm1 crmd[4829]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.74)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.74)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.74)
Jun  5 15:35:54 vm1 crmd[4829]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:35:54 vm1 crmd[4829]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:35:54 vm1 crmd[4829]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:35:54 vm1 crmd[4829]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:35:54 vm1 crmd[4829]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.74)
Jun  5 15:35:54 vm1 crmd[4829]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:35:54 vm1 crmd[4829]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:35:54 vm1 crmd[4829]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:35:54 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4829 id=148e2ab5-8ea7-4cc5-a5a4-d2b0b73acb58
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.74)
Jun  5 15:35:54 vm1 crmd[4829]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:35:54 vm1 crmd[4829]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:35:54 vm1 crmd[4829]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.20.75)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/17, version=0.20.76)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.20.77)
Jun  5 15:35:54 vm1 crmd[4829]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:35:54 vm1 crmd[4829]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:35:54 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:35:54 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:35:54 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/74, version=0.20.77)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.20.77)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/20, version=0.20.77)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/21, version=0.20.77)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/22, version=0.20.77)
Jun  5 15:35:54 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:35:54 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:35:54 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:35:54 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:35:54 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:35:54 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:35:54 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:35:54 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:35:54 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:35:54 vm1 pengine[4580]:     info: native_print: prmDummy (ocf::pacemaker:Dummy): Stopped
Jun  5 15:35:54 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:35:54 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:35:54 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:35:54 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:35:54 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:35:54 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:35:54 vm1 pengine[4580]:  warning: custom_action: Action st1:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:35:54 vm1 pengine[4580]:  warning: custom_action: Action prmPing:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:35:54 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:35:54 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:35:54 vm1 pengine[4580]:     info: native_stop_constraints: st1:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:35:54 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:35:54 vm1 pengine[4580]:   notice: LogActions: Move    st1:0 (Started vm2 -> vm1)
Jun  5 15:35:54 vm1 pengine[4580]:     info: LogActions: Leave   st1:1 (Stopped)
Jun  5 15:35:54 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy (Stopped)
Jun  5 15:35:54 vm1 pengine[4580]:   notice: LogActions: Move    prmPing:0 (Started vm2 -> vm1)
Jun  5 15:35:54 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1 (Stopped)
Jun  5 15:35:54 vm1 crmd[4829]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:35:54 vm1 crmd[4829]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:35:54 vm1 crmd[4829]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:35:54 vm1 crmd[4829]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:35:54 vm1 crmd[4829]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:35:54 vm1 crmd[4829]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:35:54 vm1 crmd[4829]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:35:54 vm1 crmd[4829]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:35:54 vm1 crmd[4829]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:35:54 vm1 crmd[4829]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
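
The nine crm_xml_err lines above are crmd failing to parse what appears to be the transition graph produced by the policy engine: the transient node attribute default_ping_set(1) (its literal name, parentheses included, is visible in the attrd flush line at 15:35:54) is embedded in the graph as the XML attribute name CRM_meta_default_ping_set(1), and '(' is not a legal character in an XML attribute name, so the parser stops exactly where the error marker points. A minimal sketch of the failure, assuming only that Python's standard xml.etree parser behaves like any conforming XML parser (an illustrative reproduction, not Pacemaker code):

    import xml.etree.ElementTree as ET

    # A fragment with ordinary attribute names parses fine.
    ET.fromstring('<rsc_op on_node="vm2" on_node_uuid="2221254848"/>')

    # An attribute name containing parentheses, as in the log above, is rejected.
    bad = '<rsc_op on_node="vm2"><attributes CRM_meta_default_ping_set(1)="100"/></rsc_op>'
    try:
        ET.fromstring(bad)
    except ET.ParseError as err:
        print("parse error:", err)   # e.g. "not well-formed (invalid token)"

The crmd core dump reported by pacemakerd at 15:35:56 below appears to follow this failed parse, and the respawned crmd (pid 4839) hits the same attribute and the same error again at 15:36:17.
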
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/75, version=0.20.77)
Jun  5 15:35:54 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 14: /var/lib/pacemaker/pengine/pe-warn-11.bz2
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/76, version=0.20.78)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/77, version=0.20.78)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/78, version=0.20.78)
Jun  5 15:35:54 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/79, version=0.20.79)
Jun  5 15:35:56 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:56 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:56 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:56 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:56 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4829 (crmd) dumped core
Jun  5 15:35:56 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4829, core=1)
Jun  5 15:35:56 vm1 pacemakerd[4574]:   notice: pcmk_process_exit: Respawning failed child process: crmd
Jun  5 15:35:56 vm1 pacemakerd[4574]:     info: start_child: Using uid=496 and group=492 for process crmd
Jun  5 15:35:56 vm1 pacemakerd[4574]:     info: start_child: Forked child 4839 for process crmd
Jun  5 15:35:56 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4829-29)
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4829-29) state:2
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:56 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fe7b360
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4829-29-header
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4829-29-header
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4829-29-header
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4829-30)
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4829-30) state:2
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:56 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fe78d10
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4829-30-header
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4829-30-header
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4829-30-header
Jun  5 15:35:56 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:35:56 vm1 crmd[4839]:   notice: crm_add_logfile: Additional logging available in /var/log/pacemaker.log
Jun  5 15:35:56 vm1 crmd[4839]:    debug: crm_update_callsites: Enabling callsites based on priority=7, files=(null), functions=(null), formats=(null), tags=(null)
Jun  5 15:35:56 vm1 crmd[4839]:     info: crm_log_init: Changed active directory to /var/lib/heartbeat/cores/hacluster
Jun  5 15:35:56 vm1 crmd[4839]:   notice: main: CRM Git Version: 7209c02
Jun  5 15:35:56 vm1 crmd[4839]:     info: do_log: FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Jun  5 15:35:56 vm1 crmd[4839]:     info: get_cluster_type: Verifying cluster type: 'corosync'
Jun  5 15:35:56 vm1 crmd[4839]:     info: get_cluster_type: Assuming an active 'corosync' cluster
Jun  5 15:35:56 vm1 cib[4576]:     info: crm_client_new: Connecting 0x11cfef0 for uid=496 gid=492 pid=4839 id=53437221-f6fa-4f39-a712-8e3d0df28aa8
Jun  5 15:35:56 vm1 crmd[4839]:     info: do_cib_control: CIB connection established
Jun  5 15:35:56 vm1 crmd[4839]:   notice: crm_cluster_connect: Connecting to cluster infrastructure: corosync
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4839-29)
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4839]
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:56 vm1 corosync[4555]:   [CPG   ] cpg_lib_init_fn lib_init_fn: conn=0x7fab5fe7b360, cpd=0x7fab5fe7a224
Jun  5 15:35:56 vm1 crmd[4839]:     info: crm_get_peer: Node <null> now has id: 2204477632
Jun  5 15:35:56 vm1 crmd[4839]:     info: crm_update_peer_proc: init_cpg_connection: Node (null)[2204477632] - corosync-cpg is now online
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4839-30)
Jun  5 15:35:56 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.20.79)
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4839]
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:56 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:35:56 vm1 crmd[4839]:   notice: corosync_node_name: Unable to get node name for nodeid 2204477632
Jun  5 15:35:56 vm1 crmd[4839]:   notice: get_local_node_name: Defaulting to uname -n for the local corosync node name
Jun  5 15:35:56 vm1 crmd[4839]:     info: init_cs_connection_once: Connection to 'corosync': established
Jun  5 15:35:56 vm1 crmd[4839]:     info: crm_get_peer: Node 2204477632 is now known as vm1
Jun  5 15:35:56 vm1 crmd[4839]:     info: peer_update_callback: vm1 is now (null)
Jun  5 15:35:56 vm1 crmd[4839]:     info: crm_get_peer: Node 2204477632 has uuid 2204477632
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4839-30)
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4839-30) state:2
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:56 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78d10
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4839-30-header
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4839-30-header
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4839-30-header
Jun  5 15:35:56 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procjoin got procjoin message from cluster node -2090489664 (r(0) ip(192.168.101.131) r(1) ip(192.168.102.131) ) for pid 4839
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4839-30)
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4839]
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:56 vm1 corosync[4555]:   [QUORUM] quorum_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:35:56 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_gettype got quorum_type request on 0x7fab5fe78d10
Jun  5 15:35:56 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_getquorate got quorate request on 0x7fab5fe78d10
Jun  5 15:35:56 vm1 crmd[4839]:   notice: init_quorum_connection: Quorum acquired
Jun  5 15:35:56 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart got trackstart request on 0x7fab5fe78d10
Jun  5 15:35:56 vm1 corosync[4555]:   [QUORUM] message_handler_req_lib_quorum_trackstart sending initial status to 0x7fab5fe78d10
Jun  5 15:35:56 vm1 corosync[4555]:   [QUORUM] send_library_notification sending quorum notification to 0x7fab5fe78d10, length = 52
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4839-31)
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4839]
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:56 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7a580
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4839-31)
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4839-31) state:2
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:56 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7a580
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4839-31-header
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4839-31-header
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4839-31-header
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-4839-31)
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [4839]
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:35:56 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe7c220
Jun  5 15:35:56 vm1 crmd[4839]:     info: do_ha_control: Connected to the cluster
Jun  5 15:35:56 vm1 crmd[4839]:     info: lrmd_ipc_connect: Connecting to lrmd
Jun  5 15:35:56 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.20.79)
Jun  5 15:35:56 vm1 lrmd[4578]:     info: crm_client_new: Connecting 0x2125d50 for uid=496 gid=492 pid=4839 id=91ae1c9a-6025-4354-afb7-ea9f39ecb7d1
Jun  5 15:35:56 vm1 crmd[4839]:     info: do_lrm_control: LRM connection established
Jun  5 15:35:56 vm1 crmd[4839]:     info: do_started: Delaying start, no membership data (0000000000100000)
Jun  5 15:35:56 vm1 crmd[4839]:     info: pcmk_quorum_notification: Membership 388: quorum retained (1)
Jun  5 15:35:56 vm1 crmd[4839]:   notice: crm_update_peer_state: pcmk_quorum_notification: Node vm1[2204477632] - state is now member (was (null))
Jun  5 15:35:56 vm1 crmd[4839]:     info: peer_update_callback: vm1 is now member (was (null))
Jun  5 15:35:56 vm1 crmd[4839]:     info: do_started: Delaying start, Config not read (0000000000000040)
Jun  5 15:35:56 vm1 crmd[4839]:     info: pcmk_cpg_membership: Joined[0.0] crmd.2204477632 
Jun  5 15:35:56 vm1 crmd[4839]:     info: pcmk_cpg_membership: Member[0.0] crmd.2204477632 
Jun  5 15:35:56 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.20.79)
Jun  5 15:35:56 vm1 crmd[4839]:     info: qb_ipcs_us_publish: server name: crmd
Jun  5 15:35:56 vm1 crmd[4839]:   notice: do_started: The local CRM is operational
Jun  5 15:35:56 vm1 crmd[4839]:     info: do_log: FSA: Input I_PENDING from do_started() received in state S_STARTING
Jun  5 15:35:56 vm1 crmd[4839]:     info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Jun  5 15:35:56 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/O mode
Jun  5 15:35:56 vm1 cib[4576]:     info: cib_process_request: Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.20.79)
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4839-31)
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4839-31) state:2
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:35:56 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe7c220
Jun  5 15:35:56 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-4839-31-header
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-4839-31-header
Jun  5 15:35:56 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-4839-31-header
Jun  5 15:35:58 vm1 stonith-ng[4577]:     info: crm_client_new: Connecting 0x1867d30 for uid=496 gid=492 pid=4839 id=9e19acba-b9a8-43d7-9fb6-19f11d6b3d19
Jun  5 15:35:58 vm1 stonith-ng[4577]:     info: stonith_command: Processed register from crmd.4839: OK (0)
Jun  5 15:35:58 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4839: OK (0)
Jun  5 15:35:58 vm1 stonith-ng[4577]:     info: stonith_command: Processed st_notify from crmd.4839: OK (0)
Jun  5 15:36:00 vm1 pacemakerd[4574]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:36:00 vm1 pacemakerd[4574]:   notice: pcmk_shutdown_worker: Shutting down Pacemaker
Jun  5 15:36:00 vm1 pacemakerd[4574]:   notice: stop_child: Stopping crmd: Sent -15 to process 4839
Jun  5 15:36:00 vm1 crmd[4839]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:36:00 vm1 crmd[4839]:   notice: crm_shutdown: Requesting shutdown, upper limit is 1200000ms
Jun  5 15:36:00 vm1 crmd[4839]:     info: do_shutdown_req: Sending shutdown request to <null>
Jun  5 15:36:17 vm1 crmd[4839]:     info: crm_timer_popped: Election Trigger (I_DC_TIMEOUT) just popped (20000ms)
Jun  5 15:36:17 vm1 crmd[4839]:  warning: do_log: FSA: Input I_DC_TIMEOUT from crm_timer_popped() received in state S_PENDING
Jun  5 15:36:17 vm1 crmd[4839]:     info: do_state_transition: State transition S_PENDING -> S_ELECTION [ input=I_DC_TIMEOUT cause=C_TIMER_POPPED origin=crm_timer_popped ]
Jun  5 15:36:17 vm1 crmd[4839]:     info: do_log: FSA: Input I_ELECTION_DC from do_election_check() received in state S_ELECTION
Jun  5 15:36:17 vm1 crmd[4839]:   notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Jun  5 15:36:17 vm1 crmd[4839]:     info: do_te_control: Registering TE UUID: 13671847-55bc-4fa1-bb62-8dcc8504bcdd
Jun  5 15:36:17 vm1 crmd[4839]:     info: set_graph_functions: Setting custom graph functions
Jun  5 15:36:17 vm1 pengine[4580]:     info: crm_client_new: Connecting 0x2290f90 for uid=496 gid=492 pid=4839 id=a1c1b02b-dc37-447e-b8fc-4ef9e35d9a64
Jun  5 15:36:17 vm1 crmd[4839]:     info: do_dc_takeover: Taking over DC status for this partition
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_readwrite: We are now in R/W mode
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.20.79)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.20.79)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: OK (rc=0, origin=local/crmd/8, version=0.20.79)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.20.79)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: OK (rc=0, origin=local/crmd/10, version=0.20.79)
Jun  5 15:36:17 vm1 crmd[4839]:     info: join_make_offer: Making join offers based on membership 388
Jun  5 15:36:17 vm1 crmd[4839]:     info: join_make_offer: join-1: Sending offer to vm1
Jun  5 15:36:17 vm1 crmd[4839]:     info: crm_update_peer_join: join_make_offer: Node vm1[2204477632] - join-1 phase 0 -> 1
Jun  5 15:36:17 vm1 crmd[4839]:     info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Jun  5 15:36:17 vm1 crmd[4839]:     info: update_dc: Set DC to vm1 (3.0.7)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.20.79)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.20.79)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/13, version=0.20.79)
Jun  5 15:36:17 vm1 crmd[4839]:     info: crm_update_peer_join: do_dc_join_filter_offer: Node vm1[2204477632] - join-1 phase 1 -> 2
Jun  5 15:36:17 vm1 crmd[4839]:     info: crm_update_peer_expected: do_dc_join_filter_offer: Node vm1[2204477632] - expected state is now member
Jun  5 15:36:17 vm1 crmd[4839]:     info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:36:17 vm1 crmd[4839]:     info: crmd_join_phase_log: join-1: vm1=integrated
Jun  5 15:36:17 vm1 crmd[4839]:     info: do_dc_join_finalize: join-1: Syncing our CIB to the rest of the cluster
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/14, version=0.20.79)
Jun  5 15:36:17 vm1 crmd[4839]:     info: crm_update_peer_join: finalize_join_for: Node vm1[2204477632] - join-1 phase 2 -> 3
Jun  5 15:36:17 vm1 crmd[4839]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Jun  5 15:36:17 vm1 crmd[4839]:     info: crm_update_peer_join: do_dc_join_ack: Node vm1[2204477632] - join-1 phase 3 -> 4
Jun  5 15:36:17 vm1 crmd[4839]:     info: do_dc_join_ack: join-1: Updating node state to member for vm1
Jun  5 15:36:17 vm1 crmd[4839]:     info: erase_status_tag: Deleting xpath: //node_state[@uname='vm1']/lrm
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/15, version=0.20.79)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/16, version=0.20.80)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/17, version=0.20.81)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/18, version=0.20.82)
Jun  5 15:36:17 vm1 crmd[4839]:     info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Jun  5 15:36:17 vm1 crmd[4839]:     info: update_attrd: Connecting to attrd... 5 retries remaining
Jun  5 15:36:17 vm1 attrd[4579]:     info: crm_client_new: Connecting 0x915d10 for uid=496 gid=492 pid=4839 id=99b22570-2132-4cb3-b1af-171b94642525
Jun  5 15:36:17 vm1 crmd[4839]:     info: abort_transition_graph: do_te_invoke:155 - Triggered transition abort (complete=1) : Peer Cancelled
Jun  5 15:36:17 vm1 attrd[4579]:   notice: attrd_local_callback: Sending full refresh (origin=crmd)
Jun  5 15:36:17 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: default_ping_set(1) (100)
Jun  5 15:36:17 vm1 attrd[4579]:   notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='probe_complete']: No such device or address (rc=-6, origin=local/attrd/80, version=0.20.82)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.20.82)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/20, version=0.20.82)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/21, version=0.20.82)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/22, version=0.20.82)
Jun  5 15:36:17 vm1 pengine[4580]:   notice: unpack_config: On loss of CCM Quorum: Ignore
Jun  5 15:36:17 vm1 pengine[4580]:  warning: unpack_nodes: Blind faith: not fencing unseen nodes
Jun  5 15:36:17 vm1 pengine[4580]:     info: determine_online_status_fencing: Node vm1 is active
Jun  5 15:36:17 vm1 pengine[4580]:     info: determine_online_status: Node vm1 is online
Jun  5 15:36:17 vm1 pengine[4580]:  warning: pe_fence_node: Node vm2 will be fenced because the node is no longer part of the cluster
Jun  5 15:36:17 vm1 pengine[4580]:  warning: determine_online_status: Node vm2 is unclean
Jun  5 15:36:17 vm1 pengine[4580]:     info: clone_print:  Clone Set: cl1 [st1]
Jun  5 15:36:17 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:36:17 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:36:17 vm1 pengine[4580]:     info: native_print: prmDummy (ocf::pacemaker:Dummy): Stopped
Jun  5 15:36:17 vm1 pengine[4580]:     info: clone_print:  Clone Set: clnPing [prmPing]
Jun  5 15:36:17 vm1 pengine[4580]:     info: short_print:      Started: [ vm2 ]
Jun  5 15:36:17 vm1 pengine[4580]:     info: short_print:      Stopped: [ vm1 ]
Jun  5 15:36:17 vm1 pengine[4580]:     info: native_color: Resource st1:1 cannot run anywhere
Jun  5 15:36:17 vm1 pengine[4580]:     info: native_color: Resource prmDummy cannot run anywhere
Jun  5 15:36:17 vm1 pengine[4580]:     info: native_color: Resource prmPing:1 cannot run anywhere
Jun  5 15:36:17 vm1 pengine[4580]:  warning: custom_action: Action st1:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:36:17 vm1 pengine[4580]:  warning: custom_action: Action prmPing:0_stop_0 on vm2 is unrunnable (offline)
Jun  5 15:36:17 vm1 pengine[4580]:     info: RecurringOp:  Start recurring monitor (10s) for prmPing:0 on vm1
Jun  5 15:36:17 vm1 pengine[4580]:  warning: stage6: Scheduling Node vm2 for STONITH
Jun  5 15:36:17 vm1 pengine[4580]:     info: native_stop_constraints: st1:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:36:17 vm1 pengine[4580]:     info: native_stop_constraints: prmPing:0_stop_0 is implicit after vm2 is fenced
Jun  5 15:36:17 vm1 pengine[4580]:   notice: LogActions: Move    st1:0 (Started vm2 -> vm1)
Jun  5 15:36:17 vm1 pengine[4580]:     info: LogActions: Leave   st1:1 (Stopped)
Jun  5 15:36:17 vm1 pengine[4580]:     info: LogActions: Leave   prmDummy (Stopped)
Jun  5 15:36:17 vm1 pengine[4580]:   notice: LogActions: Move    prmPing:0 (Started vm2 -> vm1)
Jun  5 15:36:17 vm1 pengine[4580]:     info: LogActions: Leave   prmPing:1 (Stopped)
Jun  5 15:36:17 vm1 crmd[4839]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Specification mandate value for attribute CRM_meta_default_ping_set
Jun  5 15:36:17 vm1 crmd[4839]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:36:17 vm1 crmd[4839]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:36:17 vm1 crmd[4839]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : attributes construct error
Jun  5 15:36:17 vm1 crmd[4839]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:36:17 vm1 crmd[4839]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:36:17 vm1 crmd[4839]:    error: crm_xml_err: XML Error: Entity: line 1: parser error : Couldn't find end of Start Tag attributes line 1
Jun  5 15:36:17 vm1 crmd[4839]:    error: crm_xml_err: XML Error: 2" on_node="vm2" on_node_uuid="2221254848"><attributes CRM_meta_default_ping_set
Jun  5 15:36:17 vm1 crmd[4839]:    error: crm_xml_err: XML Error:                                                                                ^
Jun  5 15:36:17 vm1 crmd[4839]:  warning: string2xml: Parsing failed (domain=1, level=3, code=73): Couldn't find end of Start Tag attributes line 1
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/81, version=0.20.82)
Jun  5 15:36:17 vm1 pengine[4580]:  warning: process_pe_message: Calculated Transition 15: /var/lib/pacemaker/pengine/pe-warn-12.bz2
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/82, version=0.20.83)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section //cib/status//node_state[@id='2204477632']//transient_attributes//nvpair[@name='default_ping_set(1)']: No such device or address (rc=-6, origin=local/attrd/83, version=0.20.83)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_query operation for section /cib: OK (rc=0, origin=local/attrd/84, version=0.20.83)
Jun  5 15:36:17 vm1 cib[4576]:     info: cib_process_request: Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/85, version=0.20.84)
Jun  5 15:36:18 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:36:18 vm1 lrmd[4578]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:36:18 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:36:18 vm1 pengine[4580]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:36:18 vm1 attrd[4579]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:36:18 vm1 pacemakerd[4574]:    error: child_death_dispatch: Managed process 4839 (crmd) dumped core
Jun  5 15:36:18 vm1 pacemakerd[4574]:   notice: pcmk_child_exit: Child process crmd terminated with signal 11 (pid=4839, core=1)
Jun  5 15:36:18 vm1 pacemakerd[4574]:   notice: stop_child: Stopping pengine: Sent -15 to process 4580
Jun  5 15:36:18 vm1 pengine[4580]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:36:18 vm1 pengine[4580]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:36:18 vm1 pengine[4580]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:36:18 vm1 pacemakerd[4574]:     info: pcmk_child_exit: Child process pengine exited (pid=4580, rc=0)
Jun  5 15:36:18 vm1 pacemakerd[4574]:   notice: stop_child: Stopping attrd: Sent -15 to process 4579
Jun  5 15:36:18 vm1 attrd[4579]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:36:18 vm1 attrd[4579]:     info: attrd_shutdown: Exiting
Jun  5 15:36:18 vm1 attrd[4579]:   notice: main: Exiting...
Jun  5 15:36:18 vm1 attrd[4579]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:36:18 vm1 attrd[4579]:     info: attrd_cib_connection_destroy: Connection to the CIB terminated...
Jun  5 15:36:18 vm1 attrd[4579]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:36:18 vm1 pacemakerd[4574]:     info: pcmk_child_exit: Child process attrd exited (pid=4579, rc=0)
Jun  5 15:36:18 vm1 pacemakerd[4574]:   notice: stop_child: Stopping lrmd: Sent -15 to process 4578
Jun  5 15:36:18 vm1 lrmd[4578]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:36:18 vm1 lrmd[4578]:     info: lrmd_shutdown: Terminating with  0 clients
Jun  5 15:36:18 vm1 lrmd[4578]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:36:18 vm1 lrmd[4578]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:36:18 vm1 stonith-ng[4577]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:36:18 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:36:18 vm1 pacemakerd[4574]:     info: pcmk_child_exit: Child process lrmd exited (pid=4578, rc=0)
Jun  5 15:36:18 vm1 pacemakerd[4574]:   notice: stop_child: Stopping stonith-ng: Sent -15 to process 4577
Jun  5 15:36:18 vm1 stonith-ng[4577]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:36:18 vm1 stonith-ng[4577]:     info: stonith_shutdown: Terminating with  0 clients
Jun  5 15:36:18 vm1 stonith-ng[4577]:     info: cib_connection_destroy: Connection to the CIB closed.
Jun  5 15:36:18 vm1 stonith-ng[4577]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:36:18 vm1 stonith-ng[4577]:     info: main: Done
Jun  5 15:36:18 vm1 stonith-ng[4577]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:36:18 vm1 pacemakerd[4574]:     info: pcmk_child_exit: Child process stonith-ng exited (pid=4577, rc=0)
Jun  5 15:36:18 vm1 pacemakerd[4574]:   notice: stop_child: Stopping cib: Sent -15 to process 4576
Jun  5 15:36:18 vm1 cib[4576]:     info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Jun  5 15:36:18 vm1 cib[4576]:     info: crm_client_destroy: Destroying 0 events
Jun  5 15:36:18 vm1 cib[4576]:     info: cib_shutdown: All clients disconnected (0)
Jun  5 15:36:18 vm1 cib[4576]:     info: terminate_cib: initiate_exit: Disconnecting from cluster infrastructure
Jun  5 15:36:18 vm1 cib[4576]:     info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
Jun  5 15:36:18 vm1 cib[4576]:   notice: terminate_cs_connection: Disconnecting from Corosync
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4839-29)
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4839-29) state:2
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fe7b360
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4839-29-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4839-29-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4839-29-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4839-30)
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4839-30) state:2
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [QUORUM] quorum_lib_exit_fn lib_exit_fn: conn=0x7fab5fe78d10
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-response-4555-4839-30-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-event-4555-4839-30-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-quorum-request-4555-4839-30-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4579-26)
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4579-26) state:2
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5f76c5c0
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4579-26-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4579-26-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4579-26-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4577-27)
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4577-27) state:2
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fb73090
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4577-27-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4577-27-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4577-27-header
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] message_handler_req_lib_cpg_leave got leave request on 0x7fab5fb74a70
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] message_handler_req_lib_cpg_finalize cpg finalize for conn=0x7fab5fb74a70
Jun  5 15:36:18 vm1 cib[4576]:     info: terminate_cs_connection: No Quorum connection
Jun  5 15:36:18 vm1 cib[4576]:     info: crm_cluster_disconnect: Disconnected from corosync
Jun  5 15:36:18 vm1 cib[4576]:     info: terminate_cib: initiate_exit: Exiting from mainloop...
Jun  5 15:36:18 vm1 cib[4576]:     info: cib_shutdown: Disconnected 1 clients
Jun  5 15:36:18 vm1 cib[4576]:     info: cib_shutdown: All clients disconnected (0)
Jun  5 15:36:18 vm1 cib[4576]:     info: terminate_cib: initiate_exit: Disconnecting from cluster infrastructure
Jun  5 15:36:18 vm1 cib[4576]:     info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: corosync
Jun  5 15:36:18 vm1 cib[4576]:   notice: terminate_cs_connection: Disconnecting from Corosync
Jun  5 15:36:18 vm1 cib[4576]:     info: terminate_cs_connection: No CPG connection
Jun  5 15:36:18 vm1 cib[4576]:     info: terminate_cs_connection: No Quorum connection
Jun  5 15:36:18 vm1 cib[4576]:     info: crm_cluster_disconnect: Disconnected from corosync
Jun  5 15:36:18 vm1 cib[4576]:     info: terminate_cib: initiate_exit: Exiting from mainloop...
Jun  5 15:36:18 vm1 cib[4576]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:36:18 vm1 cib[4576]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:36:18 vm1 cib[4576]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:36:18 vm1 cib[4576]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:36:18 vm1 pacemakerd[4574]:     info: pcmk_child_exit: Child process cib exited (pid=4576, rc=0)
Jun  5 15:36:18 vm1 pacemakerd[4574]:   notice: pcmk_shutdown_worker: Shutdown complete
Jun  5 15:36:18 vm1 pacemakerd[4574]:     info: qb_ipcs_us_withdraw: withdrawing server sockets
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4576-28)
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4576-28) state:2
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5fb74a70
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4576-28-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4576-28-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4576-28-header
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] message_handler_req_lib_cpg_finalize cpg finalize for conn=0x7fab5f76d400
Jun  5 15:36:18 vm1 pacemakerd[4574]:     info: main: Exiting pacemakerd
Jun  5 15:36:18 vm1 pacemakerd[4574]:     info: crm_xml_cleanup: Cleaning up memory from libxml2
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4574-25)
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4574-25) state:2
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] cpg_lib_exit_fn exit_fn for conn=0x7fab5f76d400
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-response-4555-4574-25-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-event-4555-4574-25-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cpg-request-4555-4574-25-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-4574-24)
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-4574-24) state:2
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cfg-response-4555-4574-24-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cfg-event-4555-4574-24-header
Jun  5 15:36:18 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cfg-request-4555-4574-24-header
Jun  5 15:36:18 vm1 corosync[4555]:   [CPG   ] message_handler_req_exec_cpg_procleave got procleave message from cluster node -2090489664
Jun  5 15:38:29 vm1 root: Mark:pcmk:1370414309
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-5609-24)
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [5609]
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:38:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:38:48 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-5609-24)
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-5609-24) state:2
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:38:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:38:48 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78d10
Jun  5 15:38:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-5609-24-header
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-5609-24-header
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-5609-24-header
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-5611-24)
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [5611]
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:38:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:38:48 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_write_to_file  writing total of: 8388620
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-5611-24)
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-5611-24) state:2
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:38:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:38:48 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78d10
Jun  5 15:38:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-5611-24-header
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-5611-24-header
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-5611-24-header
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-5616-24)
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [5616]
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:38:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:38:48 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-5616-24)
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-5616-24) state:2
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:38:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:38:48 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78d10
Jun  5 15:38:48 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-5616-24-header
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-5616-24-header
Jun  5 15:38:48 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-5616-24-header
Jun  5 15:39:20 vm1 root: Mark:pcmk:1370414360
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-7601-24)
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [7601]
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:39:37 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:39:37 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-7601-24)
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-7601-24) state:2
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:39:37 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:39:37 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78d10
Jun  5 15:39:37 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-7601-24-header
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-7601-24-header
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-7601-24-header
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-7603-24)
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [7603]
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:39:37 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:39:37 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_write_to_file  writing total of: 8388620
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-7603-24)
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-7603-24) state:2
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:39:37 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:39:37 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78d10
Jun  5 15:39:37 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-7603-24-header
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-7603-24-header
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-7603-24-header
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] handle_new_connection IPC credentials authenticated (4555-7608-24)
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_ipcs_shm_connect connecting to client [7608]
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_open shm size:1048576; real_size:1048576; rb->word_size:262144
Jun  5 15:39:37 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_created connection created
Jun  5 15:39:37 vm1 corosync[4555]:   [CMAP  ] cmap_lib_init_fn lib_init_fn: conn=0x7fab5fe78d10
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_ipcs_dispatch_connection_request HUP conn (4555-7608-24)
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_ipcs_disconnect qb_ipcs_disconnect(4555-7608-24) state:2
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] _del epoll_ctl(del): Bad file descriptor (9)
Jun  5 15:39:37 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_closed cs_ipcs_connection_closed() 
Jun  5 15:39:37 vm1 corosync[4555]:   [CMAP  ] cmap_lib_exit_fn exit_fn for conn=0x7fab5fe78d10
Jun  5 15:39:37 vm1 corosync[4555]:   [MAIN  ] cs_ipcs_connection_destroyed cs_ipcs_connection_destroyed() 
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-response-4555-7608-24-header
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-event-4555-7608-24-header
Jun  5 15:39:37 vm1 corosync[4555]:   [QB    ] qb_rb_close Free'ing ringbuffer: /dev/shm/qb-cmap-request-4555-7608-24-header
Jun  5 15:43:35 vm1 root: Mark:pcmk:1370414615
