Oct 15 15:15:47 [14853] vm1 corosync notice  [MAIN  ] main.c:main:1171 Corosync Cluster Engine ('2.3.2.4-805b3'): started and ready to provide service.
Oct 15 15:15:47 [14853] vm1 corosync info    [MAIN  ] main.c:main:1172 Corosync built-in features: watchdog upstart snmp pie relro bindnow
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:851 Token Timeout (1000 ms) retransmit timeout (238 ms)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:854 token hold (180 ms) retransmits before loss (4 retrans)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:861 join (50 ms) send_join (0 ms) consensus (1200 ms) merge (200 ms)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:864 downcheck (1000 ms) fail to recv const (2500 msgs)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:866 seqno unchanged const (30 rotations) Maximum network MTU 1401
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:870 window size per rotation (50 messages) maximum messages per rotation (17 messages)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:874 missed count const (5 messages)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:877 send threads (0 threads)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:880 RRP token expired timeout (238 ms)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:883 RRP token problem counter (10000 ms)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:886 RRP threshold (10 problem count)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:889 RRP multicast threshold (100 problem count)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:892 RRP automatic recovery check timeout (1000 ms)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:894 RRP mode set to active.
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:897 heartbeat_failures_allowed (0)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:899 max_network_delay (50 ms)
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:totemsrp_initialize:922 HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
Oct 15 15:15:47 [14853] vm1 corosync notice  [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Oct 15 15:15:47 [14853] vm1 corosync notice  [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Oct 15 15:15:47 [14853] vm1 corosync notice  [TOTEM ] totemnet.c:totemnet_instance_initialize:242 Initializing transport (UDP/IP Multicast).
Oct 15 15:15:47 [14853] vm1 corosync notice  [TOTEM ] totemcrypto.c:init_nss:579 Initializing transmit/receive security (NSS) crypto: aes256 hash: sha1
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:905 Receive multicast socket recv buffer size (320000 bytes).
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:911 Transmit multicast socket send buffer size (320000 bytes).
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:917 Local receive multicast loop socket recv buffer size (320000 bytes).
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:923 Local transmit multicast loop socket send buffer size (320000 bytes).
Oct 15 15:15:47 [14853] vm1 corosync notice  [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.101.141] is now up.
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:main_iface_change_fn:4586 Created or loaded sequence id 0.192.168.101.141 for this ring.
Oct 15 15:15:47 [14853] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration map access [0]
Oct 15 15:15:47 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on cmap [0]
Oct 15 15:15:47 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Oct 15 15:15:47 [14853] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync configuration service [1]
Oct 15 15:15:47 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on cfg [1]
Oct 15 15:15:47 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Oct 15 15:15:47 [14853] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster closed process group service v1.01 [2]
Oct 15 15:15:47 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on cpg [2]
Oct 15 15:15:47 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Oct 15 15:15:47 [14853] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync profile loading service [4]
Oct 15 15:15:47 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:851 NOT Initializing IPC on pload [4]
Oct 15 15:15:47 [14853] vm1 corosync info    [WD    ] wd.c:setup_watchdog:651 Watchdog is now been tickled by corosync.
Oct 15 15:15:47 [14853] vm1 corosync debug   [WD    ] wd.c:setup_watchdog:652 Software Watchdog
Oct 15 15:15:47 [14853] vm1 corosync info    [WD    ] wd.c:wd_scan_resources:580 no resources configured.
Oct 15 15:15:47 [14853] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync watchdog service [7]
Oct 15 15:15:47 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:851 NOT Initializing IPC on wd [7]
Oct 15 15:15:47 [14853] vm1 corosync notice  [QUORUM] vsf_quorum.c:quorum_exec_init_fn:274 Using quorum provider corosync_votequorum
Oct 15 15:15:47 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:votequorum_readconfig:967 Reading configuration (runtime: 0)
Oct 15 15:15:47 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:votequorum_read_nodelist_configuration:886 No nodelist defined or our node is not in the nodelist
Oct 15 15:15:47 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=2
Oct 15 15:15:47 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=2
Oct 15 15:15:47 [14853] vm1 corosync notice  [VOTEQ ] votequorum.c:are_we_quorate:744 Waiting for all cluster members. Current votes: 1 expected_votes: 2
Oct 15 15:15:47 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: Yes First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Oct 15 15:15:47 [14853] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync vote quorum service v1.0 [5]
Oct 15 15:15:47 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on votequorum [5]
Oct 15 15:15:47 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Oct 15 15:15:47 [14853] vm1 corosync notice  [SERV  ] service.c:corosync_service_link_and_init:174 Service engine loaded: corosync cluster quorum service v0.1 [3]
Oct 15 15:15:47 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_service_init:865 Initializing IPC on quorum [3]
Oct 15 15:15:47 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_get_ipc_type:811 No configured qb.ipc_type. Using native ipc
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:905 Receive multicast socket recv buffer size (320000 bytes).
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:911 Transmit multicast socket send buffer size (320000 bytes).
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:917 Local receive multicast loop socket recv buffer size (320000 bytes).
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemudp.c:totemudp_build_sockets_ip:923 Local transmit multicast loop socket send buffer size (320000 bytes).
Oct 15 15:15:47 [14853] vm1 corosync notice  [TOTEM ] totemudp.c:timer_function_netif_check_timeout:670 The network interface [192.168.102.141] is now up.
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2036 entering GATHER state from 15.
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_token_create:3087 Creating commit token because I am the rep.
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_save:1500 Saving state aru 0 high seq received 0
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_ring_id_set_and_store:3332 Storing new sequence id for ring 4
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_enter:2084 entering COMMIT state.
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4433 got commit token
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2121 entering RECOVERY state.
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2167 position [0] member 192.168.101.141:
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2171 previous ring seq 0 rep 192.168.101.141
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2177 aru 0 high delivered 0 received flag 1
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2275 Did not need to originate any messages in recovery.
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4433 got commit token
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4486 Sending initial ORF token
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4433 got commit token
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4486 Sending initial ORF token
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4433 got commit token
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4486 Sending initial ORF token
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4433 got commit token
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4486 Sending initial ORF token
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4433 got commit token
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4486 Sending initial ORF token
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3748 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Oct 15 15:15:47 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3759 install seq 0 aru 0 high seq received 0
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3748 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3759 install seq 0 aru 0 high seq received 0
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3748 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3759 install seq 0 aru 0 high seq received 0
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3748 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3759 install seq 0 aru 0 high seq received 0
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3778 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_reset:1516 Resetting old ring state
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:deliver_messages_from_recovery_to_regular:1722 recovery to regular 1-0
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Oct 15 15:15:48 [14853] vm1 corosync debug   [MAIN  ] main.c:member_object_joined:333 Member joined: r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) 
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_operational_enter:1960 entering OPERATIONAL state.
Oct 15 15:15:48 [14853] vm1 corosync notice  [TOTEM ] totemsrp.c:memb_state_operational_enter:1966 A new membership (192.168.101.141:4) was formed. Members joined: -1062705779
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261517]: votes: 1, expected: 2 flags: 12
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: Yes First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=2
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=2
Oct 15 15:15:48 [14853] vm1 corosync notice  [VOTEQ ] votequorum.c:are_we_quorate:744 Waiting for all cluster members. Current votes: 1 expected_votes: 2
Oct 15 15:15:48 [14853] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync configuration map access
Oct 15 15:15:48 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_sync_activate:386 Single node sync -> no action
Oct 15 15:15:48 [14853] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:0 left:0)
Oct 15 15:15:48 [14853] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 chosen downlist: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:0 left:0)
Oct 15 15:15:48 [14853] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync cluster closed process group service v1.01
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: Yes First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261517]: votes: 1, expected: 2 flags: 12
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: Yes First: Yes Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=2
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=2
Oct 15 15:15:48 [14853] vm1 corosync notice  [VOTEQ ] votequorum.c:are_we_quorate:744 Waiting for all cluster members. Current votes: 1 expected_votes: 2
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Oct 15 15:15:48 [14853] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync vote quorum service v1.0
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=1, expected_votes=2
Oct 15 15:15:48 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=2
Oct 15 15:15:48 [14853] vm1 corosync notice  [VOTEQ ] votequorum.c:are_we_quorate:744 Waiting for all cluster members. Current votes: 1 expected_votes: 2
Oct 15 15:15:48 [14853] vm1 corosync notice  [QUORUM] vsf_quorum.c:log_view_list:132 Members[1]: -1062705779
Oct 15 15:15:48 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to (nil), length = 52
Oct 15 15:15:48 [14853] vm1 corosync notice  [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 0
Oct 15 15:15:48 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14860]
Oct 15 15:15:48 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:48 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:48 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:48 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:48 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14860-26)
Oct 15 15:15:48 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14860-26) state:2
Oct 15 15:15:48 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:48 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:48 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:48 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cfg-response-14855-14860-26-header
Oct 15 15:15:48 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cfg-event-14855-14860-26-header
Oct 15 15:15:48 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cfg-request-14855-14860-26-header
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2036 entering GATHER state from 11.
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_token_create:3087 Creating commit token because I am the rep.
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_save:1500 Saving state aru 6 high seq received 6
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_ring_id_set_and_store:3332 Storing new sequence id for ring 8
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_enter:2084 entering COMMIT state.
Oct 15 15:15:48 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4433 got commit token
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:timer_function_orf_token_timeout:1655 The token was lost in the COMMIT state.
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2036 entering GATHER state from 4.
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_gather_enter:2036 entering GATHER state from 11.
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_token_create:3087 Creating commit token because I am the rep.
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_ring_id_set_and_store:3332 Storing new sequence id for ring c
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_commit_enter:2084 entering COMMIT state.
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4433 got commit token
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2121 entering RECOVERY state.
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2163 TRANS [0] member 192.168.101.141:
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2167 position [0] member 192.168.101.141:
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2171 previous ring seq 4 rep 192.168.101.141
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2177 aru 6 high delivered 6 received flag 1
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2167 position [1] member 192.168.101.142:
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2171 previous ring seq 4 rep 192.168.101.142
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2177 aru 6 high delivered 6 received flag 1
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2167 position [2] member 192.168.101.143:
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2171 previous ring seq 4 rep 192.168.101.143
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2177 aru 6 high delivered 6 received flag 1
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_recovery_enter:2275 Did not need to originate any messages in recovery.
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4433 got commit token
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4486 Sending initial ORF token
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4433 got commit token
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4486 Sending initial ORF token
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4433 got commit token
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4486 Sending initial ORF token
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4433 got commit token
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_memb_commit_token:4486 Sending initial ORF token
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3748 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3759 install seq 0 aru 0 high seq received 0
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3748 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3759 install seq 0 aru 0 high seq received 0
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3748 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3759 install seq 0 aru 0 high seq received 0
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3748 token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3759 install seq 0 aru 0 high seq received 0
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:message_handler_orf_token:3778 retrans flag count 4 token aru 0 install seq 0 aru 0 0
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:old_ring_state_reset:1516 Resetting old ring state
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:deliver_messages_from_recovery_to_regular:1722 recovery to regular 1-0
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 1
Oct 15 15:15:49 [14853] vm1 corosync debug   [MAIN  ] main.c:member_object_joined:333 Member joined: r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) 
Oct 15 15:15:49 [14853] vm1 corosync debug   [MAIN  ] main.c:member_object_joined:333 Member joined: r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) 
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totemsrp.c:memb_state_operational_enter:1960 entering OPERATIONAL state.
Oct 15 15:15:49 [14853] vm1 corosync notice  [TOTEM ] totemsrp.c:memb_state_operational_enter:1966 A new membership (192.168.101.141:12) was formed. Members joined: -1062705778 -1062705777
Oct 15 15:15:49 [14853] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync configuration map access
Oct 15 15:15:49 [14853] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ; members(old:1 left:0)
Oct 15 15:15:49 [14853] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:1 left:0)
Oct 15 15:15:49 [14853] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 comparing: sender r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ; members(old:1 left:0)
Oct 15 15:15:49 [14853] vm1 corosync debug   [CPG   ] cpg.c:downlist_log:776 chosen downlist: sender r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ; members(old:1 left:0)
Oct 15 15:15:49 [14853] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync cluster closed process group service v1.01
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: Yes First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261518]: votes: 1, expected: 2 flags: 4
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: Yes First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=2, expected_votes=2
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=2
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=2
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705779
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:are_we_quorate:777 quorum regained, resuming activity
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261518
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261519
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261519]: votes: 1, expected: 2 flags: 4
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: Yes First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=3, expected_votes=2
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:votequorum_exec_send_expectedvotes_notification:1417 Sending expected votes callback
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=2
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=2
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705779
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261519
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[3232261517]: votes: 1, expected: 2 flags: 4
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:decode_flags:587 flags: quorate: No Leaving: No WFA Status: Yes First: No Qdevice: No QdeviceAlive: No QdeviceCastVote: No QdeviceMasterWins: No
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=3, expected_votes=3
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=2
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=2
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705779
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1604 got nodeinfo message from cluster node 3232261517
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:message_handler_req_exec_votequorum_nodeinfo:1609 nodeinfo message[0]: votes: 0, expected: 0 flags: 0
Oct 15 15:15:49 [14853] vm1 corosync debug   [SYNC  ] sync.c:sync_barrier_handler:232 Committing synchronization for corosync vote quorum service v1.0
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:recalculate_quorum:851 total_votes=3, expected_votes=3
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261517 state=1, votes=1, expected=3
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261518 state=1, votes=1, expected=2
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:calculate_quorum:670 node 3232261519 state=1, votes=1, expected=2
Oct 15 15:15:49 [14853] vm1 corosync debug   [VOTEQ ] votequorum.c:get_lowest_node_id:527 lowest node id: -1062705779 us: -1062705779
Oct 15 15:15:49 [14853] vm1 corosync notice  [QUORUM] vsf_quorum.c:quorum_api_set_quorum:148 This node is within the primary component and will provide service.
Oct 15 15:15:49 [14853] vm1 corosync notice  [QUORUM] vsf_quorum.c:log_view_list:132 Members[3]: -1062705779 -1062705778 -1062705777
Oct 15 15:15:49 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to (nil), length = 60
Oct 15 15:15:49 [14853] vm1 corosync notice  [MAIN  ] main.c:corosync_sync_completed:276 Completed service synchronization, ready to provide service.
Oct 15 15:15:49 [14853] vm1 corosync debug   [TOTEM ] totempg.c:totempg_waiting_trans_ack_cb:285 waiting_trans_ack changed to 0
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/root
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: main: 	Checking for old instances of pacemakerd
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: crm_ipc_connect: 	Could not establish pacemakerd connection: Connection refused (111)
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14865]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: get_cluster_type: 	Testing with Corosync
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0caafb43b0
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14865]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14865-27-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14865-27-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14865-27-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: get_cluster_type: 	Detected an active 'corosync' cluster
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: mcp_read_config: 	Reading configure for stack: corosync
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0b5cf0
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14865-27)
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14865-27) state:2
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0b5cf0
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14865-27-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14865-27-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14865-27-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:   notice: mcp_read_config: 	Configured corosync to accept connections from group 492: OK (1)
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14865-26-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14865-26-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14865-26-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:   notice: crm_add_logfile: 	Additional logging available in /var/log/ha-debug
Oct 15 15:15:50 [14865] vm1 pacemakerd:   notice: main: 	Starting Pacemaker 1.1.11-0.284.6a5e863.git.el6 (Build: 6a5e863):  generated-manpages agent-manpages ascii-docs ncurses libqb-logging libqb-ipc lha-fencing nagios  corosync-native snmp
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: main: 	Maximum core file size is: 18446744073709551615
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: qb_ipcs_us_publish: 	server name: pacemakerd
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14865-26)
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14865-26) state:2
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0caafb43b0
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14865-26-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14865-26-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14865-26-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14865]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: cluster_connect_cfg: 	Our nodeid: -1062705779
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 9155
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14865]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7f0cab0b7270, cpd=0x7f0cab1b7884
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: get_local_nodeid: 	Local nodeid is 3232261517
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: crm_get_peer: 	Created entry 24ff3df3-060a-4a43-b8d7-5e0a3f9c874e/0x962120 for node (null)/3232261517 (1 total)
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261517] - corosync-cpg is now online
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: cluster_connect_quorum: 	Configuring Pacemaker to obtain quorum from Corosync
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 14865
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14865]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:quorum_lib_init_fn:316 lib_init_fn: conn=0x7f0cab0b4c00
Oct 15 15:15:50 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_gettype:471 got quorum_type request on 0x7f0cab0b4c00
Oct 15 15:15:50 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_getquorate:395 got quorate request on 0x7f0cab0b4c00
Oct 15 15:15:50 [14865] vm1 pacemakerd:   notice: cluster_connect_quorum: 	Quorum acquired
Oct 15 15:15:50 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:412 got trackstart request on 0x7f0cab0b4c00
Oct 15 15:15:50 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:420 sending initial status to 0x7f0cab0b4c00
Oct 15 15:15:50 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to 0x7f0cab0b4c00, length = 60
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14865]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0b66f0
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14865-29-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14865-29-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14865-29-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14865-29)
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14865-29) state:2
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0b66f0
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14865-29-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14865-29-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14865-29-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14865]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0b66f0
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14865-29-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14865-29-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14865-29-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Oct 15 15:15:50 [14865] vm1 pacemakerd:   notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: crm_get_peer: 	Node 3232261517 is now known as vm1
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14865-29)
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14865-29) state:2
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0b66f0
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14865-29-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14865-29-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14865-29-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: start_child: 	Using uid=496 and group=492 for process cib
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: start_child: 	Forked child 14869 for process cib
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000000100 (was 00000000000000000000000004000000)
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: start_child: 	Forked child 14870 for process stonith-ng
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000100100 (was 00000000000000000000000000000100)
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: start_child: 	Forked child 14871 for process lrmd
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000100110 (was 00000000000000000000000000100100)
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: start_child: 	Using uid=496 and group=492 for process attrd
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: start_child: 	Forked child 14872 for process attrd
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000101110 (was 00000000000000000000000000100110)
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: start_child: 	Using uid=496 and group=492 for process pengine
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: start_child: 	Forked child 14873 for process pengine
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000111110 (was 00000000000000000000000000101110)
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: start_child: 	Using uid=496 and group=492 for process crmd
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: start_child: 	Forked child 14874 for process crmd
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm1 now has process list: 00000000000000000000000000111310 (was 00000000000000000000000000111110)
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: main: 	Starting mainloop
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: pcmk_quorum_notification: 	Membership 12: quorum retained (3)
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: pcmk_quorum_notification: 	Member[0] 3232261517 
Oct 15 15:15:50 [14865] vm1 pacemakerd:   notice: crm_update_peer_state: 	pcmk_quorum_notification: Node vm1[3232261517] - state is now member (was (null))
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: pcmk_quorum_notification: 	Member[1] 3232261518 
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: crm_get_peer: 	Created entry 59e9d29e-9a67-417b-9412-5ee33908531a/0xa63cc0 for node (null)/3232261518 (2 total)
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: pcmk_quorum_notification: 	Obtaining name for new node 3232261518
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14865]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14869] vm1        cib:     info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Oct 15 15:15:50 [14869] vm1        cib:   notice: main: 	Using new config location: /var/lib/pacemaker/cib
Oct 15 15:15:50 [14870] vm1 stonith-ng:     info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/root
Oct 15 15:15:50 [14870] vm1 stonith-ng:     info: get_cluster_type: 	Verifying cluster type: 'corosync'
Oct 15 15:15:50 [14870] vm1 stonith-ng:     info: get_cluster_type: 	Assuming an active 'corosync' cluster
Oct 15 15:15:50 [14870] vm1 stonith-ng:   notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Oct 15 15:15:50 [14869] vm1        cib:     info: get_cluster_type: 	Verifying cluster type: 'corosync'
Oct 15 15:15:50 [14869] vm1        cib:     info: get_cluster_type: 	Assuming an active 'corosync' cluster
Oct 15 15:15:50 [14869] vm1        cib:     info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.xml (digest: /var/lib/pacemaker/cib/cib.xml.sig)
Oct 15 15:15:50 [14869] vm1        cib:  warning: retrieveCib: 	Cluster configuration not found: /var/lib/pacemaker/cib/cib.xml
Oct 15 15:15:50 [14869] vm1        cib:  warning: readCibXmlFile: 	Primary configuration corrupt or unusable, trying backups in /var/lib/pacemaker/cib
Oct 15 15:15:50 [14869] vm1        cib:  warning: readCibXmlFile: 	Continuing with an empty configuration.
Oct 15 15:15:50 [14869] vm1        cib:     info: validate_with_relaxng: 	Creating RNG parser context
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14872] vm1      attrd:     info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Oct 15 15:15:50 [14872] vm1      attrd:     info: main: 	Starting up
Oct 15 15:15:50 [14872] vm1      attrd:     info: get_cluster_type: 	Verifying cluster type: 'corosync'
Oct 15 15:15:50 [14872] vm1      attrd:     info: get_cluster_type: 	Assuming an active 'corosync' cluster
Oct 15 15:15:50 [14872] vm1      attrd:   notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Oct 15 15:15:50 [14871] vm1       lrmd:     info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/root
Oct 15 15:15:50 [14871] vm1       lrmd:     info: qb_ipcs_us_publish: 	server name: lrmd
Oct 15 15:15:50 [14871] vm1       lrmd:     info: main: 	Starting
Oct 15 15:15:50 [14873] vm1    pengine:     info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Oct 15 15:15:50 [14873] vm1    pengine:    debug: main: 	Init server comms
Oct 15 15:15:50 [14873] vm1    pengine:     info: qb_ipcs_us_publish: 	server name: pengine
Oct 15 15:15:50 [14873] vm1    pengine:     info: main: 	Starting pengine
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0b66f0
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 9160
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 9162
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 9159
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 8193
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14870]
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14865-29-header
Oct 15 15:15:50 [14874] vm1       crmd:     info: crm_log_init: 	Changed active directory to /var/lib/heartbeat/cores/hacluster
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14865-29-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14865-29-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Oct 15 15:15:50 [14865] vm1 pacemakerd:   notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261518] - state is now member (was (null))
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: pcmk_quorum_notification: 	Member[2] 3232261519 
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: crm_get_peer: 	Created entry a0bc9878-acc9-4510-bfd4-6416b4e4ddd2/0xa63530 for node (null)/3232261519 (3 total)
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Oct 15 15:15:50 [14874] vm1       crmd:   notice: main: 	CRM Git Version: 6a5e863
Oct 15 15:15:50 [14874] vm1       crmd:    debug: crmd_init: 	Starting crmd
Oct 15 15:15:50 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_STARTUP: [ state=S_STARTING cause=C_STARTUP origin=crmd_init ]
Oct 15 15:15:50 [14874] vm1       crmd:     info: do_log: 	FSA: Input I_STARTUP from crmd_init() received in state S_STARTING
Oct 15 15:15:50 [14874] vm1       crmd:    debug: do_startup: 	Registering Signal Handlers
Oct 15 15:15:50 [14874] vm1       crmd:    debug: do_startup: 	Creating CIB and LRM objects
Oct 15 15:15:50 [14874] vm1       crmd:     info: get_cluster_type: 	Verifying cluster type: 'corosync'
Oct 15 15:15:50 [14874] vm1       crmd:     info: get_cluster_type: 	Assuming an active 'corosync' cluster
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: pcmk_quorum_notification: 	Obtaining name for new node 3232261519
Oct 15 15:15:50 [14874] vm1       crmd:     info: crm_ipc_connect: 	Could not establish cib_shm connection: Connection refused (111)
Oct 15 15:15:50 [14874] vm1       crmd:    debug: cib_native_signon_raw: 	Connection unsuccessful (0 (nil))
Oct 15 15:15:50 [14874] vm1       crmd:    debug: cib_native_signon_raw: 	Connection to CIB failed: Transport endpoint is not connected
Oct 15 15:15:50 [14874] vm1       crmd:    debug: cib_native_signoff: 	Signing out of the CIB Service
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14869] vm1        cib:    debug: activateCibXml: 	Triggering CIB write for start op
Oct 15 15:15:50 [14869] vm1        cib:     info: startCib: 	CIB Initialization completed successfully
Oct 15 15:15:50 [14869] vm1        cib:   notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7f0cab0c3c30, cpd=0x7f0cab0c4864
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14865-29)
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14865-29) state:2
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0b66f0
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14865-29-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14865-29-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14865-29-header
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: get_local_nodeid: 	Local nodeid is 3232261517
Oct 15 15:15:50 [14870] vm1 stonith-ng:     info: crm_get_peer: 	Created entry 9bc0ac17-61d3-4850-b65d-ec668a119d7d/0xd2c660 for node (null)/3232261517 (1 total)
Oct 15 15:15:50 [14870] vm1 stonith-ng:     info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Oct 15 15:15:50 [14870] vm1 stonith-ng:     info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261517] - corosync-cpg is now online
Oct 15 15:15:50 [14870] vm1 stonith-ng:     info: init_cs_connection_once: 	Connection to 'corosync': established
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14872]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7f0cab0b66f0, cpd=0x7f0cab0beb94
Oct 15 15:15:50 [14872] vm1      attrd:    debug: get_local_nodeid: 	Local nodeid is 3232261517
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 14870
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14865]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0b9c10
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14869]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7f0cab0bb530, cpd=0x7f0cab0bbc64
Oct 15 15:15:50 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14872] vm1      attrd:     info: crm_get_peer: 	Created entry 3643c6c1-537a-499b-95bb-5577c12b9937/0x1760120 for node (null)/3232261517 (1 total)
Oct 15 15:15:50 [14872] vm1      attrd:     info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Oct 15 15:15:50 [14872] vm1      attrd:     info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261517] - corosync-cpg is now online
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14865-31-header
Oct 15 15:15:50 [14872] vm1      attrd:   notice: crm_update_peer_state: 	attrd_peer_change_cb: Node (null)[3232261517] - state is now member (was (null))
Oct 15 15:15:50 [14872] vm1      attrd:     info: init_cs_connection_once: 	Connection to 'corosync': established
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14865-31-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14865-31-header
Oct 15 15:15:50 [14865] vm1 pacemakerd:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Oct 15 15:15:50 [14865] vm1 pacemakerd:   notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261519] - state is now member (was (null))
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14870]
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: crm_get_peer: 	Node 3232261518 is now known as vm2
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000000100 (was 00000000000000000000000000000000)
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000100100 (was 00000000000000000000000000000100)
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000100110 (was 00000000000000000000000000100100)
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000101110 (was 00000000000000000000000000100110)
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000111110 (was 00000000000000000000000000101110)
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm2 now has process list: 00000000000000000000000000111310 (was 00000000000000000000000000111110)
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14865] vm1 pacemakerd:     info: crm_get_peer: 	Node 3232261519 is now known as vm3
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000000100 (was 00000000000000000000000000000000)
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000100100 (was 00000000000000000000000000000100)
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000100110 (was 00000000000000000000000000100100)
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000101110 (was 00000000000000000000000000100110)
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000111110 (was 00000000000000000000000000101110)
Oct 15 15:15:50 [14865] vm1 pacemakerd:    debug: update_node_processes: 	Node vm3 now has process list: 00000000000000000000000000111310 (was 00000000000000000000000000111110)
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0bfc30
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14865-31)
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14865-31) state:2
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0b9c10
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:50 [14869] vm1        cib:    debug: get_local_nodeid: 	Local nodeid is 3232261517
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14865-31-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14865-31-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14865-31-header
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14870-33-header
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14870-33-header
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14870-33-header
Oct 15 15:15:50 [14870] vm1 stonith-ng:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Oct 15 15:15:50 [14870] vm1 stonith-ng:   notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Oct 15 15:15:50 [14870] vm1 stonith-ng:     info: crm_get_peer: 	Node 3232261517 is now known as vm1
Oct 15 15:15:50 [14870] vm1 stonith-ng:     info: crm_ipc_connect: 	Could not establish cib_rw connection: Connection refused (111)
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: cib_native_signon_raw: 	Connection unsuccessful (0 (nil))
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: cib_native_signon_raw: 	Connection to CIB failed: Transport endpoint is not connected
Oct 15 15:15:50 [14870] vm1 stonith-ng:    debug: cib_native_signoff: 	Signing out of the CIB Service
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 14872
Oct 15 15:15:50 [14869] vm1        cib:     info: crm_get_peer: 	Created entry 625a1694-8174-49bc-8f1b-1f96d5a5319c/0x122c0b0 for node (null)/3232261517 (1 total)
Oct 15 15:15:50 [14869] vm1        cib:     info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Oct 15 15:15:50 [14869] vm1        cib:     info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261517] - corosync-cpg is now online
Oct 15 15:15:50 [14869] vm1        cib:     info: init_cs_connection_once: 	Connection to 'corosync': established
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14870-33)
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14870-33) state:2
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0bfc30
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14870-33-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14870-33-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14870-33-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14872]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0b9dd0
Oct 15 15:15:50 [14872] vm1      attrd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:50 [14872] vm1      attrd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14872-31-header
Oct 15 15:15:50 [14872] vm1      attrd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14872-31-header
Oct 15 15:15:50 [14872] vm1      attrd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14872-31-header
Oct 15 15:15:50 [14872] vm1      attrd:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Oct 15 15:15:50 [14872] vm1      attrd:   notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Oct 15 15:15:50 [14872] vm1      attrd:     info: crm_get_peer: 	Node 3232261517 is now known as vm1
Oct 15 15:15:50 [14872] vm1      attrd:     info: main: 	Cluster connection active
Oct 15 15:15:50 [14872] vm1      attrd:     info: qb_ipcs_us_publish: 	server name: attrd
Oct 15 15:15:50 [14872] vm1      attrd:     info: main: 	Accepting attribute updates
Oct 15 15:15:50 [14872] vm1      attrd:    debug: attrd_cib_connect: 	CIB signon attempt 1
Oct 15 15:15:50 [14872] vm1      attrd:     info: crm_ipc_connect: 	Could not establish cib_rw connection: Connection refused (111)
Oct 15 15:15:50 [14872] vm1      attrd:    debug: cib_native_signon_raw: 	Connection unsuccessful (0 (nil))
Oct 15 15:15:50 [14872] vm1      attrd:    debug: cib_native_signon_raw: 	Connection to CIB failed: Transport endpoint is not connected
Oct 15 15:15:50 [14872] vm1      attrd:    debug: cib_native_signoff: 	Signing out of the CIB Service
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14869]
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0c0dc0
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 8198
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14872-31)
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14872-31) state:2
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0b9dd0
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14872-31-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14872-31-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14872-31-header
Oct 15 15:15:50 [14869] vm1        cib:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:50 [14869] vm1        cib:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14869-33-header
Oct 15 15:15:50 [14869] vm1        cib:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14869-33-header
Oct 15 15:15:50 [14869] vm1        cib:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14869-33-header
Oct 15 15:15:50 [14869] vm1        cib:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Oct 15 15:15:50 [14869] vm1        cib:   notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Oct 15 15:15:50 [14869] vm1        cib:     info: crm_get_peer: 	Node 3232261517 is now known as vm1
Oct 15 15:15:50 [14869] vm1        cib:     info: qb_ipcs_us_publish: 	server name: cib_ro
Oct 15 15:15:50 [14869] vm1        cib:     info: qb_ipcs_us_publish: 	server name: cib_rw
Oct 15 15:15:50 [14869] vm1        cib:     info: qb_ipcs_us_publish: 	server name: cib_shm
Oct 15 15:15:50 [14869] vm1        cib:     info: cib_init: 	Starting cib mainloop
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14869-33)
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14869-33) state:2
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0c0dc0
Oct 15 15:15:50 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14869-33-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14869-33-header
Oct 15 15:15:50 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14869-33-header
Oct 15 15:15:50 [14869] vm1        cib:    debug: get_last_sequence: 	Series file /var/lib/pacemaker/cib/cib.last does not exist
Oct 15 15:15:50 [14869] vm1        cib:    debug: write_cib_contents: 	Writing CIB to disk
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 14869
Oct 15 15:15:50 [14869] vm1        cib:     info: pcmk_cpg_membership: 	Joined[0.0] cib.3232261517 
Oct 15 15:15:50 [14869] vm1        cib:     info: pcmk_cpg_membership: 	Member[0.0] cib.3232261517 
Oct 15 15:15:50 [14869] vm1        cib:     info: crm_get_peer: 	Created entry 051cade5-4f39-41a1-bc91-3104583927bd/0x122e940 for node (null)/3232261518 (2 total)
Oct 15 15:15:50 [14869] vm1        cib:     info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Oct 15 15:15:50 [14869] vm1        cib:     info: pcmk_cpg_membership: 	Member[0.1] cib.3232261518 
Oct 15 15:15:50 [14869] vm1        cib:     info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 8200
Oct 15 15:15:50 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 8197
Oct 15 15:15:50 [14869] vm1        cib:     info: pcmk_cpg_membership: 	Joined[1.0] cib.3232261519 
Oct 15 15:15:50 [14869] vm1        cib:     info: pcmk_cpg_membership: 	Member[1.0] cib.3232261517 
Oct 15 15:15:50 [14869] vm1        cib:     info: pcmk_cpg_membership: 	Member[1.1] cib.3232261518 
Oct 15 15:15:50 [14869] vm1        cib:     info: crm_get_peer: 	Created entry c0fdc043-f03f-41c0-a0e3-7d454eeaed0d/0x122e9b0 for node (null)/3232261519 (3 total)
Oct 15 15:15:50 [14869] vm1        cib:     info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Oct 15 15:15:50 [14869] vm1        cib:     info: pcmk_cpg_membership: 	Member[1.2] cib.3232261519 
Oct 15 15:15:50 [14869] vm1        cib:     info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Oct 15 15:15:50 [14869] vm1        cib:     info: write_cib_contents: 	Wrote version 0.0.0 of the CIB to disk (digest: 3930c46445d2289a49a22e68ead11aaf)
Oct 15 15:15:50 [14869] vm1        cib:    debug: write_cib_contents: 	Wrote digest 3930c46445d2289a49a22e68ead11aaf to disk
Oct 15 15:15:50 [14869] vm1        cib:     info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.za5CTj (digest: /var/lib/pacemaker/cib/cib.nBTEUu)
Oct 15 15:15:50 [14869] vm1        cib:    debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.za5CTj
Oct 15 15:15:51 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705778 (r(0) ip(192.168.101.142) r(1) ip(192.168.102.142) ) for pid 9164
Oct 15 15:15:51 [14869] vm1        cib:     info: crm_client_new: 	Connecting 0x122f110 for uid=0 gid=0 pid=23331 id=06d6cc80-cc71-4fe3-8dc1-0b5a2aeb1a69
Oct 15 15:15:51 [14869] vm1        cib:    debug: handle_new_connection: 	IPC credentials authenticated (14869-23331-10)
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_ipcs_shm_connect: 	connecting to client [23331]
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_mon/29, version=0.0.0)
Oct 15 15:15:51 [14869] vm1        cib:    debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crm_mon (06d6cc80-cc71-4fe3-8dc1-0b5a2aeb1a69): off
Oct 15 15:15:51 [14869] vm1        cib:    debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crm_mon (06d6cc80-cc71-4fe3-8dc1-0b5a2aeb1a69): on
Oct 15 15:15:51 [14869] vm1        cib:     info: crm_client_new: 	Connecting 0x12b3320 for uid=496 gid=492 pid=14874 id=5b415466-b331-49a1-b495-ed97c3cb8b21
Oct 15 15:15:51 [14869] vm1        cib:    debug: handle_new_connection: 	IPC credentials authenticated (14869-14874-11)
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_ipcs_shm_connect: 	connecting to client [14874]
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14874] vm1       crmd:    debug: cib_native_signon_raw: 	Connection to CIB successful
Oct 15 15:15:51 [14869] vm1        cib:    debug: cib_common_callback_worker: 	Setting cib_refresh_notify callbacks for crmd (5b415466-b331-49a1-b495-ed97c3cb8b21): on
Oct 15 15:15:51 [14869] vm1        cib:    debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crmd (5b415466-b331-49a1-b495-ed97c3cb8b21): on
Oct 15 15:15:51 [14874] vm1       crmd:     info: do_cib_control: 	CIB connection established
Oct 15 15:15:51 [14874] vm1       crmd:   notice: crm_cluster_connect: 	Connecting to cluster infrastructure: corosync
Oct 15 15:15:51 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14874]
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:51 [14853] vm1 corosync debug   [CPG   ] cpg.c:cpg_lib_init_fn:1459 lib_init_fn: conn=0x7f0cab0bfbf0, cpd=0x7f0cab0ba5b4
Oct 15 15:15:51 [14874] vm1       crmd:    debug: get_local_nodeid: 	Local nodeid is 3232261517
Oct 15 15:15:51 [14874] vm1       crmd:     info: crm_get_peer: 	Created entry 588c7fd2-4783-4439-b2b1-192c7b6bd1f4/0x16e1ec0 for node (null)/3232261517 (1 total)
Oct 15 15:15:51 [14874] vm1       crmd:     info: crm_get_peer: 	Node 3232261517 has uuid 3232261517
Oct 15 15:15:51 [14874] vm1       crmd:     info: crm_update_peer_proc: 	cluster_connect_cpg: Node (null)[3232261517] - corosync-cpg is now online
Oct 15 15:15:51 [14874] vm1       crmd:     info: init_cs_connection_once: 	Connection to 'corosync': established
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14874]
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0c0bd0
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14874-33-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14874-33-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14874-33-header
Oct 15 15:15:51 [14874] vm1       crmd:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Oct 15 15:15:51 [14874] vm1       crmd:   notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Oct 15 15:15:51 [14874] vm1       crmd:     info: crm_get_peer: 	Node 3232261517 is now known as vm1
Oct 15 15:15:51 [14874] vm1       crmd:     info: peer_update_callback: 	vm1 is now (null)
Oct 15 15:15:51 [14874] vm1       crmd:    debug: cluster_connect_quorum: 	Configuring Pacemaker to obtain quorum from Corosync
Oct 15 15:15:51 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705779 (r(0) ip(192.168.101.141) r(1) ip(192.168.102.141) ) for pid 14874
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14874-33)
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14874-33) state:2
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0c0bd0
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14874-33-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14874-33-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14874-33-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14874]
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:51 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:quorum_lib_init_fn:316 lib_init_fn: conn=0x7f0cab0c0bd0
Oct 15 15:15:51 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_gettype:471 got quorum_type request on 0x7f0cab0c0bd0
Oct 15 15:15:51 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_getquorate:395 got quorate request on 0x7f0cab0c0bd0
Oct 15 15:15:51 [14874] vm1       crmd:   notice: cluster_connect_quorum: 	Quorum acquired
Oct 15 15:15:51 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:412 got trackstart request on 0x7f0cab0c0bd0
Oct 15 15:15:51 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:message_handler_req_lib_quorum_trackstart:420 sending initial status to 0x7f0cab0c0bd0
Oct 15 15:15:51 [14853] vm1 corosync debug   [QUORUM] vsf_quorum.c:send_library_notification:359 sending quorum notification to 0x7f0cab0c0bd0, length = 60
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14874]
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0baf70
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14874-34-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14874-34-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14874-34-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14874-34)
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14874-34) state:2
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0baf70
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14874-34-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14874-34-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14874-34-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [CPG   ] cpg.c:message_handler_req_exec_cpg_procjoin:1260 got procjoin message from cluster node -1062705777 (r(0) ip(192.168.101.143) r(1) ip(192.168.102.143) ) for pid 8202
Oct 15 15:15:51 [14869] vm1        cib:     info: crm_client_new: 	Connecting 0x12b1a10 for uid=0 gid=0 pid=14870 id=d66f721c-1ad2-4350-9a72-db8b684f58c2
Oct 15 15:15:51 [14869] vm1        cib:    debug: handle_new_connection: 	IPC credentials authenticated (14869-14870-12)
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_ipcs_shm_connect: 	connecting to client [14870]
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14874]
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: cib_native_signon_raw: 	Connection to CIB successful
Oct 15 15:15:51 [14869] vm1        cib:    debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crmd (d66f721c-1ad2-4350-9a72-db8b684f58c2): on
Oct 15 15:15:51 [14870] vm1 stonith-ng:   notice: setup_cib: 	Watching for stonith topology changes
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: qb_ipcs_us_publish: 	server name: stonith-ng
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: main: 	Starting stonith-ng mainloop
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: pcmk_cpg_membership: 	Joined[0.0] stonith-ng.3232261517 
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: pcmk_cpg_membership: 	Member[0.0] stonith-ng.3232261517 
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: crm_get_peer: 	Created entry e8e88268-fab1-4a08-8681-3587011ea75e/0xd30690 for node (null)/3232261518 (2 total)
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: pcmk_cpg_membership: 	Member[0.1] stonith-ng.3232261518 
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261518
Oct 15 15:15:51 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/2, version=0.0.0)
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0baf70
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14874-34-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14874-34-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14874-34-header
Oct 15 15:15:51 [14874] vm1       crmd:     info: do_ha_control: 	Connected to the cluster
Oct 15 15:15:51 [14874] vm1       crmd:    debug: do_lrm_control: 	Connecting to the LRM
Oct 15 15:15:51 [14874] vm1       crmd:     info: lrmd_ipc_connect: 	Connecting to lrmd
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14874-34)
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14874-34) state:2
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0baf70
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:51 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/3, version=0.0.0)
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14874-34-header
Oct 15 15:15:51 [14871] vm1       lrmd:     info: crm_client_new: 	Connecting 0xef8d10 for uid=496 gid=492 pid=14874 id=0245b4d8-632b-498b-abda-91d20df4709f
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14874-34-header
Oct 15 15:15:51 [14871] vm1       lrmd:    debug: handle_new_connection: 	IPC credentials authenticated (14871-14874-6)
Oct 15 15:15:51 [14871] vm1       lrmd:    debug: qb_ipcs_shm_connect: 	connecting to client [14874]
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14874-34-header
Oct 15 15:15:51 [14871] vm1       lrmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:15:51 [14871] vm1       lrmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14870]
Oct 15 15:15:51 [14872] vm1      attrd:    debug: attrd_cib_connect: 	CIB signon attempt 2
Oct 15 15:15:51 [14871] vm1       lrmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:15:51 [14869] vm1        cib:     info: crm_client_new: 	Connecting 0x12b5af0 for uid=496 gid=492 pid=14872 id=9176b503-a83c-4b85-ba02-759e6090e522
Oct 15 15:15:51 [14869] vm1        cib:    debug: handle_new_connection: 	IPC credentials authenticated (14869-14872-13)
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_ipcs_shm_connect: 	connecting to client [14872]
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14874] vm1       crmd:     info: do_lrm_control: 	LRM connection established
Oct 15 15:15:51 [14874] vm1       crmd:     info: do_started: 	Delaying start, no membership data (0000000000100000)
Oct 15 15:15:51 [14874] vm1       crmd:    debug: register_fsa_input_adv: 	Stalling the FSA pending further input: source=do_started cause=C_FSA_INTERNAL data=(nil) queue=0
Oct 15 15:15:51 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Exiting the FSA: queue=0, fsa_actions=0x2, stalled=true
Oct 15 15:15:51 [14874] vm1       crmd:     info: pcmk_quorum_notification: 	Membership 12: quorum retained (3)
Oct 15 15:15:51 [14874] vm1       crmd:    debug: pcmk_quorum_notification: 	Member[0] 3232261517 
Oct 15 15:15:51 [14874] vm1       crmd:   notice: crm_update_peer_state: 	pcmk_quorum_notification: Node vm1[3232261517] - state is now member (was (null))
Oct 15 15:15:51 [14874] vm1       crmd:     info: peer_update_callback: 	vm1 is now member (was (null))
Oct 15 15:15:51 [14874] vm1       crmd:    debug: pcmk_quorum_notification: 	Member[1] 3232261518 
Oct 15 15:15:51 [14874] vm1       crmd:     info: crm_get_peer: 	Created entry 372e060a-eee2-47a1-9239-6704e98945eb/0x1827970 for node (null)/3232261518 (2 total)
Oct 15 15:15:51 [14874] vm1       crmd:     info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Oct 15 15:15:51 [14874] vm1       crmd:     info: pcmk_quorum_notification: 	Obtaining name for new node 3232261518
Oct 15 15:15:51 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed register operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=0, reply=0, notify=0, exit=4201864
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:15:51 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/4, version=0.0.0)
Oct 15 15:15:51 [14872] vm1      attrd:    debug: cib_native_signon_raw: 	Connection to CIB successful
Oct 15 15:15:51 [14872] vm1      attrd:     info: attrd_cib_connect: 	Connected to the CIB after 2 attempts
Oct 15 15:15:51 [14869] vm1        cib:    debug: cib_common_callback_worker: 	Setting cib_refresh_notify callbacks for attrd (9176b503-a83c-4b85-ba02-759e6090e522): on
Oct 15 15:15:51 [14872] vm1      attrd:     info: main: 	CIB connection active
Oct 15 15:15:51 [14872] vm1      attrd:     info: pcmk_cpg_membership: 	Joined[0.0] attrd.3232261517 
Oct 15 15:15:51 [14872] vm1      attrd:     info: pcmk_cpg_membership: 	Member[0.0] attrd.3232261517 
Oct 15 15:15:51 [14872] vm1      attrd:     info: crm_get_peer: 	Created entry abd2abca-d2be-43fe-bcc9-3a93e950db08/0x1765f60 for node (null)/3232261518 (2 total)
Oct 15 15:15:51 [14872] vm1      attrd:     info: crm_get_peer: 	Node 3232261518 has uuid 3232261518
Oct 15 15:15:51 [14872] vm1      attrd:     info: pcmk_cpg_membership: 	Member[0.1] attrd.3232261518 
Oct 15 15:15:51 [14872] vm1      attrd:     info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Oct 15 15:15:51 [14872] vm1      attrd:   notice: crm_update_peer_state: 	attrd_peer_change_cb: Node (null)[3232261518] - state is now member (was (null))
Oct 15 15:15:51 [14872] vm1      attrd:     info: pcmk_cpg_membership: 	Joined[1.0] attrd.3232261519 
Oct 15 15:15:51 [14872] vm1      attrd:     info: pcmk_cpg_membership: 	Member[1.0] attrd.3232261517 
Oct 15 15:15:51 [14872] vm1      attrd:     info: pcmk_cpg_membership: 	Member[1.1] attrd.3232261518 
Oct 15 15:15:51 [14872] vm1      attrd:     info: crm_get_peer: 	Created entry 7f3f4fd0-0841-42fc-af5b-e6433194a597/0x1765fd0 for node (null)/3232261519 (3 total)
Oct 15 15:15:51 [14872] vm1      attrd:     info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Oct 15 15:15:51 [14872] vm1      attrd:     info: pcmk_cpg_membership: 	Member[1.2] attrd.3232261519 
Oct 15 15:15:51 [14872] vm1      attrd:     info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Oct 15 15:15:51 [14872] vm1      attrd:   notice: crm_update_peer_state: 	attrd_peer_change_cb: Node (null)[3232261519] - state is now member (was (null))
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0baf70
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14870-34-header
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14870-34-header
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14870-34-header
Oct 15 15:15:51 [14870] vm1 stonith-ng:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Oct 15 15:15:51 [14870] vm1 stonith-ng:   notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: pcmk_cpg_membership: 	Joined[1.0] stonith-ng.3232261519 
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: pcmk_cpg_membership: 	Member[1.0] stonith-ng.3232261517 
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: pcmk_cpg_membership: 	Member[1.1] stonith-ng.3232261518 
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: crm_get_peer: 	Created entry 1e858d23-45f8-43d5-8072-a64ea95bfbc0/0xd2e8e0 for node (null)/3232261519 (3 total)
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: pcmk_cpg_membership: 	Member[1.2] stonith-ng.3232261519 
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261519
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: crm_get_peer: 	Node 3232261518 is now known as vm2
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261518
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: init_cib_cache_cb: 	Updating device list from the cib: init
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14874]
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: unpack_config: 	STONITH timeout: 60000
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: unpack_config: 	STONITH of failed nodes is enabled
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: unpack_config: 	Stop all active resources: false
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: unpack_config: 	Default stickiness: 0
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: unpack_config: 	On loss of CCM Quorum: Stop ALL resources
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: unpack_nodes: 	Creating a fake local node
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: unpack_domains: 	Unpacking domains
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0cb000
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14870-34)
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14870-34) state:2
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0baf70
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14870-34-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14870-34-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14870-34-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14874-35-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14874-35-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14874-35-header
Oct 15 15:15:51 [14874] vm1       crmd:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261518
Oct 15 15:15:51 [14874] vm1       crmd:   notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261518] - state is now member (was (null))
Oct 15 15:15:51 [14874] vm1       crmd:    debug: pcmk_quorum_notification: 	Member[2] 3232261519 
Oct 15 15:15:51 [14874] vm1       crmd:     info: crm_get_peer: 	Created entry 40021629-1033-4f01-9550-f1b5b8ecbf3f/0x1827860 for node (null)/3232261519 (3 total)
Oct 15 15:15:51 [14874] vm1       crmd:     info: crm_get_peer: 	Node 3232261519 has uuid 3232261519
Oct 15 15:15:51 [14874] vm1       crmd:     info: pcmk_quorum_notification: 	Obtaining name for new node 3232261519
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14874-35)
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14874-35) state:2
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0cb000
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14874-35-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14874-35-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14874-35-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14874]
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0baf70
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14874-34-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14874-34-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14874-34-header
Oct 15 15:15:51 [14874] vm1       crmd:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261519
Oct 15 15:15:51 [14874] vm1       crmd:   notice: crm_update_peer_state: 	pcmk_quorum_notification: Node (null)[3232261519] - state is now member (was (null))
Oct 15 15:15:51 [14874] vm1       crmd:    debug: post_cache_update: 	Updated cache after membership event 12.
Oct 15 15:15:51 [14874] vm1       crmd:    debug: post_cache_update: 	post_cache_update added action A_ELECTION_CHECK to the FSA
Oct 15 15:15:51 [14870] vm1 stonith-ng:     info: crm_get_peer: 	Node 3232261519 is now known as vm3
Oct 15 15:15:51 [14870] vm1 stonith-ng:    debug: st_peer_update_callback: 	Broadcasting our uname because of node 3232261519
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14874-34)
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14874-34) state:2
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0baf70
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14874-34-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14874-34-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14874-34-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14874]
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0baf70
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14874-34-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14874-34-header
Oct 15 15:15:51 [14874] vm1       crmd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14874-34-header
Oct 15 15:15:51 [14874] vm1       crmd:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Oct 15 15:15:51 [14874] vm1       crmd:   notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Oct 15 15:15:51 [14874] vm1       crmd:     info: do_started: 	Delaying start, Config not read (0000000000000040)
Oct 15 15:15:51 [14874] vm1       crmd:    debug: register_fsa_input_adv: 	Stalling the FSA pending further input: source=do_started cause=C_FSA_INTERNAL data=(nil) queue=0
Oct 15 15:15:51 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Exiting the FSA: queue=0, fsa_actions=0x200000002, stalled=true
Oct 15 15:15:51 [14874] vm1       crmd:    debug: config_query_callback: 	Call 4 : Parsing CIB options
Oct 15 15:15:51 [14874] vm1       crmd:    debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Oct 15 15:15:51 [14874] vm1       crmd:    debug: config_query_callback: 	Checking for expired actions every 900000ms
Oct 15 15:15:51 [14874] vm1       crmd:    debug: do_started: 	Init server comms
Oct 15 15:15:51 [14874] vm1       crmd:     info: qb_ipcs_us_publish: 	server name: crmd
Oct 15 15:15:51 [14874] vm1       crmd:   notice: do_started: 	The local CRM is operational
Oct 15 15:15:51 [14874] vm1       crmd:    debug: do_election_check: 	Ignore election check: we not in an election
Oct 15 15:15:51 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_PENDING: [ state=S_STARTING cause=C_FSA_INTERNAL origin=do_started ]
Oct 15 15:15:51 [14874] vm1       crmd:     info: do_log: 	FSA: Input I_PENDING from do_started() received in state S_STARTING
Oct 15 15:15:51 [14874] vm1       crmd:   notice: do_state_transition: 	State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Oct 15 15:15:51 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_slave operation for section 'all': OK (rc=0, origin=local/crmd/5, version=0.0.0)
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14874-34)
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14874-34) state:2
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0baf70
Oct 15 15:15:51 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14874-34-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14874-34-header
Oct 15 15:15:51 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14874-34-header
Oct 15 15:15:52 [14874] vm1       crmd:    debug: do_cl_join_query: 	Querying for a DC
Oct 15 15:15:52 [14874] vm1       crmd:    debug: crm_timer_start: 	Started Election Trigger (I_DC_TIMEOUT:20000ms), src=16
Oct 15 15:15:52 [14874] vm1       crmd:     info: pcmk_cpg_membership: 	Joined[0.0] crmd.3232261517 
Oct 15 15:15:52 [14874] vm1       crmd:     info: pcmk_cpg_membership: 	Member[0.0] crmd.3232261517 
Oct 15 15:15:52 [14874] vm1       crmd:     info: pcmk_cpg_membership: 	Member[0.1] crmd.3232261518 
Oct 15 15:15:52 [14874] vm1       crmd:     info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261518] - corosync-cpg is now online
Oct 15 15:15:52 [14874] vm1       crmd:     info: crm_get_peer: 	Node 3232261518 is now known as vm2
Oct 15 15:15:52 [14874] vm1       crmd:     info: peer_update_callback: 	vm2 is now member
Oct 15 15:15:52 [14874] vm1       crmd:     info: pcmk_cpg_membership: 	Joined[1.0] crmd.3232261519 
Oct 15 15:15:52 [14874] vm1       crmd:     info: pcmk_cpg_membership: 	Member[1.0] crmd.3232261517 
Oct 15 15:15:52 [14874] vm1       crmd:     info: pcmk_cpg_membership: 	Member[1.1] crmd.3232261518 
Oct 15 15:15:52 [14874] vm1       crmd:     info: pcmk_cpg_membership: 	Member[1.2] crmd.3232261519 
Oct 15 15:15:52 [14874] vm1       crmd:     info: crm_update_peer_proc: 	pcmk_cpg_membership: Node (null)[3232261519] - corosync-cpg is now online
Oct 15 15:15:52 [14874] vm1       crmd:     info: crm_get_peer: 	Node 3232261519 is now known as vm3
Oct 15 15:15:52 [14874] vm1       crmd:     info: peer_update_callback: 	vm3 is now member
Oct 15 15:15:52 [14874] vm1       crmd:    debug: te_connect_stonith: 	Attempting connection to fencing daemon...
Oct 15 15:15:53 [14870] vm1 stonith-ng:     info: crm_client_new: 	Connecting 0xd37980 for uid=496 gid=492 pid=14874 id=4eb0ff33-6154-4b90-9801-dc2d005b765a
Oct 15 15:15:53 [14870] vm1 stonith-ng:    debug: handle_new_connection: 	IPC credentials authenticated (14870-14874-9)
Oct 15 15:15:53 [14870] vm1 stonith-ng:    debug: qb_ipcs_shm_connect: 	connecting to client [14874]
Oct 15 15:15:53 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:15:53 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:15:53 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:15:53 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:15:53 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:15:53 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:15:53 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing register 9 from crmd.14874 (               0)
Oct 15 15:15:53 [14874] vm1       crmd:    debug: stonith_api_signon: 	Connection to STONITH successful
Oct 15 15:15:53 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed register from crmd.14874: OK (0)
Oct 15 15:15:53 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_notify 10 from crmd.14874 (               0)
Oct 15 15:15:53 [14870] vm1 stonith-ng:    debug: handle_request: 	Setting st_notify_disconnect callbacks for crmd.14874 (4eb0ff33-6154-4b90-9801-dc2d005b765a): ON
Oct 15 15:15:53 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_notify from crmd.14874: OK (0)
Oct 15 15:15:53 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_notify 11 from crmd.14874 (               0)
Oct 15 15:15:53 [14870] vm1 stonith-ng:    debug: handle_request: 	Setting st_notify_fence callbacks for crmd.14874 (4eb0ff33-6154-4b90-9801-dc2d005b765a): ON
Oct 15 15:15:53 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_notify from crmd.14874: OK (0)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_count_vote: 	Created voted hash
Oct 15 15:16:12 [14874] vm1       crmd:    debug: crm_uptime: 	Current CPU usage is: 0s, 23996us
Oct 15 15:16:12 [14874] vm1       crmd:    debug: crm_compare_age: 	Win: 0.23996 vs 0.10998 (usec)
Oct 15 15:16:12 [14874] vm1       crmd:     info: election_count_vote: 	Election 1 (owner: 3232261518) pass: vote from vm2 (Uptime)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_election_check: 	Ignore election check: we not in an election
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_ELECTION: [ state=S_PENDING cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_PENDING -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_vote: 	Started election 1
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_count_vote: 	Created voted hash
Oct 15 15:16:12 [14874] vm1       crmd:    debug: crm_compare_age: 	Win: 0.23996 vs 0.18997 (usec)
Oct 15 15:16:12 [14874] vm1       crmd:     info: election_count_vote: 	Election 1 (owner: 3232261519) pass: vote from vm3 (Uptime)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_check: 	Still waiting on 3 non-votes (3 total)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_ELECTION: [ state=S_ELECTION cause=C_FSA_INTERNAL origin=do_election_count_vote ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_vote: 	Started election 2
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_count_vote: 	Created voted hash
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_check: 	Still waiting on 3 non-votes (3 total)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_check: 	Still waiting on 3 non-votes (3 total)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_check: 	Still waiting on 3 non-votes (3 total)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_count_vote: 	Election 2 (current: 2, owner: 3232261517): Processed vote from vm1 (Recorded)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_check: 	Still waiting on 2 non-votes (3 total)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_count_vote: 	Election 2 (current: 2, owner: 3232261517): Processed no-vote from vm2 (Recorded)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_check: 	Still waiting on 1 non-votes (3 total)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_count_vote: 	Election 2 (current: 2, owner: 3232261517): Processed no-vote from vm3 (Recorded)
Oct 15 15:16:12 [14874] vm1       crmd:     info: election_timer_cb: 	Election election-0 complete
Oct 15 15:16:12 [14874] vm1       crmd:     info: election_timeout_popped: 	Election failed: Declaring ourselves the winner
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_TIMER_POPPED origin=election_timeout_popped ]
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_log: 	FSA: Input I_ELECTION_DC from election_timeout_popped() received in state S_ELECTION
Oct 15 15:16:12 [14874] vm1       crmd:   notice: do_state_transition: 	State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=election_timeout_popped ]
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_te_control: 	Registering TE UUID: cffe5b98-3c92-4ed3-8992-426ef00df4ed
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_common_callback_worker: 	Setting cib_diff_notify callbacks for crmd (5b415466-b331-49a1-b495-ed97c3cb8b21): on
Oct 15 15:16:12 [14874] vm1       crmd:     info: set_graph_functions: 	Setting custom graph functions
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_te_control: 	Transitioner is now active
Oct 15 15:16:12 [14874] vm1       crmd:    debug: unpack_graph: 	Unpacked transition -1: 0 actions in 0 synapses
Oct 15 15:16:12 [14873] vm1    pengine:     info: crm_client_new: 	Connecting 0x196a700 for uid=496 gid=492 pid=14874 id=8686c625-81d6-4dec-a38e-7e834e95c904
Oct 15 15:16:12 [14873] vm1    pengine:    debug: handle_new_connection: 	IPC credentials authenticated (14873-14874-6)
Oct 15 15:16:12 [14873] vm1    pengine:    debug: qb_ipcs_shm_connect: 	connecting to client [14874]
Oct 15 15:16:12 [14873] vm1    pengine:    debug: qb_rb_open_2: 	shm size:5242893; real_size:5246976; rb->word_size:1311744
Oct 15 15:16:12 [14873] vm1    pengine:    debug: qb_rb_open_2: 	shm size:5242893; real_size:5246976; rb->word_size:1311744
Oct 15 15:16:12 [14873] vm1    pengine:    debug: qb_rb_open_2: 	shm size:5242893; real_size:5246976; rb->word_size:1311744
Oct 15 15:16:12 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:5242893; real_size:5246976; rb->word_size:1311744
Oct 15 15:16:12 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:5242893; real_size:5246976; rb->word_size:1311744
Oct 15 15:16:12 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:5242893; real_size:5246976; rb->word_size:1311744
Oct 15 15:16:12 [14874] vm1       crmd:    debug: crm_timer_start: 	Started Integration Timer (I_INTEGRATED:180000ms), src=21
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_dc_takeover: 	Taking over DC status for this partition
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_readwrite: 	We are now in R/W mode
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/6, version=0.0.0)
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/7, version=0.0.1)
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.0.0
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.0.1 4ebc77531279ad4ef9b647069d43ab1e
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="0"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++ <cib epoch="0" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7"/>
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14869]
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:16:12 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:16:12 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:16:12 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:16:12 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:16:12 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0baf70
Oct 15 15:16:12 [14869] vm1        cib:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:16:12 [14869] vm1        cib:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14869-34-header
Oct 15 15:16:12 [14869] vm1        cib:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14869-34-header
Oct 15 15:16:12 [14869] vm1        cib:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14869-34-header
Oct 15 15:16:12 [14869] vm1        cib:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Oct 15 15:16:12 [14869] vm1        cib:   notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_xpath: 	cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] does not exist
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: No such device or address (rc=-6, origin=local/crmd/8, version=0.0.1)
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14869-34)
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14869-34) state:2
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:16:12 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:16:12 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0baf70
Oct 15 15:16:12 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14869-34-header
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14869-34-header
Oct 15 15:16:12 [14869] vm1        cib:    debug: activateCibXml: 	Triggering CIB write for cib_modify op
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14869-34-header
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: log_cib_diff: 	Config update: Local-only Change: 0.1.1
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib admin_epoch="0" epoch="0" num_updates="1"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="1" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    <configuration>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      <crm_config>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       <cluster_property_set id="cib-bootstrap-options">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-0.284.6a5e863.git.el6-6a5e863"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       </cluster_property_set>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      </crm_config>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    </configuration>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:12 [14869] vm1        cib:   notice: log_cib_diff: 	cib:diff: Local-only Change: 0.1.1
Oct 15 15:16:12 [14869] vm1        cib:   notice: cib:diff: 	-- <cib admin_epoch="0" epoch="0" num_updates="1"/>
Oct 15 15:16:12 [14869] vm1        cib:   notice: cib:diff: 	++       <cluster_property_set id="cib-bootstrap-options">
Oct 15 15:16:12 [14869] vm1        cib:   notice: cib:diff: 	++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-0.284.6a5e863.git.el6-6a5e863"/>
Oct 15 15:16:12 [14869] vm1        cib:   notice: cib:diff: 	++       </cluster_property_set>
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/9, version=0.1.1)
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_xpath: 	cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] does not exist
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: No such device or address (rc=-6, origin=local/crmd/10, version=0.1.1)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: initialize_join: 	join-1: Initializing join data (flag=true)
Oct 15 15:16:12 [14874] vm1       crmd:     info: join_make_offer: 	Making join offers based on membership 12
Oct 15 15:16:12 [14874] vm1       crmd:     info: join_make_offer: 	join-1: Sending offer to vm3
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	join_make_offer: Node vm3[3232261519] - join-1 phase 0 -> 1
Oct 15 15:16:12 [14874] vm1       crmd:     info: join_make_offer: 	join-1: Sending offer to vm1
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	join_make_offer: Node vm1[3232261517] - join-1 phase 0 -> 1
Oct 15 15:16:12 [14874] vm1       crmd:     info: join_make_offer: 	join-1: Sending offer to vm2
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	join_make_offer: Node vm2[3232261518] - join-1 phase 0 -> 1
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_dc_join_offer_all: 	join-1: Waiting on 3 outstanding join acks
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_ELECTION_DC: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=do_election_check ]
Oct 15 15:16:12 [14874] vm1       crmd:  warning: do_log: 	FSA: Input I_ELECTION_DC from do_election_check() received in state S_INTEGRATION
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_vote: 	Started election 3
Oct 15 15:16:12 [14874] vm1       crmd:    debug: initialize_join: 	join-2: Initializing join data (flag=true)
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	initialize_join: Node vm3[3232261519] - join-2 phase 1 -> 0
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	initialize_join: Node vm1[3232261517] - join-2 phase 1 -> 0
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	initialize_join: Node vm2[3232261518] - join-2 phase 1 -> 0
Oct 15 15:16:12 [14874] vm1       crmd:     info: join_make_offer: 	join-2: Sending offer to vm3
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	join_make_offer: Node vm3[3232261519] - join-2 phase 0 -> 1
Oct 15 15:16:12 [14874] vm1       crmd:     info: join_make_offer: 	join-2: Sending offer to vm1
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	join_make_offer: Node vm1[3232261517] - join-2 phase 0 -> 1
Oct 15 15:16:12 [14874] vm1       crmd:     info: join_make_offer: 	join-2: Sending offer to vm2
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	join_make_offer: Node vm2[3232261518] - join-2 phase 0 -> 1
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_dc_join_offer_all: 	join-2: Waiting on 3 outstanding join acks
Oct 15 15:16:12 [14869] vm1        cib:    debug: activateCibXml: 	Triggering CIB write for cib_modify op
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: log_cib_diff: 	Config update: Local-only Change: 0.2.1
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib admin_epoch="0" epoch="1" num_updates="1"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="2" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    <configuration>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      <crm_config>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+        </cluster_property_set>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      </crm_config>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    </configuration>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:12 [14869] vm1        cib:    debug: get_last_sequence: 	Series file /var/lib/pacemaker/cib/cib.last does not exist
Oct 15 15:16:12 [14869] vm1        cib:   notice: log_cib_diff: 	cib:diff: Local-only Change: 0.2.1
Oct 15 15:16:12 [14869] vm1        cib:   notice: cib:diff: 	-- <cib admin_epoch="0" epoch="1" num_updates="1"/>
Oct 15 15:16:12 [14869] vm1        cib:   notice: cib:diff: 	++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/11, version=0.2.1)
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/12, version=0.2.1)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: config_query_callback: 	Call 12 : Parsing CIB options
Oct 15 15:16:12 [14874] vm1       crmd:    debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Oct 15 15:16:12 [14874] vm1       crmd:    debug: config_query_callback: 	Checking for expired actions every 900000ms
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/13, version=0.2.1)
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/14, version=0.2.1)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: config_query_callback: 	Call 13 : Parsing CIB options
Oct 15 15:16:12 [14874] vm1       crmd:    debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Oct 15 15:16:12 [14874] vm1       crmd:    debug: config_query_callback: 	Checking for expired actions every 900000ms
Oct 15 15:16:12 [14874] vm1       crmd:    debug: config_query_callback: 	Call 14 : Parsing CIB options
Oct 15 15:16:12 [14874] vm1       crmd:    debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Oct 15 15:16:12 [14874] vm1       crmd:    debug: config_query_callback: 	Checking for expired actions every 900000ms
Oct 15 15:16:12 [14869] vm1        cib:     info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-0.raw
Oct 15 15:16:12 [14869] vm1        cib:    debug: write_cib_contents: 	Writing CIB to disk
Oct 15 15:16:12 [14874] vm1       crmd:    debug: handle_request: 	Raising I_JOIN_OFFER: join-1
Oct 15 15:16:12 [14874] vm1       crmd:    debug: handle_request: 	Raising I_JOIN_OFFER: join-2
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:12 [14874] vm1       crmd:     info: update_dc: 	Set DC to vm1 (3.0.7)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_count_vote: 	Created voted hash
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_count_vote: 	Election 3 (current: 3, owner: 3232261517): Processed vote from vm1 (Recorded)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_election_check: 	Ignore election check: we not in an election
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/15, version=0.2.1)
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/16, version=0.2.1)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: join_query_callback: 	Respond to join offer join-2
Oct 15 15:16:12 [14874] vm1       crmd:    debug: join_query_callback: 	Acknowledging vm1 as our DC
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	Processing req from vm3
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	Invalid response from vm3: join-1 vs. join-2
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	Processing req from vm1
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	join-2: Welcoming node vm1 (ref join_request-crmd-1381817772-11)
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	do_dc_join_filter_offer: Node vm1[3232261517] - join-2 phase 1 -> 2
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_expected: 	do_dc_join_filter_offer: Node vm1[3232261517] - expected state is now member
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	1 nodes have been integrated into join-2
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	join-2: Still waiting on 2 outstanding offers
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_count_vote: 	Election 3 (current: 3, owner: 3232261517): Processed no-vote from vm2 (Recorded)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_election_check: 	Ignore election check: we are not in an election
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	Processing req from vm2
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	Invalid response from vm2: join-1 vs. join-2
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	Processing req from vm2
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	join-2: Welcoming node vm2 (ref join_request-crmd-1381817772-8)
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	do_dc_join_filter_offer: Node vm2[3232261518] - join-2 phase 1 -> 2
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_expected: 	do_dc_join_filter_offer: Node vm2[3232261518] - expected state is now member
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	2 nodes have been integrated into join-2
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	join-2: Still waiting on 1 outstanding offer
Oct 15 15:16:12 [14874] vm1       crmd:    debug: election_count_vote: 	Election 3 (current: 3, owner: 3232261517): Processed no-vote from vm3 (Recorded)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_election_check: 	Ignore election check: we are not in an election
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	Processing req from vm3
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	join-2: Welcoming node vm3 (ref join_request-crmd-1381817772-7)
Oct 15 15:16:12 [14869] vm1        cib:     info: write_cib_contents: 	Wrote version 0.1.0 of the CIB to disk (digest: 51521f2153f57cb386e10b5d3317b80b)
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	do_dc_join_filter_offer: Node vm3[3232261519] - join-2 phase 1 -> 2
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_expected: 	do_dc_join_filter_offer: Node vm3[3232261519] - expected state is now member
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	3 nodes have been integrated into join-2
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	join-2: Integration of 3 peers complete: do_dc_join_filter_offer
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_state_transition: 	All 3 cluster nodes responded to the join offer.
Oct 15 15:16:12 [14874] vm1       crmd:    debug: crm_timer_start: 	Started Finalization Timer (I_ELECTION:1800000ms), src=29
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_finalize: 	Finalizing join-2 for 3 clients
Oct 15 15:16:12 [14874] vm1       crmd:     info: crmd_join_phase_log: 	join-2: vm3=integrated
Oct 15 15:16:12 [14874] vm1       crmd:     info: crmd_join_phase_log: 	join-2: vm1=integrated
Oct 15 15:16:12 [14874] vm1       crmd:     info: crmd_join_phase_log: 	join-2: vm2=integrated
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_dc_join_finalize: 	join-2: Syncing our CIB to the rest of the cluster
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_finalize: 	Requested version   <generation_tuple epoch="2" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd"/>
Oct 15 15:16:12 [14869] vm1        cib:    debug: sync_our_cib: 	Syncing CIB to all peers
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/17, version=0.2.1)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	join-2: Still waiting on 3 integrated nodes
Oct 15 15:16:12 [14874] vm1       crmd:    debug: crmd_join_phase_log: 	join-2: vm3=integrated
Oct 15 15:16:12 [14874] vm1       crmd:    debug: crmd_join_phase_log: 	join-2: vm1=integrated
Oct 15 15:16:12 [14874] vm1       crmd:    debug: crmd_join_phase_log: 	join-2: vm2=integrated
Oct 15 15:16:12 [14874] vm1       crmd:    debug: finalize_sync_callback: 	Notifying 3 clients of join-2 results
Oct 15 15:16:12 [14874] vm1       crmd:    debug: finalize_join_for: 	join-2: ACK'ing join request from vm3
Oct 15 15:16:12 [14869] vm1        cib:    debug: write_cib_contents: 	Wrote digest 51521f2153f57cb386e10b5d3317b80b to disk
Oct 15 15:16:12 [14869] vm1        cib:     info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.1mdLng (digest: /var/lib/pacemaker/cib/cib.RPb3up)
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	finalize_join_for: Node vm3[3232261519] - join-2 phase 2 -> 3
Oct 15 15:16:12 [14869] vm1        cib:    debug: activateCibXml: 	Triggering CIB write for cib_modify op
Oct 15 15:16:12 [14874] vm1       crmd:    debug: finalize_join_for: 	join-2: ACK'ing join request from vm1
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	finalize_join_for: Node vm1[3232261517] - join-2 phase 2 -> 3
Oct 15 15:16:12 [14874] vm1       crmd:    debug: finalize_join_for: 	join-2: ACK'ing join request from vm2
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	finalize_join_for: Node vm2[3232261518] - join-2 phase 2 -> 3
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: log_cib_diff: 	Config update: Local-only Change: 0.3.1
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib admin_epoch="0" epoch="2" num_updates="1"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="3" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    <configuration>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      <nodes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       <node id="3232261519" uname="vm3"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      </nodes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    </configuration>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:12 [14874] vm1       crmd:    debug: handle_request: 	Raising I_JOIN_RESULT: join-2
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_cl_join_finalize_respond: 	Confirming join join-2: join_ack_nack
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_cl_join_finalize_respond: 	join-2: Join complete.  Sending local LRM status to vm1
Oct 15 15:16:12 [14874] vm1       crmd:     info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm1']/transient_attributes
Oct 15 15:16:12 [14874] vm1       crmd:     info: update_attrd_helper: 	Connecting to attrd... 5 retries remaining
Oct 15 15:16:12 [14869] vm1        cib:    debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.1mdLng
Oct 15 15:16:12 [14869] vm1        cib:   notice: log_cib_diff: 	cib:diff: Local-only Change: 0.3.1
Oct 15 15:16:12 [14869] vm1        cib:   notice: cib:diff: 	-- <cib admin_epoch="0" epoch="2" num_updates="1"/>
Oct 15 15:16:12 [14869] vm1        cib:   notice: cib:diff: 	++       <node id="3232261519" uname="vm3"/>
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/18, version=0.3.1)
Oct 15 15:16:12 [14869] vm1        cib:    debug: activateCibXml: 	Triggering CIB write for cib_modify op
Oct 15 15:16:12 [14872] vm1      attrd:     info: crm_client_new: 	Connecting 0x1763460 for uid=496 gid=492 pid=14874 id=1810848f-23b4-48a4-8b29-8fcd652ba5be
Oct 15 15:16:12 [14872] vm1      attrd:    debug: handle_new_connection: 	IPC credentials authenticated (14872-14874-9)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: qb_ipcs_shm_connect: 	connecting to client [14874]
Oct 15 15:16:12 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: log_cib_diff: 	Config update: Local-only Change: 0.4.1
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib admin_epoch="0" epoch="3" num_updates="1"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="4" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    <configuration>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      <nodes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       <node id="3232261517" uname="vm1"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      </nodes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    </configuration>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:12 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:16:12 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:16:12 [14869] vm1        cib:   notice: log_cib_diff: 	cib:diff: Local-only Change: 0.4.1
Oct 15 15:16:12 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:16:12 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:16:12 [14874] vm1       crmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:16:12 [14874] vm1       crmd:    debug: attrd_update_delegate: 	Sent update: terminate=(null) for vm1
Oct 15 15:16:12 [14874] vm1       crmd:    debug: attrd_update_delegate: 	Sent update: shutdown=(null) for vm1
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_ack: 	Ignoring op=join_ack_nack message from vm1
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:12 [14872] vm1      attrd:     info: attrd_client_message: 	Starting an election to determine the writer
Oct 15 15:16:12 [14872] vm1      attrd:    debug: crm_uptime: 	Current CPU usage is: 0s, 19996us
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	do_dc_join_ack: Node vm2[3232261518] - join-2 phase 3 -> 4
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_dc_join_ack: 	join-2: Updating node state to member for vm2
Oct 15 15:16:12 [14874] vm1       crmd:     info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm2']/lrm
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_ack: 	join-2: Registered callback for LRM update 23
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	do_dc_join_ack: Node vm3[3232261519] - join-2 phase 3 -> 4
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_dc_join_ack: 	join-2: Updating node state to member for vm3
Oct 15 15:16:12 [14874] vm1       crmd:     info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm3']/lrm
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_ack: 	join-2: Registered callback for LRM update 25
Oct 15 15:16:12 [14869] vm1        cib:   notice: cib:diff: 	-- <cib admin_epoch="0" epoch="3" num_updates="1"/>
Oct 15 15:16:12 [14869] vm1        cib:   notice: cib:diff: 	++       <node id="3232261517" uname="vm1"/>
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/19, version=0.4.1)
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [14872]
Oct 15 15:16:12 [14869] vm1        cib:    debug: activateCibXml: 	Triggering CIB write for cib_modify op
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: log_cib_diff: 	Config update: Local-only Change: 0.5.1
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib admin_epoch="0" epoch="4" num_updates="1"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="5" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    <configuration>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      <nodes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       <node id="3232261518" uname="vm2"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      </nodes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    </configuration>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:16:12 [14869] vm1        cib:   notice: log_cib_diff: 	cib:diff: Local-only Change: 0.5.1
Oct 15 15:16:12 [14869] vm1        cib:   notice: cib:diff: 	-- <cib admin_epoch="0" epoch="4" num_updates="1"/>
Oct 15 15:16:12 [14869] vm1        cib:   notice: cib:diff: 	++       <node id="3232261518" uname="vm2"/>
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/20, version=0.5.1)
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_xpath: 	//node_state[@uname='vm1']/transient_attributes was already removed
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm1']/transient_attributes: OK (rc=0, origin=local/crmd/21, version=0.5.1)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm1']/transient_attributes": OK (rc=0)
Oct 15 15:16:12 [14869] vm1        cib:     info: crm_get_peer: 	Node 3232261518 is now known as vm2
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_xpath: 	//node_state[@uname='vm2']/transient_attributes was already removed
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm2']/transient_attributes: OK (rc=0, origin=vm2/crmd/9, version=0.5.1)
Oct 15 15:16:12 [14869] vm1        cib:     info: crm_get_peer: 	Node 3232261519 is now known as vm3
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_xpath: 	//node_state[@uname='vm3']/transient_attributes was already removed
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm3']/transient_attributes: OK (rc=0, origin=vm3/crmd/9, version=0.5.1)
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_xpath: 	//node_state[@uname='vm2']/lrm was already removed
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm2']/lrm: OK (rc=0, origin=local/crmd/22, version=0.5.1)
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.5.1
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.5.2 498664d52f84c5bf278390101967da4f
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="1"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="5" num_updates="2" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++     <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       <lrm id="3232261518">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++         <lrm_resources/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       </lrm>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++     </node_state>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:12 [14874] vm1       crmd:    debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm2']/lrm": OK (rc=0)
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/23, version=0.5.2)
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_xpath: 	//node_state[@uname='vm3']/lrm was already removed
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm3']/lrm: OK (rc=0, origin=local/crmd/24, version=0.5.2)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm3']/lrm": OK (rc=0)
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.5.2
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.5.3 bc5e3e4e479443ab2cea3a01b4acc331
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="2"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="5" num_updates="3" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++     <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       <lrm id="3232261519">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++         <lrm_resources/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       </lrm>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++     </node_state>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/25, version=0.5.3)
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:16:12 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:16:12 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:16:12 [14872] vm1      attrd:    debug: qb_rb_open_2: 	shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:16:12 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:16:12 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0baf70
Oct 15 15:16:12 [14872] vm1      attrd:    debug: qb_ipcc_disconnect: 	qb_ipcc_disconnect()
Oct 15 15:16:12 [14872] vm1      attrd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-request-14855-14872-34-header
Oct 15 15:16:12 [14872] vm1      attrd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-response-14855-14872-34-header
Oct 15 15:16:12 [14872] vm1      attrd:    debug: qb_rb_close: 	Closing ringbuffer: /dev/shm/qb-cmap-event-14855-14872-34-header
Oct 15 15:16:12 [14872] vm1      attrd:   notice: corosync_node_name: 	Unable to get node name for nodeid 3232261517
Oct 15 15:16:12 [14872] vm1      attrd:   notice: get_node_name: 	Defaulting to uname -n for the local corosync node name
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_vote: 	Started election 1
Oct 15 15:16:12 [14872] vm1      attrd:     info: attrd_client_message: 	Broadcasting terminate[vm1] = (null)
Oct 15 15:16:12 [14872] vm1      attrd:     info: attrd_client_message: 	Broadcasting shutdown[vm1] = (null)
Oct 15 15:16:12 [14872] vm1      attrd:     info: crm_get_peer: 	Node 3232261519 is now known as vm3
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_count_vote: 	Created voted hash
Oct 15 15:16:12 [14872] vm1      attrd:    debug: crm_compare_age: 	Win: 0.19996 vs 0.10998 (usec)
Oct 15 15:16:12 [14872] vm1      attrd:     info: election_count_vote: 	Election 1 (owner: 3232261519) pass: vote from vm3 (Uptime)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_vote: 	Started election 2
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:12 [14874] vm1       crmd:     info: crm_update_peer_join: 	do_dc_join_ack: Node vm1[3232261517] - join-2 phase 3 -> 4
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_dc_join_ack: 	join-2: Updating node state to member for vm1
Oct 15 15:16:12 [14874] vm1       crmd:     info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm1']/lrm
Oct 15 15:16:12 [14869] vm1        cib:     info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-1.raw
Oct 15 15:16:12 [14869] vm1        cib:    debug: write_cib_contents: 	Writing CIB to disk
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_ack: 	join-2: Registered callback for LRM update 27
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-14872-34)
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-14872-34) state:2
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:16:12 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:16:12 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0baf70
Oct 15 15:16:12 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-14872-34-header
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_xpath: 	//node_state[@uname='vm1']/lrm was already removed
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-14872-34-header
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/26, version=0.5.3)
Oct 15 15:16:12 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-14872-34-header
Oct 15 15:16:12 [14874] vm1       crmd:    debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm1']/lrm": OK (rc=0)
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.5.3
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.5.4 773745806663d7fb19f47a486f18f723
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="3"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="5" num_updates="4" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++     <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       <lrm id="3232261517">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++         <lrm_resources/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       </lrm>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++     </node_state>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:12 [14872] vm1      attrd:     info: crm_get_peer: 	Node 3232261518 is now known as vm2
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_count_vote: 	Created voted hash
Oct 15 15:16:12 [14872] vm1      attrd:    debug: crm_compare_age: 	Win: 0.19996 vs 0.10998 (usec)
Oct 15 15:16:12 [14872] vm1      attrd:     info: election_count_vote: 	Election 1 (owner: 3232261518) pass: vote from vm2 (Uptime)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_vote: 	Started election 3
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_count_vote: 	Created voted hash
Oct 15 15:16:12 [14872] vm1      attrd:    debug: crm_compare_age: 	Win: 0.19996 vs 0.10998 (usec)
Oct 15 15:16:12 [14872] vm1      attrd:     info: election_count_vote: 	Election 2 (owner: 3232261518) pass: vote from vm2 (Uptime)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_vote: 	Started election 4
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_count_vote: 	Created voted hash
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_check: 	Still waiting on 3 non-votes (3 total)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_check: 	Still waiting on 3 non-votes (3 total)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_check: 	Still waiting on 3 non-votes (3 total)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_check: 	Still waiting on 3 non-votes (3 total)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_check: 	Still waiting on 3 non-votes (3 total)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_check: 	Still waiting on 3 non-votes (3 total)
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/27, version=0.5.4)
Oct 15 15:16:12 [14869] vm1        cib:     info: write_cib_contents: 	Wrote version 0.5.0 of the CIB to disk (digest: 034de6ab489c499a76b66b8122cf302a)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_check: 	Still waiting on 3 non-votes (3 total)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_count_vote: 	Election 4 (current: 4, owner: 3232261517): Processed vote from vm1 (Recorded)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_check: 	Still waiting on 2 non-votes (3 total)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_check: 	Still waiting on 2 non-votes (3 total)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: join_update_complete_callback: 	Join update 23 complete
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_check: 	Still waiting on 2 non-votes (3 total)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	join-2 complete: join_update_complete_callback
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_count_vote: 	Election 4 (current: 4, owner: 3232261517): Processed no-vote from vm3 (Recorded)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_check: 	Still waiting on 1 non-vote (3 total)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: join_update_complete_callback: 	Join update 25 complete
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	join-2 complete: join_update_complete_callback
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_dc_join_final: 	Ensuring DC, quorum and node attributes are up-to-date
Oct 15 15:16:12 [14874] vm1       crmd:    debug: attrd_update_delegate: 	Sent update: (null)=(null) for localhost
Oct 15 15:16:12 [14874] vm1       crmd:    debug: crm_update_quorum: 	Updating quorum status to true (call=30)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_te_invoke: 	Cancelling the transition: inactive
Oct 15 15:16:12 [14874] vm1       crmd:     info: abort_transition_graph: 	do_te_invoke:151 - Triggered transition abort (complete=1) : Peer Cancelled
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_FINALIZED: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_pe_invoke: 	Query 31: Requesting the current CIB: S_POLICY_ENGINE
Oct 15 15:16:12 [14869] vm1        cib:    debug: write_cib_contents: 	Wrote digest 034de6ab489c499a76b66b8122cf302a to disk
Oct 15 15:16:12 [14869] vm1        cib:     info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.A52rgl (digest: /var/lib/pacemaker/cib/cib.I86ryu)
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/28, version=0.5.4)
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/29, version=0.5.4)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: election_count_vote: 	Election 4 (current: 4, owner: 3232261517): Processed no-vote from vm2 (Recorded)
Oct 15 15:16:12 [14872] vm1      attrd:     info: election_timer_cb: 	Election election-attrd complete
Oct 15 15:16:12 [14872] vm1      attrd:    debug: attrd_peer_sync: 	Syncing shutdown[vm1] = (null) to everyone
Oct 15 15:16:12 [14872] vm1      attrd:    debug: attrd_peer_sync: 	Syncing shutdown[vm2] = (null) to everyone
Oct 15 15:16:12 [14872] vm1      attrd:    debug: attrd_peer_sync: 	Syncing shutdown[vm3] = (null) to everyone
Oct 15 15:16:12 [14872] vm1      attrd:    debug: attrd_peer_sync: 	Syncing terminate[vm1] = (null) to everyone
Oct 15 15:16:12 [14872] vm1      attrd:    debug: attrd_peer_sync: 	Syncing terminate[vm2] = (null) to everyone
Oct 15 15:16:12 [14872] vm1      attrd:    debug: attrd_peer_sync: 	Syncing terminate[vm3] = (null) to everyone
Oct 15 15:16:12 [14872] vm1      attrd:    debug: attrd_peer_sync: 	Syncing values to everyone
Oct 15 15:16:12 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm1[shutdown]=(null) (3232261517 3232261517 3232261517 vm1)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm2[shutdown]=(null) (3232261518 3232261518 3232261518 vm2)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm3[shutdown]=(null) (3232261519 3232261519 3232261519 vm3)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.5.4 -> 0.5.5 (S_POLICY_ENGINE)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: write_attribute: 	Sent update 2 with 3 changes for shutdown, id=<n/a>, set=(null)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm1[terminate]=(null) (3232261517 3232261517 3232261517 vm1)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm2[terminate]=(null) (3232261518 3232261518 3232261518 vm2)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm3[terminate]=(null) (3232261519 3232261519 3232261519 vm3)
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.5.4
Oct 15 15:16:12 [14869] vm1        cib:    debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.A52rgl
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.5.5 218c22dd38a8cf0a6e5a836dfab5f96d
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="4"/>
Oct 15 15:16:12 [14872] vm1      attrd:   notice: write_attribute: 	Sent update 3 with 3 changes for terminate, id=<n/a>, set=(null)
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++ <cib epoch="5" num_updates="5" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/30, version=0.5.5)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: join_update_complete_callback: 	Join update 27 complete
Oct 15 15:16:12 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/31, version=0.5.5)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_pe_invoke_callback: 	Invoking the PE: query=31, ref=pe_calc-dc-1381817772-16, seq=12, quorate=1
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_modify: 	Destroying /cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_modify: 	Destroying /cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_modify: 	Destroying /cib/status/node_state[3]/transient_attributes/instance_attributes/nvpair
Oct 15 15:16:12 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.5.5 -> 0.5.6 (S_POLICY_ENGINE)
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.5.5
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.5.6 621aadfcadc833ab8c7b2202d1ddb73c
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="5"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="5" num_updates="6" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       <transient_attributes id="3232261518">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++         <instance_attributes id="status-3232261518"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       </transient_attributes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:12 [14873] vm1    pengine:    debug: unpack_config: 	STONITH timeout: 60000
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Oct 15 15:16:12 [14873] vm1    pengine:    debug: unpack_config: 	STONITH of failed nodes is enabled
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       <transient_attributes id="3232261519">
Oct 15 15:16:12 [14873] vm1    pengine:    debug: unpack_config: 	Stop all active resources: false
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++         <instance_attributes id="status-3232261519"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       </transient_attributes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:12 [14873] vm1    pengine:    debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Oct 15 15:16:12 [14873] vm1    pengine:    debug: unpack_config: 	Default stickiness: 0
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       <transient_attributes id="3232261517">
Oct 15 15:16:12 [14873] vm1    pengine:    debug: unpack_config: 	On loss of CCM Quorum: Stop ALL resources
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++         <instance_attributes id="status-3232261517"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++       </transient_attributes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:12 [14873] vm1    pengine:    debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 15 15:16:12 [14873] vm1    pengine:    debug: unpack_domains: 	Unpacking domains
Oct 15 15:16:12 [14873] vm1    pengine:    error: unpack_resources: 	Resource start-up disabled since no STONITH resources have been defined
Oct 15 15:16:12 [14873] vm1    pengine:    error: unpack_resources: 	Either configure some or disable STONITH with the stonith-enabled option
Oct 15 15:16:12 [14873] vm1    pengine:    error: unpack_resources: 	NOTE: Clusters with shared data need STONITH to ensure data integrity
Oct 15 15:16:12 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm2 is active
Oct 15 15:16:12 [14873] vm1    pengine:     info: determine_online_status: 	Node vm2 is online
Oct 15 15:16:12 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm3 is active
Oct 15 15:16:12 [14873] vm1    pengine:     info: determine_online_status: 	Node vm3 is online
Oct 15 15:16:12 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm1 is active
Oct 15 15:16:12 [14873] vm1    pengine:     info: determine_online_status: 	Node vm1 is online
Oct 15 15:16:12 [14873] vm1    pengine:   notice: stage6: 	Delaying fencing operations until there are resources to manage
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/2, version=0.5.6)
Oct 15 15:16:12 [14873] vm1    pengine:    debug: get_last_sequence: 	Series file /var/lib/pacemaker/pengine/pe-input.last does not exist
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_modify: 	Destroying /cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_modify: 	Destroying /cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair
Oct 15 15:16:12 [14869] vm1        cib:    debug: cib_process_modify: 	Destroying /cib/status/node_state[3]/transient_attributes/instance_attributes/nvpair
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: unpack_graph: 	Unpacked transition 0: 3 actions in 3 synapses
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_te_invoke: 	Processing graph 0 (ref=pe_calc-dc-1381817772-16) derived from /var/lib/pacemaker/pengine/pe-input-0.bz2
Oct 15 15:16:12 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 4: probe_complete probe_complete on vm3 - no waiting
Oct 15 15:16:12 [14874] vm1       crmd:     info: te_rsc_command: 	Action 4 confirmed - no wait
Oct 15 15:16:12 [14873] vm1    pengine:   notice: process_pe_message: 	Calculated Transition 0: /var/lib/pacemaker/pengine/pe-input-0.bz2
Oct 15 15:16:12 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 3: probe_complete probe_complete on vm2 - no waiting
Oct 15 15:16:12 [14873] vm1    pengine:   notice: process_pe_message: 	Configuration ERRORs found during PE processing.  Please run "crm_verify -L" to identify issues.
Oct 15 15:16:12 [14874] vm1       crmd:     info: te_rsc_command: 	Action 3 confirmed - no wait
Oct 15 15:16:12 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 2: probe_complete probe_complete on vm1 (local) - no waiting
Oct 15 15:16:12 [14874] vm1       crmd:    debug: attrd_update_delegate: 	Sent update: probe_complete=true for vm1
Oct 15 15:16:12 [14874] vm1       crmd:     info: te_rsc_command: 	Action 2 confirmed - no wait
Oct 15 15:16:12 [14874] vm1       crmd:    debug: run_graph: 	Transition 0 (Complete=0, Pending=0, Fired=3, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): In-progress
Oct 15 15:16:12 [14872] vm1      attrd:     info: attrd_client_message: 	Broadcasting probe_complete[vm1] = true (writer)
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/3, version=0.5.6)
Oct 15 15:16:12 [14874] vm1       crmd:   notice: run_graph: 	Transition 0 (Complete=3, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-0.bz2): Complete
Oct 15 15:16:12 [14872] vm1      attrd:     info: attrd_cib_callback: 	Update 3 for terminate: OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 3 for terminate[vm1]=(null): OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 3 for terminate[vm2]=(null): OK (0)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: te_graph_trigger: 	Transition 0 is now complete
Oct 15 15:16:12 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 3 for terminate[vm3]=(null): OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:     info: attrd_cib_callback: 	Update 2 for shutdown: OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 2 for shutdown[vm1]=(null): OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 2 for shutdown[vm2]=(null): OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 2 for shutdown[vm3]=(null): OK (0)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Oct 15 15:16:12 [14874] vm1       crmd:    debug: notify_crmd: 	Transition 0 status: done - <null>
Oct 15 15:16:12 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 15 15:16:12 [14874] vm1       crmd:     info: do_log: 	FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Oct 15 15:16:12 [14874] vm1       crmd:   notice: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 15 15:16:12 [14874] vm1       crmd:    debug: do_state_transition: 	Starting PEngine Recheck Timer
Oct 15 15:16:12 [14874] vm1       crmd:    debug: crm_timer_start: 	Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=42
Oct 15 15:16:12 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm1[probe_complete]=true (3232261517 3232261517 3232261517 vm1)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: write_attribute: 	Sent update 4 with 1 changes for probe_complete, id=<n/a>, set=(null)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.5.6 -> 0.5.7 (S_IDLE)
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.5.6
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.5.7 9bed0068b13bd6a785d79d34686d171a
Oct 15 15:16:12 [14872] vm1      attrd:     info: write_attribute: 	Write out of probe_complete delayed: update 4 in progress
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="6"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="5" num_updates="7" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+        <transient_attributes id="3232261517">
Oct 15 15:16:12 [14872] vm1      attrd:     info: write_attribute: 	Write out of probe_complete delayed: update 4 in progress
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+          <instance_attributes id="status-3232261517">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++           <nvpair id="status-3232261517-probe_complete" name="probe_complete" value="true"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+          </instance_attributes>
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/4, version=0.5.7)
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+        </transient_attributes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:12 [14872] vm1      attrd:     info: attrd_cib_callback: 	Update 4 for probe_complete: OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 4 for probe_complete[vm1]=true: OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 4 for probe_complete[vm2]=(null): OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 4 for probe_complete[vm3]=(null): OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm1[probe_complete]=true (3232261517 3232261517 3232261517 vm1)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm2[probe_complete]=true (3232261518 3232261518 3232261518 vm2)
Oct 15 15:16:12 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm3[probe_complete]=true (3232261519 3232261519 3232261519 vm3)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: write_attribute: 	Sent update 5 with 3 changes for probe_complete, id=<n/a>, set=(null)
Oct 15 15:16:12 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.5.7 -> 0.5.8 (S_IDLE)
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.5.7
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.5.8 2f185275dac4d8bb189544533a0ea4a5
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="7"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="5" num_updates="8" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:12 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+        <transient_attributes id="3232261518">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+          <instance_attributes id="status-3232261518">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++           <nvpair id="status-3232261518-probe_complete" name="probe_complete" value="true"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+          </instance_attributes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+        </transient_attributes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_state_transition" join="member" expected="member">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+        <transient_attributes id="3232261519">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+          <instance_attributes id="status-3232261519">
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	++           <nvpair id="status-3232261519-probe_complete" name="probe_complete" value="true"/>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+          </instance_attributes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+        </transient_attributes>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:12 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:12 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/5, version=0.5.8)
Oct 15 15:16:12 [14872] vm1      attrd:     info: attrd_cib_callback: 	Update 5 for probe_complete: OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 5 for probe_complete[vm1]=true: OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 5 for probe_complete[vm2]=true: OK (0)
Oct 15 15:16:12 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 5 for probe_complete[vm3]=true: OK (0)
Oct 15 15:16:21 [14869] vm1        cib:     info: crm_client_new: 	Connecting 0x1441290 for uid=0 gid=0 pid=14881 id=a77c8b98-a009-4d98-8874-7a6469166f7e
Oct 15 15:16:21 [14869] vm1        cib:    debug: handle_new_connection: 	IPC credentials authenticated (14869-14881-14)
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_ipcs_shm_connect: 	connecting to client [14881]
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.5.8)
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_ipcs_dispatch_connection_request: 	HUP conn (14869-14881-14)
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_ipcs_disconnect: 	qb_ipcs_disconnect(14869-14881-14) state:2
Oct 15 15:16:21 [14869] vm1        cib:     info: crm_client_destroy: 	Destroying 0 events
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-14869-14881-14-header
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-14869-14881-14-header
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-14869-14881-14-header
Oct 15 15:16:21 [14869] vm1        cib:     info: crm_client_new: 	Connecting 0x1441290 for uid=0 gid=0 pid=14882 id=d36183bc-0a1f-4e87-9823-8b6bf77863af
Oct 15 15:16:21 [14869] vm1        cib:    debug: handle_new_connection: 	IPC credentials authenticated (14869-14882-14)
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_ipcs_shm_connect: 	connecting to client [14882]
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.5.8)
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_ipcs_dispatch_connection_request: 	HUP conn (14869-14882-14)
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_ipcs_disconnect: 	qb_ipcs_disconnect(14869-14882-14) state:2
Oct 15 15:16:21 [14869] vm1        cib:     info: crm_client_destroy: 	Destroying 0 events
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-14869-14882-14-header
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-14869-14882-14-header
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-14869-14882-14-header
Oct 15 15:16:21 [14869] vm1        cib:     info: crm_client_new: 	Connecting 0x1441290 for uid=0 gid=0 pid=14922 id=83eacdc1-8fdf-496d-9a1f-06a9a962d7bd
Oct 15 15:16:21 [14869] vm1        cib:    debug: handle_new_connection: 	IPC credentials authenticated (14869-14922-14)
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_ipcs_shm_connect: 	connecting to client [14922]
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:21 [14869] vm1        cib:    debug: activateCibXml: 	Triggering CIB write for cib_replace op
Oct 15 15:16:21 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_replace): 0.5.8 -> 0.6.1 (S_IDLE)
Oct 15 15:16:21 [14874] vm1       crmd:     info: abort_transition_graph: 	te_update_diff:126 - Triggered transition abort (complete=1, node=, tag=diff, id=(null), magic=NA, cib=0.6.1) : Non-status change
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause   <diff crm_feature_set="3.0.7" digest="1c8a43265f13cefc2951afe21b89b052">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause     <diff-removed admin_epoch="0" epoch="5" num_updates="8">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause       <cib admin_epoch="0" epoch="5" num_updates="8">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause         <configuration>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause           <crm_config>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             <cluster_property_set id="cib-bootstrap-options">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-0.284.6a5e863.git.el6-6a5e863" __crm_diff_marker__="removed:top"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync" __crm_diff_marker__="removed:top"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             </cluster_property_set>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause           </crm_config>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause         </configuration>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause       </cib>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause     </diff-removed>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause     <diff-added>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause       <cib epoch="6" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="cibadmin" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause         <configuration>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause           <crm_config>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             <cluster_property_set id="cib-bootstrap-options">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <nvpair name="no-quorum-policy" value="freeze" id="cib-bootstrap-options-no-quorum-policy" __crm_diff_marker__="added:top"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <nvpair name="stonith-enabled" value="true" id="cib-bootstrap-options-stonith-enabled" __crm_diff_marker__="added:top"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <nvpair name="startup-fencing" value="false" id="cib-bootstrap-options-startup-fencing" __crm_diff_marker__="added:top"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <nvpair name="stonith-timeout" value="60s" id="cib-bootstrap-options-stonith-timeout" __crm_diff_marker__="added:top"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <nvpair name="crmd-transition-delay" value="2s" id="cib-bootstrap-options-crmd-transition-delay" __crm_diff_marker__="added:top"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             </cluster_property_set>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause           </crm_config>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause           <resources>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             <primitive id="pDummy" class="ocf" provider="pacemaker" type="Dummy" __crm_diff_marker__="added:top">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <operations>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                 <op name="monitor" interval="10s" timeout="300s" on-fail="fence" id="pDummy-monitor-10s"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               </operations>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             </primitive>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             <group id="gStonith3" __crm_diff_marker__="added:top">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <primitive id="f1" class="stonith" type="external/libvirt">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                 <instance_attributes id="f1-instance_attributes">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                   <nvpair name="hostlist" value="vm3" id="f1-instance_attributes-hostlist"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                   <nvpair name="hypervisor_uri" value="qemu+ssh://bl460g1n6/system" id="f1-instance_attributes-hypervisor_uri"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                 </instance_attributes>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                 <operations>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                   <op name="start" interval="0s" timeout="60s" id="f1-start-0s"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                   <op name="monitor" interval="3600s" timeout="60s" id="f1-monitor-3600s"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                   <op name="stop" interval="0s" timeout="60s" id="f1-stop-0s"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                 </operations>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               </primitive>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <primitive id="f2" class="stonith" type="external/ssh">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                 <instance_attributes id="f2-instance_attributes">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                   <nvpair name="pcmk_reboot_retries" value="1" id="f2-instance_attributes-pcmk_reboot_retries"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                   <nvpair name="hostlist" value="vm3" id="f2-instance_attributes-hostlist"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                 </instance_attributes>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                 <operations>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                   <op name="start" interval="0s" timeout="60s" id="f2-start-0s"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                   <op name="monitor" interval="3600s" timeout="60s" id="f2-monitor-3600s"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                   <op name="stop" interval="0s" timeout="60s" id="f2-stop-0s"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                 </operations>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               </primitive>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             </group>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause           </resources>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause           <constraints>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             <rsc_location id="l1" rsc="pDummy" __crm_diff_marker__="added:top">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <rule score="300" id="l1-rule">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                 <expression attribute="#uname" operation="eq" value="vm3" id="l1-expression"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               </rule>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             </rsc_location>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             <rsc_location id="l2" rsc="gStonith3" __crm_diff_marker__="added:top">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <rule score="-INFINITY" id="l2-rule">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                 <expression attribute="#uname" operation="eq" value="vm3" id="l2-expression"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               </rule>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <rule score="200" id="l2-rule-0">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause                 <expression attribute="#uname" operation="eq" value="vm1" id="l2-expression-0"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               </rule>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             </rsc_location>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause           </constraints>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause           <fencing-topology __crm_diff_marker__="added:top">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             <fencing-level target="vm3" devices="f1" index="1" id="fencing"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             <fencing-level target="vm3" devices="f2" index="2" id="fencing-0"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause           </fencing-topology>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause           <rsc_defaults __crm_diff_marker__="added:top">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             <meta_attributes id="rsc-options">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <nvpair name="resource-stickiness" value="INFINITY" id="rsc-options-resource-stickiness"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause               <nvpair name="migration-threshold" value="1" id="rsc-options-migration-threshold"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause             </meta_attributes>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause           </rsc_defaults>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause         </configuration>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause       </cib>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause     </diff-added>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause   </diff>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 15 15:16:21 [14874] vm1       crmd:   notice: do_state_transition: 	State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_pe_invoke: 	Query 32: Requesting the current CIB: S_POLICY_ENGINE
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.5.8
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.6.1 1c8a43265f13cefc2951afe21b89b052
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	-  <cib admin_epoch="0" epoch="5" num_updates="8">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	-    <configuration>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	-      <crm_config>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	-        <cluster_property_set id="cib-bootstrap-options">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	--         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-0.284.6a5e863.git.el6-6a5e863"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	--         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	-        </cluster_property_set>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	-      </crm_config>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	-    </configuration>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	-  </cib>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="6" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="cibadmin" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	+    <configuration>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	+      <crm_config>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <nvpair name="no-quorum-policy" value="freeze" id="cib-bootstrap-options-no-quorum-policy"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <nvpair name="stonith-enabled" value="true" id="cib-bootstrap-options-stonith-enabled"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <nvpair name="startup-fencing" value="false" id="cib-bootstrap-options-startup-fencing"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <nvpair name="stonith-timeout" value="60s" id="cib-bootstrap-options-stonith-timeout"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <nvpair name="crmd-transition-delay" value="2s" id="cib-bootstrap-options-crmd-transition-delay"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	+        </cluster_property_set>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	+      </crm_config>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	+      <resources>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++       <primitive id="pDummy" class="ocf" provider="pacemaker" type="Dummy">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <operations>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++           <op name="monitor" interval="10s" timeout="300s" on-fail="fence" id="pDummy-monitor-10s"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         </operations>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++       </primitive>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++       <group id="gStonith3">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <primitive id="f1" class="stonith" type="external/libvirt">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++           <instance_attributes id="f1-instance_attributes">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++             <nvpair name="hostlist" value="vm3" id="f1-instance_attributes-hostlist"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++             <nvpair name="hypervisor_uri" value="qemu+ssh://bl460g1n6/system" id="f1-instance_attributes-hypervisor_uri"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++           </instance_attributes>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++           <operations>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++             <op name="start" interval="0s" timeout="60s" id="f1-start-0s"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++             <op name="monitor" interval="3600s" timeout="60s" id="f1-monitor-3600s"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++             <op name="stop" interval="0s" timeout="60s" id="f1-stop-0s"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++           </operations>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         </primitive>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <primitive id="f2" class="stonith" type="external/ssh">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++           <instance_attributes id="f2-instance_attributes">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++             <nvpair name="pcmk_reboot_retries" value="1" id="f2-instance_attributes-pcmk_reboot_retries"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++             <nvpair name="hostlist" value="vm3" id="f2-instance_attributes-hostlist"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++           </instance_attributes>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++           <operations>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++             <op name="start" interval="0s" timeout="60s" id="f2-start-0s"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++             <op name="monitor" interval="3600s" timeout="60s" id="f2-monitor-3600s"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++             <op name="stop" interval="0s" timeout="60s" id="f2-stop-0s"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++           </operations>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         </primitive>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++       </group>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	+      </resources>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	+      <constraints>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++       <rsc_location id="l1" rsc="pDummy">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <rule score="300" id="l1-rule">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm3" id="l1-expression"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         </rule>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++       </rsc_location>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++       <rsc_location id="l2" rsc="gStonith3">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <rule score="-INFINITY" id="l2-rule">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm3" id="l2-expression"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         </rule>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <rule score="200" id="l2-rule-0">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++           <expression attribute="#uname" operation="eq" value="vm1" id="l2-expression-0"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         </rule>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++       </rsc_location>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	+      </constraints>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++     <fencing-topology>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++       <fencing-level target="vm3" devices="f1" index="1" id="fencing"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++       <fencing-level target="vm3" devices="f2" index="2" id="fencing-0"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++     </fencing-topology>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++     <rsc_defaults>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++       <meta_attributes id="rsc-options">
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <nvpair name="resource-stickiness" value="INFINITY" id="rsc-options-resource-stickiness"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++         <nvpair name="migration-threshold" value="1" id="rsc-options-migration-threshold"/>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++       </meta_attributes>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	++     </rsc_defaults>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	+    </configuration>
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
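[annotation] The configuration pushed in the diff above amounts to: a Dummy resource that is fenced on monitor failure, a two-device fencing group for vm3 (external/libvirt first, external/ssh as a fallback), matching fencing levels, and location rules that keep gStonith3 off vm3 and prefer vm1. A roughly equivalent crm shell rendering, reconstructed from the diff for readability and not taken from this log, would be:

    property no-quorum-policy=freeze stonith-enabled=true startup-fencing=false \
        stonith-timeout=60s crmd-transition-delay=2s
    primitive pDummy ocf:pacemaker:Dummy \
        op monitor interval=10s timeout=300s on-fail=fence
    primitive f1 stonith:external/libvirt \
        params hostlist=vm3 hypervisor_uri="qemu+ssh://bl460g1n6/system" \
        op start interval=0s timeout=60s op monitor interval=3600s timeout=60s op stop interval=0s timeout=60s
    primitive f2 stonith:external/ssh \
        params pcmk_reboot_retries=1 hostlist=vm3 \
        op start interval=0s timeout=60s op monitor interval=3600s timeout=60s op stop interval=0s timeout=60s
    group gStonith3 f1 f2
    location l1 pDummy rule 300: #uname eq vm3
    location l2 gStonith3 rule -inf: #uname eq vm3 rule 200: #uname eq vm1
    fencing_topology vm3: f1 vm3: f2
    rsc_defaults resource-stickiness=INFINITY migration-threshold=1

In the messages that follow, stonith-ng registers the two fencing levels for vm3 and starts pulling agent metadata for the devices it is allowed to run (f1 on vm1, score 200 from l2).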
Oct 15 15:16:21 [14870] vm1 stonith-ng:     info: stonith_level_remove: 	Node vm3 not found (0 active entries)
Oct 15 15:16:21 [14870] vm1 stonith-ng:     info: stonith_level_register: 	Node vm3 has 1 active fencing levels
Oct 15 15:16:21 [14870] vm1 stonith-ng:     info: stonith_level_register: 	Node vm3 has 2 active fencing levels
Oct 15 15:16:21 [14870] vm1 stonith-ng:     info: update_cib_stonith_devices: 	Updating device list from the cib: new resource
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: unpack_config: 	STONITH timeout: 60000
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: unpack_config: 	STONITH of failed nodes is enabled
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: unpack_config: 	Stop all active resources: false
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: unpack_config: 	Default stickiness: 0
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 15 15:16:21 [14870] vm1 stonith-ng:  warning: handle_startup_fencing: 	Blind faith: not fencing unseen nodes
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: unpack_domains: 	Unpacking domains
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: group_rsc_location: 	Processing rsc_location l2-rule-0 for gStonith3
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: group_rsc_location: 	Processing rsc_location l2-rule for gStonith3
Oct 15 15:16:21 [14870] vm1 stonith-ng:     info: cib_device_update: 	Device f1 is allowed on vm1: score=200
Oct 15 15:16:21 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action metadata for agent fence_legacy (target=(null))
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:16:21 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_replace_notify: 	Replaced: 0.5.8 -> 0.6.1 from vm1
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_cib_replaced: 	Updating the CIB after a replace: DC=true
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_ELECTION: [ state=S_POLICY_ENGINE cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Oct 15 15:16:21 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=do_cib_replaced ]
Oct 15 15:16:21 [14874] vm1       crmd:     info: update_dc: 	Unset DC. Was vm1
Oct 15 15:16:21 [14874] vm1       crmd:    debug: crm_uptime: 	Current CPU usage is: 0s, 48992us
Oct 15 15:16:21 [14874] vm1       crmd:    debug: election_vote: 	Started election 4
Oct 15 15:16:21 [14872] vm1      attrd:   notice: attrd_cib_replaced_cb: 	Updating all attributes after cib_refresh_notify event
Oct 15 15:16:21 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm1[shutdown]=(null) (3232261517 3232261517 3232261517 vm1)
Oct 15 15:16:21 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm2[shutdown]=(null) (3232261518 3232261518 3232261518 vm2)
Oct 15 15:16:21 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm3[shutdown]=(null) (3232261519 3232261519 3232261519 vm3)
Oct 15 15:16:21 [14872] vm1      attrd:   notice: write_attribute: 	Sent update 6 with 3 changes for shutdown, id=<n/a>, set=(null)
Oct 15 15:16:21 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm1[terminate]=(null) (3232261517 3232261517 3232261517 vm1)
Oct 15 15:16:21 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm2[terminate]=(null) (3232261518 3232261518 3232261518 vm2)
Oct 15 15:16:21 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm3[terminate]=(null) (3232261519 3232261519 3232261519 vm3)
Oct 15 15:16:21 [14872] vm1      attrd:   notice: write_attribute: 	Sent update 7 with 3 changes for terminate, id=<n/a>, set=(null)
Oct 15 15:16:21 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm1[probe_complete]=true (3232261517 3232261517 3232261517 vm1)
Oct 15 15:16:21 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm2[probe_complete]=true (3232261518 3232261518 3232261518 vm2)
Oct 15 15:16:21 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm3[probe_complete]=true (3232261519 3232261519 3232261519 vm3)
Oct 15 15:16:21 [14872] vm1      attrd:   notice: write_attribute: 	Sent update 8 with 3 changes for probe_complete, id=<n/a>, set=(null)
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	Diff: --- 0.5.8
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	Diff: +++ 0.6.1 1c8a43265f13cefc2951afe21b89b052
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	--         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-0.284.6a5e863.git.el6-6a5e863"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	--         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <nvpair name="no-quorum-policy" value="freeze" id="cib-bootstrap-options-no-quorum-policy"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <nvpair name="stonith-enabled" value="true" id="cib-bootstrap-options-stonith-enabled"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <nvpair name="startup-fencing" value="false" id="cib-bootstrap-options-startup-fencing"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <nvpair name="stonith-timeout" value="60s" id="cib-bootstrap-options-stonith-timeout"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <nvpair name="crmd-transition-delay" value="2s" id="cib-bootstrap-options-crmd-transition-delay"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++       <primitive id="pDummy" class="ocf" provider="pacemaker" type="Dummy">
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <operations>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++           <op name="monitor" interval="10s" timeout="300s" on-fail="fence" id="pDummy-monitor-10s"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         </operations>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++       </primitive>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: election_count_vote: 	Created voted hash
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++       <group id="gStonith3">
Oct 15 15:16:21 [14874] vm1       crmd:    debug: election_count_vote: 	Election 4 (current: 4, owner: 3232261517): Processed vote from vm1 (Recorded)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: election_check: 	Still waiting on 2 non-votes (3 total)
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <primitive id="f1" class="stonith" type="external/libvirt">
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++           <instance_attributes id="f1-instance_attributes">
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++             <nvpair name="hostlist" value="vm3" id="f1-instance_attributes-hostlist"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++             <nvpair name="hypervisor_uri" value="qemu+ssh://bl460g1n6/system" id="f1-instance_attributes-hypervisor_uri"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++           </instance_attributes>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++           <operations>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++             <op name="start" interval="0s" timeout="60s" id="f1-start-0s"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++             <op name="monitor" interval="3600s" timeout="60s" id="f1-monitor-3600s"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++             <op name="stop" interval="0s" timeout="60s" id="f1-stop-0s"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++           </operations>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         </primitive>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <primitive id="f2" class="stonith" type="external/ssh">
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++           <instance_attributes id="f2-instance_attributes">
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++             <nvpair name="pcmk_reboot_retries" value="1" id="f2-instance_attributes-pcmk_reboot_retries"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++             <nvpair name="hostlist" value="vm3" id="f2-instance_attributes-hostlist"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++           </instance_attributes>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++           <operations>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++             <op name="start" interval="0s" timeout="60s" id="f2-start-0s"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++             <op name="monitor" interval="3600s" timeout="60s" id="f2-monitor-3600s"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: election_count_vote: 	Election 4 (current: 4, owner: 3232261517): Processed no-vote from vm2 (Recorded)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: election_check: 	Still waiting on 1 non-votes (3 total)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: election_count_vote: 	Election 4 (current: 4, owner: 3232261517): Processed no-vote from vm3 (Recorded)
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++             <op name="stop" interval="0s" timeout="60s" id="f2-stop-0s"/>
Oct 15 15:16:21 [14874] vm1       crmd:     info: election_timer_cb: 	Election election-0 complete
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++           </operations>
Oct 15 15:16:21 [14874] vm1       crmd:     info: election_timeout_popped: 	Election failed: Declaring ourselves the winner
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_ELECTION_DC: [ state=S_ELECTION cause=C_TIMER_POPPED origin=election_timeout_popped ]
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         </primitive>
Oct 15 15:16:21 [14874] vm1       crmd:     info: do_log: 	FSA: Input I_ELECTION_DC from election_timeout_popped() received in state S_ELECTION
Oct 15 15:16:21 [14874] vm1       crmd:   notice: do_state_transition: 	State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_TIMER_POPPED origin=election_timeout_popped ]
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_te_control: 	The transitioner is already active
Oct 15 15:16:21 [14874] vm1       crmd:    debug: crm_timer_start: 	Started Integration Timer (I_INTEGRATED:180000ms), src=48
Oct 15 15:16:21 [14874] vm1       crmd:     info: do_dc_takeover: 	Taking over DC status for this partition
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++       </group>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++       <rsc_location id="l1" rsc="pDummy">
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <rule score="300" id="l1-rule">
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++           <expression attribute="#uname" operation="eq" value="vm3" id="l1-expression"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         </rule>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++       </rsc_location>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++       <rsc_location id="l2" rsc="gStonith3">
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <rule score="-INFINITY" id="l2-rule">
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++           <expression attribute="#uname" operation="eq" value="vm3" id="l2-expression"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         </rule>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <rule score="200" id="l2-rule-0">
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++           <expression attribute="#uname" operation="eq" value="vm1" id="l2-expression-0"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         </rule>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++       </rsc_location>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++     <fencing-topology>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++       <fencing-level target="vm3" devices="f1" index="1" id="fencing"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++       <fencing-level target="vm3" devices="f2" index="2" id="fencing-0"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++     </fencing-topology>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++     <rsc_defaults>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++       <meta_attributes id="rsc-options">
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <nvpair name="resource-stickiness" value="INFINITY" id="rsc-options-resource-stickiness"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <nvpair name="migration-threshold" value="1" id="rsc-options-migration-threshold"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++       </meta_attributes>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++     </rsc_defaults>
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_replace operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.6.1)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/32, version=0.6.1)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/33, version=0.6.1)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/34, version=0.6.1)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/35, version=0.6.1)
Oct 15 15:16:21 [14869] vm1        cib:    debug: cib_process_readwrite: 	We are still in R/W mode
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_master operation for section 'all': OK (rc=0, origin=local/crmd/36, version=0.6.1)
Oct 15 15:16:21 [14869] vm1        cib:    debug: cib_process_modify: 	Destroying /cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2]
Oct 15 15:16:21 [14869] vm1        cib:    debug: cib_process_modify: 	Destroying /cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2]
Oct 15 15:16:21 [14869] vm1        cib:    debug: cib_process_modify: 	Destroying /cib/status/node_state[3]/transient_attributes/instance_attributes/nvpair[2]
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/6, version=0.6.1)
Oct 15 15:16:21 [14872] vm1      attrd:     info: attrd_cib_callback: 	Update 6 for shutdown: OK (0)
Oct 15 15:16:21 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 6 for shutdown[vm1]=(null): OK (0)
Oct 15 15:16:21 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 6 for shutdown[vm2]=(null): OK (0)
Oct 15 15:16:21 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 6 for shutdown[vm3]=(null): OK (0)
Oct 15 15:16:21 [14869] vm1        cib:    debug: cib_process_modify: 	Destroying /cib/status/node_state[1]/transient_attributes/instance_attributes/nvpair[2]
Oct 15 15:16:21 [14869] vm1        cib:    debug: cib_process_modify: 	Destroying /cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2]
Oct 15 15:16:21 [14869] vm1        cib:    debug: cib_process_modify: 	Destroying /cib/status/node_state[3]/transient_attributes/instance_attributes/nvpair[2]
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/7, version=0.6.1)
Oct 15 15:16:21 [14872] vm1      attrd:     info: attrd_cib_callback: 	Update 7 for terminate: OK (0)
Oct 15 15:16:21 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 7 for terminate[vm1]=(null): OK (0)
Oct 15 15:16:21 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 7 for terminate[vm2]=(null): OK (0)
Oct 15 15:16:21 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 7 for terminate[vm3]=(null): OK (0)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/8, version=0.6.1)
Oct 15 15:16:21 [14872] vm1      attrd:     info: attrd_cib_callback: 	Update 8 for probe_complete: OK (0)
Oct 15 15:16:21 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 8 for probe_complete[vm1]=true: OK (0)
Oct 15 15:16:21 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 8 for probe_complete[vm2]=true: OK (0)
Oct 15 15:16:21 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 8 for probe_complete[vm3]=true: OK (0)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/37, version=0.6.1)
Oct 15 15:16:21 [14869] vm1        cib:    debug: cib_process_xpath: 	cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version'] does not exist
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='dc-version']: No such device or address (rc=-6, origin=local/crmd/38, version=0.6.1)
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_ipcs_dispatch_connection_request: 	HUP conn (14869-14922-14)
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_ipcs_disconnect: 	qb_ipcs_disconnect(14869-14922-14) state:2
Oct 15 15:16:21 [14869] vm1        cib:     info: crm_client_destroy: 	Destroying 0 events
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-14869-14922-14-header
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-14869-14922-14-header
Oct 15 15:16:21 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-14869-14922-14-header
Oct 15 15:16:21 [14869] vm1        cib:    debug: activateCibXml: 	Triggering CIB write for cib_modify op
Oct 15 15:16:21 [14869] vm1        cib:   notice: log_cib_diff: 	cib:diff: Local-only Change: 0.7.1
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	-- <cib admin_epoch="0" epoch="6" num_updates="1"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-0.284.6a5e863.git.el6-6a5e863"/>
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/39, version=0.7.1)
Oct 15 15:16:21 [14869] vm1        cib:    debug: cib_process_xpath: 	cib_query: //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure'] does not exist
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section //cib/configuration/crm_config//cluster_property_set//nvpair[@name='cluster-infrastructure']: No such device or address (rc=-6, origin=local/crmd/40, version=0.7.1)
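[annotation] The two rc=-6 (No such device or address) replies above simply mean the xpath queries found nothing: the full cib_replace issued by cibadmin overwrote cluster_property_set and dropped dc-version and cluster-infrastructure, so the DC re-inserts them itself (the local-only change to 0.7.1 just above, and to 0.8.1 a little further down). A hand check of the restored values could use something like the following, shown here as a hypothetical aid and not part of this log:

    crm_attribute --type crm_config --name dc-version --query
    crm_attribute --type crm_config --name cluster-infrastructure --query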
Oct 15 15:16:21 [14874] vm1       crmd:    debug: initialize_join: 	join-3: Initializing join data (flag=true)
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	initialize_join: Node vm3[3232261519] - join-3 phase 4 -> 0
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	initialize_join: Node vm1[3232261517] - join-3 phase 4 -> 0
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	initialize_join: Node vm2[3232261518] - join-3 phase 4 -> 0
Oct 15 15:16:21 [14874] vm1       crmd:     info: join_make_offer: 	join-3: Sending offer to vm3
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	join_make_offer: Node vm3[3232261519] - join-3 phase 0 -> 1
Oct 15 15:16:21 [14874] vm1       crmd:     info: join_make_offer: 	join-3: Sending offer to vm1
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	join_make_offer: Node vm1[3232261517] - join-3 phase 0 -> 1
Oct 15 15:16:21 [14874] vm1       crmd:     info: join_make_offer: 	join-3: Sending offer to vm2
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	join_make_offer: Node vm2[3232261518] - join-3 phase 0 -> 1
Oct 15 15:16:21 [14874] vm1       crmd:     info: do_dc_join_offer_all: 	join-3: Waiting on 3 outstanding join acks
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_ELECTION_DC: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=do_election_check ]
Oct 15 15:16:21 [14874] vm1       crmd:  warning: do_log: 	FSA: Input I_ELECTION_DC from do_election_check() received in state S_INTEGRATION
Oct 15 15:16:21 [14874] vm1       crmd:    debug: election_vote: 	Started election 5
Oct 15 15:16:21 [14874] vm1       crmd:    debug: initialize_join: 	join-4: Initializing join data (flag=true)
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	initialize_join: Node vm3[3232261519] - join-4 phase 1 -> 0
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	initialize_join: Node vm1[3232261517] - join-4 phase 1 -> 0
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	initialize_join: Node vm2[3232261518] - join-4 phase 1 -> 0
Oct 15 15:16:21 [14874] vm1       crmd:     info: join_make_offer: 	join-4: Sending offer to vm3
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	join_make_offer: Node vm3[3232261519] - join-4 phase 0 -> 1
Oct 15 15:16:21 [14874] vm1       crmd:     info: join_make_offer: 	join-4: Sending offer to vm1
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	join_make_offer: Node vm1[3232261517] - join-4 phase 0 -> 1
Oct 15 15:16:21 [14874] vm1       crmd:     info: join_make_offer: 	join-4: Sending offer to vm2
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	join_make_offer: Node vm2[3232261518] - join-4 phase 0 -> 1
Oct 15 15:16:21 [14874] vm1       crmd:     info: do_dc_join_offer_all: 	join-4: Waiting on 3 outstanding join acks
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_pe_invoke_callback: 	Discarding PE request in state: S_INTEGRATION
Oct 15 15:16:21 [14874] vm1       crmd:    debug: config_query_callback: 	Call 33 : Parsing CIB options
Oct 15 15:16:21 [14874] vm1       crmd:    debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Oct 15 15:16:21 [14874] vm1       crmd:    debug: config_query_callback: 	Checking for expired actions every 900000ms
Oct 15 15:16:21 [14869] vm1        cib:    debug: activateCibXml: 	Triggering CIB write for cib_modify op
Oct 15 15:16:21 [14874] vm1       crmd:    debug: handle_request: 	Raising I_JOIN_OFFER: join-3
Oct 15 15:16:21 [14874] vm1       crmd:    debug: handle_request: 	Raising I_JOIN_OFFER: join-4
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:21 [14874] vm1       crmd:     info: update_dc: 	Set DC to vm1 (3.0.7)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Oct 15 15:16:21 [14874] vm1       crmd:    debug: election_count_vote: 	Created voted hash
Oct 15 15:16:21 [14874] vm1       crmd:    debug: election_count_vote: 	Election 5 (current: 5, owner: 3232261517): Processed vote from vm1 (Recorded)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_election_check: 	Ignore election check: we are not in an election
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_OFFER: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_cl_join_offer_respond: 	do_cl_join_offer_respond added action A_DC_TIMER_STOP to the FSA
Oct 15 15:16:21 [14869] vm1        cib:   notice: log_cib_diff: 	cib:diff: Local-only Change: 0.8.1
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	-- <cib admin_epoch="0" epoch="7" num_updates="1"/>
Oct 15 15:16:21 [14869] vm1        cib:   notice: cib:diff: 	++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section crm_config: OK (rc=0, origin=local/crmd/41, version=0.8.1)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: election_count_vote: 	Election 5 (current: 5, owner: 3232261517): Processed no-vote from vm3 (Recorded)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_election_check: 	Ignore election check: we are not in an election
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	Processing req from vm3
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	join-4: Welcoming node vm3 (ref join_request-crmd-1381817781-11)
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	do_dc_join_filter_offer: Node vm3[3232261519] - join-4 phase 1 -> 2
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	1 nodes have been integrated into join-4
Oct 15 15:16:21 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	join-4: Still waiting on 2 outstanding offers
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/42, version=0.8.1)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: config_query_callback: 	Call 42 : Parsing CIB options
Oct 15 15:16:21 [14874] vm1       crmd:    debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Oct 15 15:16:21 [14874] vm1       crmd:    debug: config_query_callback: 	Checking for expired actions every 900000ms
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/43, version=0.8.1)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: config_query_callback: 	Call 43 : Parsing CIB options
Oct 15 15:16:21 [14874] vm1       crmd:    debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Oct 15 15:16:21 [14874] vm1       crmd:    debug: config_query_callback: 	Checking for expired actions every 900000ms
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section crm_config: OK (rc=0, origin=local/crmd/44, version=0.8.1)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: config_query_callback: 	Call 44 : Parsing CIB options
Oct 15 15:16:21 [14874] vm1       crmd:    debug: config_query_callback: 	Shutdown escalation occurs after: 1200000ms
Oct 15 15:16:21 [14874] vm1       crmd:    debug: config_query_callback: 	Checking for expired actions every 900000ms
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/45, version=0.8.1)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/46, version=0.8.1)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: join_query_callback: 	Respond to join offer join-4
Oct 15 15:16:21 [14874] vm1       crmd:    debug: join_query_callback: 	Acknowledging vm1 as our DC
Oct 15 15:16:21 [14874] vm1       crmd:    debug: election_count_vote: 	Election 5 (current: 5, owner: 3232261517): Processed no-vote from vm2 (Recorded)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_election_check: 	Ignore election check: we are not in an election
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	Processing req from vm2
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	join-4: Welcoming node vm2 (ref join_request-crmd-1381817781-12)
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	do_dc_join_filter_offer: Node vm2[3232261518] - join-4 phase 1 -> 2
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	2 nodes have been integrated into join-4
Oct 15 15:16:21 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	join-4: Still waiting on 1 outstanding offers
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_REQUEST: [ state=S_INTEGRATION cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	Processing req from vm1
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	vm1 has a better generation number than the current max vm3
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	Max generation   <generation_tuple epoch="7" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	Their generation   <generation_tuple epoch="8" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	join-4: Welcoming node vm1 (ref join_request-crmd-1381817781-28)
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	do_dc_join_filter_offer: Node vm1[3232261517] - join-4 phase 1 -> 2
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_filter_offer: 	3 nodes have been integrated into join-4
Oct 15 15:16:21 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by do_dc_join_filter_offer in state: S_INTEGRATION
Oct 15 15:16:21 [14874] vm1       crmd:    debug: check_join_state: 	join-4: Integration of 3 peers complete: do_dc_join_filter_offer
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_INTEGRATED: [ state=S_INTEGRATION cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 15 15:16:21 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_state_transition: 	All 3 cluster nodes responded to the join offer.
Oct 15 15:16:21 [14874] vm1       crmd:    debug: crm_timer_start: 	Started Finalization Timer (I_ELECTION:1800000ms), src=56
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_finalize: 	Finalizing join-4 for 3 clients
Oct 15 15:16:21 [14874] vm1       crmd:     info: crmd_join_phase_log: 	join-4: vm3=integrated
Oct 15 15:16:21 [14874] vm1       crmd:     info: crmd_join_phase_log: 	join-4: vm1=integrated
Oct 15 15:16:21 [14874] vm1       crmd:     info: crmd_join_phase_log: 	join-4: vm2=integrated
Oct 15 15:16:21 [14874] vm1       crmd:     info: do_dc_join_finalize: 	join-4: Syncing our CIB to the rest of the cluster
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_finalize: 	Requested version   <generation_tuple epoch="8" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Oct 15 15:16:21 [14869] vm1        cib:    debug: sync_our_cib: 	Syncing CIB to all peers
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_sync operation for section 'all': OK (rc=0, origin=local/crmd/47, version=0.8.1)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by finalize_sync_callback in state: S_FINALIZE_JOIN
Oct 15 15:16:21 [14874] vm1       crmd:    debug: check_join_state: 	join-4: Still waiting on 3 integrated nodes
Oct 15 15:16:21 [14874] vm1       crmd:    debug: crmd_join_phase_log: 	join-4: vm3=integrated
Oct 15 15:16:21 [14874] vm1       crmd:    debug: crmd_join_phase_log: 	join-4: vm1=integrated
Oct 15 15:16:21 [14874] vm1       crmd:    debug: crmd_join_phase_log: 	join-4: vm2=integrated
Oct 15 15:16:21 [14874] vm1       crmd:    debug: finalize_sync_callback: 	Notifying 3 clients of join-4 results
Oct 15 15:16:21 [14874] vm1       crmd:    debug: finalize_join_for: 	join-4: ACK'ing join request from vm3
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	finalize_join_for: Node vm3[3232261519] - join-4 phase 2 -> 3
Oct 15 15:16:21 [14874] vm1       crmd:    debug: finalize_join_for: 	join-4: ACK'ing join request from vm1
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	finalize_join_for: Node vm1[3232261517] - join-4 phase 2 -> 3
Oct 15 15:16:21 [14874] vm1       crmd:    debug: finalize_join_for: 	join-4: ACK'ing join request from vm2
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	finalize_join_for: Node vm2[3232261518] - join-4 phase 2 -> 3
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/48, version=0.8.1)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: handle_request: 	Raising I_JOIN_RESULT: join-4
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_cl_join_finalize_respond: 	Confirming join join-4: join_ack_nack
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_cl_join_finalize_respond: 	join-4: Join complete.  Sending local LRM status to vm1
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_ack: 	Ignoring op=join_ack_nack message from vm1
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	do_dc_join_ack: Node vm3[3232261519] - join-4 phase 3 -> 4
Oct 15 15:16:21 [14874] vm1       crmd:     info: do_dc_join_ack: 	join-4: Updating node state to member for vm3
Oct 15 15:16:21 [14874] vm1       crmd:     info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm3']/lrm
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_ack: 	join-4: Registered callback for LRM update 52
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	do_dc_join_ack: Node vm1[3232261517] - join-4 phase 3 -> 4
Oct 15 15:16:21 [14874] vm1       crmd:     info: do_dc_join_ack: 	join-4: Updating node state to member for vm1
Oct 15 15:16:21 [14874] vm1       crmd:     info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm1']/lrm
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_ack: 	join-4: Registered callback for LRM update 54
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_JOIN_RESULT: [ state=S_FINALIZE_JOIN cause=C_HA_MESSAGE origin=route_message ]
Oct 15 15:16:21 [14874] vm1       crmd:     info: crm_update_peer_join: 	do_dc_join_ack: Node vm2[3232261518] - join-4 phase 3 -> 4
Oct 15 15:16:21 [14874] vm1       crmd:     info: do_dc_join_ack: 	join-4: Updating node state to member for vm2
Oct 15 15:16:21 [14874] vm1       crmd:     info: erase_status_tag: 	Deleting xpath: //node_state[@uname='vm2']/lrm
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_ack: 	join-4: Registered callback for LRM update 56
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/49, version=0.8.1)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/50, version=0.8.1)
Oct 15 15:16:21 [14869] vm1        cib:    debug: cib_process_xpath: 	Processing cib_delete op for //node_state[@uname='vm3']/lrm (/cib/status/node_state[2]/lrm)
Oct 15 15:16:21 [14869] vm1        cib:     info: write_cib_contents: 	Archived previous version as /var/lib/pacemaker/cib/cib-2.raw
Oct 15 15:16:21 [14869] vm1        cib:    debug: write_cib_contents: 	Writing CIB to disk
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm3']/lrm: OK (rc=0, origin=local/crmd/51, version=0.8.2)
Oct 15 15:16:21 [14869] vm1        cib:     info: write_cib_contents: 	Wrote version 0.8.0 of the CIB to disk (digest: 560938b85dddf6a8da1def7aa23e6520)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/52, version=0.8.3)
Oct 15 15:16:21 [14869] vm1        cib:    debug: write_cib_contents: 	Wrote digest 560938b85dddf6a8da1def7aa23e6520 to disk
Oct 15 15:16:21 [14869] vm1        cib:     info: retrieveCib: 	Reading cluster configuration from: /var/lib/pacemaker/cib/cib.3CBBuO (digest: /var/lib/pacemaker/cib/cib.2ucdvm)
Oct 15 15:16:21 [14869] vm1        cib:    debug: cib_process_xpath: 	Processing cib_delete op for //node_state[@uname='vm1']/lrm (/cib/status/node_state[3]/lrm)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm1']/lrm: OK (rc=0, origin=local/crmd/53, version=0.8.4)
Oct 15 15:16:21 [14869] vm1        cib:    debug: write_cib_contents: 	Activating /var/lib/pacemaker/cib/cib.3CBBuO
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/54, version=0.8.5)
Oct 15 15:16:21 [14869] vm1        cib:    debug: cib_process_xpath: 	Processing cib_delete op for //node_state[@uname='vm2']/lrm (/cib/status/node_state[1]/lrm)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_delete operation for section //node_state[@uname='vm2']/lrm: OK (rc=0, origin=local/crmd/55, version=0.8.6)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm3']/lrm": OK (rc=0)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: join_update_complete_callback: 	Join update 52 complete
Oct 15 15:16:21 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by join_update_complete_callback in state: S_FINALIZE_JOIN
Oct 15 15:16:21 [14874] vm1       crmd:    debug: check_join_state: 	join-4 complete: join_update_complete_callback
Oct 15 15:16:21 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_FINALIZED: [ state=S_FINALIZE_JOIN cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 15 15:16:21 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_dc_join_final: 	Ensuring DC, quorum and node attributes are up-to-date
Oct 15 15:16:21 [14874] vm1       crmd:    debug: attrd_update_delegate: 	Sent update: (null)=(null) for localhost
Oct 15 15:16:21 [14874] vm1       crmd:    debug: crm_update_quorum: 	Updating quorum status to true (call=59)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: do_te_invoke: 	Cancelling the transition: inactive
Oct 15 15:16:21 [14874] vm1       crmd:     info: abort_transition_graph: 	do_te_invoke:151 - Triggered transition abort (complete=1) : Peer Cancelled
Oct 15 15:16:21 [14874] vm1       crmd:    debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=67
Oct 15 15:16:21 [14874] vm1       crmd:    debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm1']/lrm": OK (rc=0)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: join_update_complete_callback: 	Join update 54 complete
Oct 15 15:16:21 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Oct 15 15:16:21 [14874] vm1       crmd:    debug: erase_xpath_callback: 	Deletion of "//node_state[@uname='vm2']/lrm": OK (rc=0)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.6 -> 0.8.7 (S_POLICY_ENGINE)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/56, version=0.8.7)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section nodes: OK (rc=0, origin=local/crmd/57, version=0.8.7)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/58, version=0.8.7)
Oct 15 15:16:21 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section cib: OK (rc=0, origin=local/crmd/59, version=0.8.7)
Oct 15 15:16:21 [14874] vm1       crmd:    debug: join_update_complete_callback: 	Join update 56 complete
Oct 15 15:16:21 [14874] vm1       crmd:    debug: check_join_state: 	Invoked by join_update_complete_callback in state: S_POLICY_ENGINE
Oct 15 15:16:22 [14869] vm1        cib:     info: crm_client_new: 	Connecting 0x144af10 for uid=0 gid=0 pid=14924 id=d1e89bc1-2943-4921-afc4-d159e63cd0a8
Oct 15 15:16:22 [14869] vm1        cib:    debug: handle_new_connection: 	IPC credentials authenticated (14869-14924-14)
Oct 15 15:16:22 [14869] vm1        cib:    debug: qb_ipcs_shm_connect: 	connecting to client [14924]
Oct 15 15:16:22 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:22 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:22 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:22 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/cibadmin/2, version=0.8.7)
Oct 15 15:16:22 [14869] vm1        cib:    debug: qb_ipcs_dispatch_connection_request: 	HUP conn (14869-14924-14)
Oct 15 15:16:22 [14869] vm1        cib:    debug: qb_ipcs_disconnect: 	qb_ipcs_disconnect(14869-14924-14) state:2
Oct 15 15:16:22 [14869] vm1        cib:     info: crm_client_destroy: 	Destroying 0 events
Oct 15 15:16:22 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-response-14869-14924-14-header
Oct 15 15:16:22 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-event-14869-14924-14-header
Oct 15 15:16:22 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_rw-request-14869-14924-14-header
Oct 15 15:16:22 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	result = 0
Oct 15 15:16:22 [14870] vm1 stonith-ng:   notice: stonith_device_register: 	Added 'f1' to the device list (1 active devices)
Oct 15 15:16:22 [14870] vm1 stonith-ng:     info: cib_device_update: 	Device f2 is allowed on vm1: score=0
Oct 15 15:16:22 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action metadata for agent fence_legacy (target=(null))
Oct 15 15:16:22 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:16:22 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	result = 0
Oct 15 15:16:23 [14870] vm1 stonith-ng:   notice: stonith_device_register: 	Added 'f2' to the device list (2 active devices)
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: log_cib_diff: 	Config update: Local-only Change: 0.7.1
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib admin_epoch="0" epoch="6" num_updates="1"/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="7" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+    <configuration>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+      <crm_config>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.1.11-0.284.6a5e863.git.el6-6a5e863"/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+        </cluster_property_set>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+      </crm_config>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+    </configuration>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: log_cib_diff: 	Config update: Local-only Change: 0.8.1
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib admin_epoch="0" epoch="7" num_updates="1"/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="1" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+    <configuration>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+      <crm_config>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+        <cluster_property_set id="cib-bootstrap-options">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++         <nvpair id="cib-bootstrap-options-cluster-infrastructure" name="cluster-infrastructure" value="corosync"/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+        </cluster_property_set>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+      </crm_config>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+    </configuration>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.1
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.2 d1e2fc96ad37e6292ffc069783bf4faf
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-  <cib num_updates="1">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-    <status>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-      <node_state id="3232261519">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	--       <lrm id="3232261519">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	--         <lrm_resources/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	--       </lrm>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-      </node_state>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-    </status>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-  </cib>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++ <cib epoch="8" num_updates="2" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.2
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.3 ca1157e6b9ca3da29c444681b53f89aa
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="2"/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="3" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++       <lrm id="3232261519">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++         <lrm_resources/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++       </lrm>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.3
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.4 24d7d3eee5aa52fad526d6f5c1d8d346
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-  <cib num_updates="3">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-    <status>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-      <node_state id="3232261517">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	--       <lrm id="3232261517">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	--         <lrm_resources/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	--       </lrm>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-      </node_state>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-    </status>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-  </cib>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++ <cib epoch="8" num_updates="4" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.4
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.5 d7982443d0e8a4f5a988a7401cb45e60
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="4"/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="5" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++       <lrm id="3232261517">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++         <lrm_resources/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++       </lrm>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.5
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.6 0b457b89d9f80a572fc4864dff070680
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-  <cib num_updates="5">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-    <status>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-      <node_state id="3232261518">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	--       <lrm id="3232261518">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	--         <lrm_resources/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	--       </lrm>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-      </node_state>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-    </status>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-  </cib>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++ <cib epoch="8" num_updates="6" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517"/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.6
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.7 c5bab0ce2e3fb53a82b2c68e9ed07811
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="6"/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="7" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_lrm_query_internal" join="member" expected="member">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++       <lrm id="3232261518">
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++         <lrm_resources/>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	++       </lrm>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:23 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:23 [14874] vm1       crmd:     info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Oct 15 15:16:23 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_POLICY_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Oct 15 15:16:23 [14874] vm1       crmd:    debug: do_pe_invoke: 	Query 60: Requesting the current CIB: S_POLICY_ENGINE
Oct 15 15:16:23 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/60, version=0.8.7)
Oct 15 15:16:23 [14874] vm1       crmd:    debug: do_pe_invoke_callback: 	Invoking the PE: query=60, ref=pe_calc-dc-1381817783-33, seq=12, quorate=1
Oct 15 15:16:23 [14873] vm1    pengine:    debug: unpack_config: 	STONITH timeout: 60000
Oct 15 15:16:23 [14873] vm1    pengine:    debug: unpack_config: 	STONITH of failed nodes is enabled
Oct 15 15:16:23 [14873] vm1    pengine:    debug: unpack_config: 	Stop all active resources: false
Oct 15 15:16:23 [14873] vm1    pengine:    debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Oct 15 15:16:23 [14873] vm1    pengine:    debug: unpack_config: 	Default stickiness: 0
Oct 15 15:16:23 [14873] vm1    pengine:    debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Oct 15 15:16:23 [14873] vm1    pengine:    debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 15 15:16:23 [14873] vm1    pengine:    debug: unpack_domains: 	Unpacking domains
Oct 15 15:16:23 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm2 is active
Oct 15 15:16:23 [14873] vm1    pengine:     info: determine_online_status: 	Node vm2 is online
Oct 15 15:16:23 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm3 is active
Oct 15 15:16:23 [14873] vm1    pengine:     info: determine_online_status: 	Node vm3 is online
Oct 15 15:16:23 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm1 is active
Oct 15 15:16:23 [14873] vm1    pengine:     info: determine_online_status: 	Node vm1 is online
Oct 15 15:16:23 [14873] vm1    pengine:     info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	Stopped 
Oct 15 15:16:23 [14873] vm1    pengine:     info: group_print: 	 Resource Group: gStonith3
Oct 15 15:16:23 [14873] vm1    pengine:     info: native_print: 	     f1	(stonith:external/libvirt):	Stopped 
Oct 15 15:16:23 [14873] vm1    pengine:     info: native_print: 	     f2	(stonith:external/ssh):	Stopped 
Oct 15 15:16:23 [14873] vm1    pengine:    debug: group_rsc_location: 	Processing rsc_location l2-rule-0 for gStonith3
Oct 15 15:16:23 [14873] vm1    pengine:    debug: group_rsc_location: 	Processing rsc_location l2-rule for gStonith3
Oct 15 15:16:23 [14873] vm1    pengine:    debug: native_assign_node: 	Assigning vm3 to pDummy
Oct 15 15:16:23 [14873] vm1    pengine:    debug: native_assign_node: 	Assigning vm1 to f1
Oct 15 15:16:23 [14873] vm1    pengine:    debug: native_assign_node: 	Assigning vm1 to f2
Oct 15 15:16:23 [14873] vm1    pengine:    debug: native_create_probe: 	Probing pDummy on vm1 (Stopped)
Oct 15 15:16:23 [14873] vm1    pengine:    debug: native_create_probe: 	Probing f1 on vm1 (Stopped)
Oct 15 15:16:23 [14873] vm1    pengine:    debug: native_create_probe: 	Probing f2 on vm1 (Stopped)
Oct 15 15:16:23 [14873] vm1    pengine:    debug: native_create_probe: 	Probing pDummy on vm2 (Stopped)
Oct 15 15:16:23 [14873] vm1    pengine:    debug: native_create_probe: 	Probing f1 on vm2 (Stopped)
Oct 15 15:16:23 [14873] vm1    pengine:    debug: native_create_probe: 	Probing f2 on vm2 (Stopped)
Oct 15 15:16:23 [14873] vm1    pengine:    debug: native_create_probe: 	Probing pDummy on vm3 (Stopped)
Oct 15 15:16:23 [14873] vm1    pengine:    debug: native_create_probe: 	Probing f1 on vm3 (Stopped)
Oct 15 15:16:23 [14873] vm1    pengine:    debug: native_create_probe: 	Probing f2 on vm3 (Stopped)
Oct 15 15:16:23 [14873] vm1    pengine:     info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm3
Oct 15 15:16:23 [14873] vm1    pengine:     info: RecurringOp: 	 Start recurring monitor (3600s) for f1 on vm1
Oct 15 15:16:23 [14873] vm1    pengine:     info: RecurringOp: 	 Start recurring monitor (3600s) for f2 on vm1
Oct 15 15:16:23 [14873] vm1    pengine:   notice: LogActions: 	Start   pDummy	(vm3)
Oct 15 15:16:23 [14873] vm1    pengine:   notice: LogActions: 	Start   f1	(vm1)
Oct 15 15:16:23 [14873] vm1    pengine:   notice: LogActions: 	Start   f2	(vm1)
Oct 15 15:16:23 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Oct 15 15:16:23 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 15 15:16:24 [14874] vm1       crmd:    debug: unpack_graph: 	Unpacked transition 1: 21 actions in 21 synapses
Oct 15 15:16:24 [14874] vm1       crmd:     info: do_te_invoke: 	Processing graph 1 (ref=pe_calc-dc-1381817783-33) derived from /var/lib/pacemaker/pengine/pe-input-1.bz2
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 12: monitor pDummy_monitor_0 on vm3
Oct 15 15:16:24 [14873] vm1    pengine:   notice: process_pe_message: 	Calculated Transition 1: /var/lib/pacemaker/pengine/pe-input-1.bz2
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 8: monitor pDummy_monitor_0 on vm2
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 4: monitor pDummy_monitor_0 on vm1 (local)
Oct 15 15:16:24 [14871] vm1       lrmd:     info: process_lrmd_get_rsc_info: 	Resource 'pDummy' not found (0 active resources)
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=0, reply=0, notify=0, exit=4201864
Oct 15 15:16:24 [14871] vm1       lrmd:     info: process_lrmd_rsc_register: 	Added 'pDummy' to the rsc list (1 active resources)
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_register operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=0, reply=1, notify=1, exit=4201864
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=0, reply=0, notify=0, exit=4201864
Oct 15 15:16:24 [14874] vm1       crmd:     info: do_lrm_rsc_op: 	Performing key=4:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed op=pDummy_monitor_0
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=5, reply=1, notify=0, exit=4201864
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: log_execute: 	executing - rsc:pDummy action:monitor call_id:5
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_pseudo_action: 	Pseudo action 21 fired and confirmed
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 13: monitor f1_monitor_0 on vm3
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 9: monitor f1_monitor_0 on vm2
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 5: monitor f1_monitor_0 on vm1 (local)
Oct 15 15:16:24 [14871] vm1       lrmd:     info: process_lrmd_get_rsc_info: 	Resource 'f1' not found (1 active resources)
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=0, reply=0, notify=0, exit=4201864
Oct 15 15:16:24 [14871] vm1       lrmd:     info: process_lrmd_rsc_register: 	Added 'f1' to the rsc list (2 active resources)
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_register operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=0, reply=1, notify=1, exit=4201864
Oct 15 15:16:24 [14874] vm1       crmd:     info: do_lrm_rsc_op: 	Performing key=5:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed op=f1_monitor_0
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=0, reply=0, notify=0, exit=4201864
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=9, reply=1, notify=0, exit=4201864
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 14: monitor f2_monitor_0 on vm3
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 10: monitor f2_monitor_0 on vm2
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 6: monitor f2_monitor_0 on vm1 (local)
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: log_execute: 	executing - rsc:f1 action:monitor call_id:9
Oct 15 15:16:24 [14870] vm1 stonith-ng:     info: crm_client_new: 	Connecting 0xd8c2a0 for uid=0 gid=0 pid=14871 id=8326641a-5ab4-4fc9-971f-95fa8e280da8
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: handle_new_connection: 	IPC credentials authenticated (14870-14871-10)
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: qb_ipcs_shm_connect: 	connecting to client [14871]
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: qb_rb_open_2: 	shm size:131085; real_size:135168; rb->word_size:33792
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing register 1 from lrmd.14871 (               0)
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: stonith_api_signon: 	Connection to STONITH successful
Oct 15 15:16:24 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed register from lrmd.14871: OK (0)
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_notify 2 from lrmd.14871 (               0)
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: handle_request: 	Setting st_notify_disconnect callbacks for lrmd.14871 (8326641a-5ab4-4fc9-971f-95fa8e280da8): ON
Oct 15 15:16:24 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_notify from lrmd.14871: OK (0)
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: log_finished: 	finished - rsc:f1 action:monitor call_id:9  exit-code:7 exec-time:10ms queue-time:1ms
Oct 15 15:16:24 [14871] vm1       lrmd:     info: process_lrmd_get_rsc_info: 	Resource 'f2' not found (2 active resources)
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=0, reply=0, notify=0, exit=4201864
Oct 15 15:16:24 [14871] vm1       lrmd:     info: process_lrmd_rsc_register: 	Added 'f2' to the rsc list (3 active resources)
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_register operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=0, reply=1, notify=1, exit=4201864
Oct 15 15:16:24 [14874] vm1       crmd:     info: do_lrm_rsc_op: 	Performing key=6:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed op=f2_monitor_0
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_info operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=0, reply=0, notify=0, exit=4201864
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=13, reply=1, notify=0, exit=4201864
Oct 15 15:16:24 [14874] vm1       crmd:    debug: run_graph: 	Transition 1 (Complete=0, Pending=9, Fired=10, Skipped=0, Incomplete=11, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Oct 15 15:16:24 [14874] vm1       crmd:    debug: create_operation_update: 	do_update_resource: Updating resource f1 after monitor op complete (interval=0)
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: log_execute: 	executing - rsc:f2 action:monitor call_id:13
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: log_finished: 	finished - rsc:f2 action:monitor call_id:13  exit-code:7 exec-time:0ms queue-time:2ms
Dummy(pDummy)[14927]:	2013/10/15_15:16:24 DEBUG: pDummy monitor : 7
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: operation_finished: 	pDummy_monitor_0:14927 - exited with rc=7
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: operation_finished: 	pDummy_monitor_0:14927:stderr [ -- empty -- ]
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: operation_finished: 	pDummy_monitor_0:14927:stdout [ -- empty -- ]
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: log_finished: 	finished - rsc:pDummy action:monitor call_id:5 pid:14927 exit-code:7 exec-time:87ms queue-time:0ms
Oct 15 15:16:24 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm2/crmd/13, version=0.8.8)
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.7
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.8 de6d9baeb243bf5e0da724dc7b145736
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="7"/>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="8" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261518">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           <lrm_resource id="f1" type="external/libvirt" class="stonith">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="f1_last_0" operation_key="f1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="9:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;9:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1381817784" last-rc-change="1381817784" exec-time="4" queue-time="0" op-digest="28866033b977ab11a8954be4e
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           </lrm_resource>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:24 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm3/crmd/13, version=0.8.9)
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.8
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.9 a14c81077354bd26a618dfa1ad6351d0
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="8"/>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="9" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261519">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           <lrm_resource id="f1" type="external/libvirt" class="stonith">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="f1_last_0" operation_key="f1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="13:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;13:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1381817784" last-rc-change="1381817784" exec-time="5" queue-time="0" op-digest="28866033b977ab11a8954be
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           </lrm_resource>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:24 [14874] vm1       crmd:     info: process_lrm_event: 	LRM operation f1_monitor_0 (call=9, rc=7, cib-update=61, confirmed=true) not running
Oct 15 15:16:24 [14874] vm1       crmd:    debug: update_history_cache: 	Updating history for 'f1' with monitor op
Oct 15 15:16:24 [14874] vm1       crmd:    debug: create_operation_update: 	do_update_resource: Updating resource f2 after monitor op complete (interval=0)
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.9
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.10 917c1e4c2ed01e235107abd3911958e7
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="9"/>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="10" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261517">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           <lrm_resource id="f1" type="external/libvirt" class="stonith">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="f1_last_0" operation_key="f1_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="5:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;5:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="9" rc-code="7" op-status="0" interval="0" last-run="1381817784" last-rc-change="1381817784" exec-time="10" queue-time="1" op-digest="28866033b977ab11a8954be4
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           </lrm_resource>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:24 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/61, version=0.8.10)
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.10
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.11 5492d658e0d6be6c7c304dfc3162cd7b
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="10"/>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="11" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261519">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           <lrm_resource id="f2" type="external/ssh" class="stonith">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="f2_last_0" operation_key="f2_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="14:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;14:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="13" rc-code="7" op-status="0" interval="0" last-run="1381817784" last-rc-change="1381817784" exec-time="0" queue-time="1" op-digest="efc9e41f336b1cf0d8474a
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           </lrm_resource>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:24 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm3/crmd/14, version=0.8.11)
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.11
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.12 7847f4911aab9f8fbda03c527462b83c
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="11"/>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="12" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261518">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           <lrm_resource id="f2" type="external/ssh" class="stonith">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="f2_last_0" operation_key="f2_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="10:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;10:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="13" rc-code="7" op-status="0" interval="0" last-run="1381817784" last-rc-change="1381817784" exec-time="0" queue-time="0" op-digest="efc9e41f336b1cf0d8474a
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           </lrm_resource>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:24 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm2/crmd/14, version=0.8.12)
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.12
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.13 db066c5246d0107290e5d3f56ed07486
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="12"/>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="13" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261519">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="12:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;12:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1381817784" last-rc-change="1381817784" exec-time="60" queue-time="0" op-digest="f2317cad3d54ce
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           </lrm_resource>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:24 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm3/crmd/15, version=0.8.13)
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.13
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.14 97a36cd5a845d6d6adb981fcaa6b6ee7
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="13"/>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="14" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261518" uname="vm2" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261518">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="8:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;8:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1381817784" last-rc-change="1381817784" exec-time="54" queue-time="0" op-digest="f2317cad3d54cec5
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           </lrm_resource>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:24 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm2/crmd/15, version=0.8.14)
Oct 15 15:16:24 [14874] vm1       crmd:     info: process_lrm_event: 	LRM operation f2_monitor_0 (call=13, rc=7, cib-update=62, confirmed=true) not running
Oct 15 15:16:24 [14874] vm1       crmd:    debug: update_history_cache: 	Updating history for 'f2' with monitor op
Oct 15 15:16:24 [14874] vm1       crmd:    debug: create_operation_update: 	do_update_resource: Updating resource pDummy after monitor op complete (interval=0)
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.14
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.15 1524ade4c92f59a5ec6743cf9288d7a3
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="14"/>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="15" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261517">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           <lrm_resource id="f2" type="external/ssh" class="stonith">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="f2_last_0" operation_key="f2_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="6:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;6:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="13" rc-code="7" op-status="0" interval="0" last-run="1381817784" last-rc-change="1381817784" exec-time="0" queue-time="2" op-digest="efc9e41f336b1cf0d8474ae7
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           </lrm_resource>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:24 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/62, version=0.8.15)
Oct 15 15:16:24 [14874] vm1       crmd:     info: services_os_action_execute: 	Managed Dummy_meta-data_0 process 14961 exited with rc=0
Oct 15 15:16:24 [14874] vm1       crmd:    debug: get_rsc_restart_list: 	Attr state is not reloadable
Oct 15 15:16:24 [14874] vm1       crmd:    debug: get_rsc_restart_list: 	Attr op_sleep is not reloadable
Oct 15 15:16:24 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/63, version=0.8.16)
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.15
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.16 cc763a5c6523ae0b1e3809c88fc538ee
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="15"/>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="16" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261517">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_monitor_0" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="4:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;4:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="5" rc-code="7" op-status="0" interval="0" last-run="1381817784" last-rc-change="1381817784" exec-time="87" queue-time="0" op-digest="f2317cad3d54cec5
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	++           </lrm_resource>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:24 [14874] vm1       crmd:   notice: process_lrm_event: 	LRM operation pDummy_monitor_0 (call=5, rc=7, cib-update=63, confirmed=true) not running
Oct 15 15:16:24 [14874] vm1       crmd:    debug: update_history_cache: 	Updating history for 'pDummy' with monitor op
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.7 -> 0.8.8 (S_TRANSITION_ENGINE)
Oct 15 15:16:24 [14874] vm1       crmd:     info: match_graph_event: 	Action f1_monitor_0 (9) confirmed on vm2 (rc=0)
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.8 -> 0.8.9 (S_TRANSITION_ENGINE)
Oct 15 15:16:24 [14874] vm1       crmd:     info: match_graph_event: 	Action f1_monitor_0 (13) confirmed on vm3 (rc=0)
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.9 -> 0.8.10 (S_TRANSITION_ENGINE)
Oct 15 15:16:24 [14874] vm1       crmd:     info: match_graph_event: 	Action f1_monitor_0 (5) confirmed on vm1 (rc=0)
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.10 -> 0.8.11 (S_TRANSITION_ENGINE)
Oct 15 15:16:24 [14874] vm1       crmd:     info: match_graph_event: 	Action f2_monitor_0 (14) confirmed on vm3 (rc=0)
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.11 -> 0.8.12 (S_TRANSITION_ENGINE)
Oct 15 15:16:24 [14874] vm1       crmd:     info: match_graph_event: 	Action f2_monitor_0 (10) confirmed on vm2 (rc=0)
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.12 -> 0.8.13 (S_TRANSITION_ENGINE)
Oct 15 15:16:24 [14874] vm1       crmd:     info: match_graph_event: 	Action pDummy_monitor_0 (12) confirmed on vm3 (rc=0)
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.13 -> 0.8.14 (S_TRANSITION_ENGINE)
Oct 15 15:16:24 [14874] vm1       crmd:     info: match_graph_event: 	Action pDummy_monitor_0 (8) confirmed on vm2 (rc=0)
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.14 -> 0.8.15 (S_TRANSITION_ENGINE)
Oct 15 15:16:24 [14874] vm1       crmd:     info: match_graph_event: 	Action f2_monitor_0 (6) confirmed on vm1 (rc=0)
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.15 -> 0.8.16 (S_TRANSITION_ENGINE)
Oct 15 15:16:24 [14874] vm1       crmd:     info: match_graph_event: 	Action pDummy_monitor_0 (4) confirmed on vm1 (rc=0)
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 11: probe_complete probe_complete on vm3 - no waiting
Oct 15 15:16:24 [14874] vm1       crmd:     info: te_rsc_command: 	Action 11 confirmed - no wait
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 7: probe_complete probe_complete on vm2 - no waiting
Oct 15 15:16:24 [14874] vm1       crmd:     info: te_rsc_command: 	Action 7 confirmed - no wait
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 3: probe_complete probe_complete on vm1 (local) - no waiting
Oct 15 15:16:24 [14872] vm1      attrd:     info: attrd_client_message: 	Broadcasting probe_complete[vm1] = true (writer)
Oct 15 15:16:24 [14874] vm1       crmd:    debug: attrd_update_delegate: 	Sent update: probe_complete=true for vm1
Oct 15 15:16:24 [14874] vm1       crmd:     info: te_rsc_command: 	Action 3 confirmed - no wait
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_pseudo_action: 	Pseudo action 2 fired and confirmed
Oct 15 15:16:24 [14874] vm1       crmd:    debug: run_graph: 	Transition 1 (Complete=10, Pending=0, Fired=4, Skipped=0, Incomplete=7, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 15: start pDummy_start_0 on vm3
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 17: start f1_start_0 on vm1 (local)
Oct 15 15:16:24 [14874] vm1       crmd:    debug: do_lrm_rsc_op: 	Stopped 0 recurring operations in preparation for f1_start_0
Oct 15 15:16:24 [14874] vm1       crmd:     info: do_lrm_rsc_op: 	Performing key=17:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed op=f1_start_0
Oct 15 15:16:24 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=14, reply=1, notify=0, exit=4201864
Oct 15 15:16:24 [14871] vm1       lrmd:     info: log_execute: 	executing - rsc:f1 action:start call_id:14
Oct 15 15:16:24 [14874] vm1       crmd:    debug: run_graph: 	Transition 1 (Complete=14, Pending=2, Fired=2, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_device_register 3 from lrmd.14871 (            1000)
Oct 15 15:16:24 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action metadata for agent fence_legacy (target=(null))
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:16:24 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.16 -> 0.8.17 (S_TRANSITION_ENGINE)
Oct 15 15:16:24 [14874] vm1       crmd:     info: match_graph_event: 	Action pDummy_start_0 (15) confirmed on vm3 (rc=0)
Oct 15 15:16:24 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 16: monitor pDummy_monitor_10000 on vm3
Oct 15 15:16:24 [14874] vm1       crmd:    debug: run_graph: 	Transition 1 (Complete=15, Pending=2, Fired=1, Skipped=0, Incomplete=4, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Oct 15 15:16:24 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm3/crmd/16, version=0.8.17)
Oct 15 15:16:24 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm3/crmd/17, version=0.8.18)
Oct 15 15:16:24 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.17 -> 0.8.18 (S_TRANSITION_ENGINE)
Oct 15 15:16:24 [14874] vm1       crmd:     info: match_graph_event: 	Action pDummy_monitor_10000 (16) confirmed on vm3 (rc=0)
Oct 15 15:16:24 [14874] vm1       crmd:    debug: run_graph: 	Transition 1 (Complete=16, Pending=1, Fired=0, Skipped=0, Incomplete=4, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	result = 0
Oct 15 15:16:25 [14870] vm1 stonith-ng:   notice: stonith_device_register: 	Device 'f1' already existed in device list (2 active devices)
Oct 15 15:16:25 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_device_register from lrmd.14871: OK (0)
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.16
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.17 4c7c945f1e610774b2271cf20cfd42dc
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-  <cib num_updates="16">
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-    <status>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-      <node_state id="3232261519">
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-        <lrm id="3232261519">
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-          <lrm_resources>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-            <lrm_resource id="pDummy">
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	--             <lrm_rsc_op operation_key="pDummy_monitor_0" operation="monitor" transition-key="12:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;12:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="5" rc-code="7" exec-time="60" queue-time="0" id="pDummy_last_0"/>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-            </lrm_resource>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-          </lrm_resources>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-        </lrm>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-      </node_state>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-    </status>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-  </cib>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="17" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261519">
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+            <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_0" operation_key="pDummy_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="15:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:0;15:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="14" rc-code="0" op-status="0" interval="0" last-run="1381817784" last-rc-change="1381817784" exec-time="54" queue-time="1" op-digest="f2317cad3d54cec5d
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+            </lrm_resource>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.17
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.18 ffd5fe2e454bbe2cc56a9f62b57e25e6
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="17"/>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="18" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261519">
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+            <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="pDummy_monitor_10000" operation_key="pDummy_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="16:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:0;16:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="15" rc-code="0" op-status="0" interval="10000" last-rc-change="1381817784" exec-time="48" queue-time="0" op-digest="5ce203b19bbe022929c2
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+            </lrm_resource>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_execute 4 from lrmd.14871 (               0)
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: schedule_stonith_command: 	Scheduling monitor on f1 for 8326641a-5ab4-4fc9-971f-95fa8e280da8 (timeout=60s)
Oct 15 15:16:25 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_execute from lrmd.14871: Operation now in progress (-115)
Oct 15 15:16:25 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action monitor for agent fence_legacy (target=(null))
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:16:25 [14870] vm1 stonith-ng:    debug: stonith_device_execute: 	Operation monitor on f1 now running with pid=14966, timeout=60s
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: stonith_action_async_done: 	Child process 14966 performing action 'monitor' exited with rc 0
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: log_operation: 	Operation 'monitor' [14966] for device 'f1' returned: 0 (OK)
Oct 15 15:16:26 [14870] vm1 stonith-ng:     info: log_operation: 	f1:14966 [ Performing: stonith -t external/libvirt -S ]
Oct 15 15:16:26 [14870] vm1 stonith-ng:     info: log_operation: 	f1:14966 [ success:  0 ]
Oct 15 15:16:26 [14871] vm1       lrmd:     info: log_finished: 	finished - rsc:f1 action:start call_id:14  exit-code:0 exec-time:2339ms queue-time:0ms
Oct 15 15:16:26 [14874] vm1       crmd:    debug: create_operation_update: 	do_update_resource: Updating resource f1 after start op complete (interval=0)
Oct 15 15:16:26 [14874] vm1       crmd:   notice: process_lrm_event: 	LRM operation f1_start_0 (call=14, rc=0, cib-update=64, confirmed=true) ok
Oct 15 15:16:26 [14874] vm1       crmd:    debug: update_history_cache: 	Updating history for 'f1' with start op
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.18
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.19 5c5cd9c89db47200df51f99d28f4eff5
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	-  <cib num_updates="18">
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	-    <status>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	-      <node_state id="3232261517">
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	-        <lrm id="3232261517">
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	-          <lrm_resources>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	-            <lrm_resource id="f1">
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	--             <lrm_rsc_op operation_key="f1_monitor_0" operation="monitor" transition-key="5:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;5:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="9" rc-code="7" exec-time="10" queue-time="1" id="f1_last_0"/>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	-            </lrm_resource>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	-          </lrm_resources>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	-        </lrm>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	-      </node_state>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	-    </status>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	-  </cib>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="19" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261517">
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	+            <lrm_resource id="f1" type="external/libvirt" class="stonith">
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="f1_last_0" operation_key="f1_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="17:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:0;17:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="14" rc-code="0" op-status="0" interval="0" last-run="1381817784" last-rc-change="1381817784" exec-time="2339" queue-time="0" op-digest="28866033b977ab11a8954be
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	+            </lrm_resource>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:26 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.18 -> 0.8.19 (S_TRANSITION_ENGINE)
Oct 15 15:16:26 [14874] vm1       crmd:     info: match_graph_event: 	Action f1_start_0 (17) confirmed on vm1 (rc=0)
Oct 15 15:16:26 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 18: monitor f1_monitor_3600000 on vm1 (local)
Oct 15 15:16:26 [14874] vm1       crmd:     info: do_lrm_rsc_op: 	Performing key=18:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed op=f1_monitor_3600000
Oct 15 15:16:26 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/64, version=0.8.19)
Oct 15 15:16:26 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=15, reply=1, notify=0, exit=4201864
Oct 15 15:16:26 [14871] vm1       lrmd:    debug: log_execute: 	executing - rsc:f1 action:monitor call_id:15
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_execute 5 from lrmd.14871 (               0)
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: schedule_stonith_command: 	Scheduling monitor on f1 for 8326641a-5ab4-4fc9-971f-95fa8e280da8 (timeout=60s)
Oct 15 15:16:26 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 19: start f2_start_0 on vm1 (local)
Oct 15 15:16:26 [14874] vm1       crmd:    debug: do_lrm_rsc_op: 	Stopped 0 recurring operations in preparation for f2_start_0
Oct 15 15:16:26 [14874] vm1       crmd:     info: do_lrm_rsc_op: 	Performing key=19:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed op=f2_start_0
Oct 15 15:16:26 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_execute from lrmd.14871: Operation now in progress (-115)
Oct 15 15:16:26 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=16, reply=1, notify=0, exit=4201864
Oct 15 15:16:26 [14874] vm1       crmd:    debug: run_graph: 	Transition 1 (Complete=17, Pending=2, Fired=2, Skipped=0, Incomplete=2, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Oct 15 15:16:26 [14871] vm1       lrmd:     info: log_execute: 	executing - rsc:f2 action:start call_id:16
Oct 15 15:16:26 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action monitor for agent fence_legacy (target=(null))
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: stonith_device_execute: 	Operation monitor on f1 now running with pid=14982, timeout=60s
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_device_register 6 from lrmd.14871 (            1000)
Oct 15 15:16:26 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action metadata for agent fence_legacy (target=(null))
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	result = 0
Oct 15 15:16:26 [14870] vm1 stonith-ng:   notice: stonith_device_register: 	Device 'f2' already existed in device list (2 active devices)
Oct 15 15:16:26 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_device_register from lrmd.14871: OK (0)
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_execute 7 from lrmd.14871 (               0)
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: schedule_stonith_command: 	Scheduling monitor on f2 for 8326641a-5ab4-4fc9-971f-95fa8e280da8 (timeout=60s)
Oct 15 15:16:26 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_execute from lrmd.14871: Operation now in progress (-115)
Oct 15 15:16:26 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action monitor for agent fence_legacy (target=(null))
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:16:26 [14870] vm1 stonith-ng:    debug: stonith_device_execute: 	Operation monitor on f2 now running with pid=14985, timeout=60s
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: stonith_action_async_done: 	Child process 14985 performing action 'monitor' exited with rc 0
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: log_operation: 	Operation 'monitor' [14985] for device 'f2' returned: 0 (OK)
Oct 15 15:16:27 [14870] vm1 stonith-ng:     info: log_operation: 	f2:14985 [ Performing: stonith -t external/ssh -S ]
Oct 15 15:16:27 [14870] vm1 stonith-ng:     info: log_operation: 	f2:14985 [ success:  0 ]
Oct 15 15:16:27 [14871] vm1       lrmd:     info: log_finished: 	finished - rsc:f2 action:start call_id:16  exit-code:0 exec-time:1215ms queue-time:1ms
Oct 15 15:16:27 [14874] vm1       crmd:    debug: create_operation_update: 	do_update_resource: Updating resource f2 after start op complete (interval=0)
Oct 15 15:16:27 [14874] vm1       crmd:   notice: process_lrm_event: 	LRM operation f2_start_0 (call=16, rc=0, cib-update=65, confirmed=true) ok
Oct 15 15:16:27 [14874] vm1       crmd:    debug: update_history_cache: 	Updating history for 'f2' with start op
Oct 15 15:16:27 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/65, version=0.8.20)
Oct 15 15:16:27 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.19 -> 0.8.20 (S_TRANSITION_ENGINE)
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.19
Oct 15 15:16:27 [14874] vm1       crmd:     info: match_graph_event: 	Action f2_start_0 (19) confirmed on vm1 (rc=0)
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.20 0390945a03c5071c7992bbce1ac378a0
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	-  <cib num_updates="19">
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	-    <status>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	-      <node_state id="3232261517">
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	-        <lrm id="3232261517">
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	-          <lrm_resources>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	-            <lrm_resource id="f2">
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	--             <lrm_rsc_op operation_key="f2_monitor_0" operation="monitor" transition-key="6:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;6:1:7:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="13" rc-code="7" last-run="1381817784" last-rc-change="1381817784" exec-time="0" queue-time="2" id="f2_last_0"/>
Oct 15 15:16:27 [14874] vm1       crmd:    debug: te_pseudo_action: 	Pseudo action 22 fired and confirmed
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	-            </lrm_resource>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	-          </lrm_resources>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	-        </lrm>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	-      </node_state>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	-    </status>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	-  </cib>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="20" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:27 [14874] vm1       crmd:   notice: te_rsc_command: 	Initiating action 20: monitor f2_monitor_3600000 on vm1 (local)
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261517">
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	+            <lrm_resource id="f2" type="external/ssh" class="stonith">
Oct 15 15:16:27 [14874] vm1       crmd:     info: do_lrm_rsc_op: 	Performing key=20:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed op=f2_monitor_3600000
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="f2_last_0" operation_key="f2_start_0" operation="start" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="19:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:0;19:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="16" rc-code="0" op-status="0" interval="0" last-run="1381817786" last-rc-change="1381817786" exec-time="1215" queue-time="1" op-digest="efc9e41f336b1cf0d8474ae
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	+            </lrm_resource>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:27 [14871] vm1       lrmd:    debug: process_lrmd_message: 	Processed lrmd_rsc_exec operation from 0245b4d8-632b-498b-abda-91d20df4709f: rc=17, reply=1, notify=0, exit=4201864
Oct 15 15:16:27 [14871] vm1       lrmd:    debug: log_execute: 	executing - rsc:f2 action:monitor call_id:17
Oct 15 15:16:27 [14874] vm1       crmd:    debug: run_graph: 	Transition 1 (Complete=18, Pending=2, Fired=2, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Oct 15 15:16:27 [14874] vm1       crmd:    debug: run_graph: 	Transition 1 (Complete=19, Pending=2, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_execute 8 from lrmd.14871 (               0)
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: schedule_stonith_command: 	Scheduling monitor on f2 for 8326641a-5ab4-4fc9-971f-95fa8e280da8 (timeout=60s)
Oct 15 15:16:27 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_execute from lrmd.14871: Operation now in progress (-115)
Oct 15 15:16:27 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action monitor for agent fence_legacy (target=(null))
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:16:27 [14870] vm1 stonith-ng:    debug: stonith_device_execute: 	Operation monitor on f2 now running with pid=15011, timeout=60s
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: stonith_action_async_done: 	Child process 14982 performing action 'monitor' exited with rc 0
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: log_operation: 	Operation 'monitor' [14982] for device 'f1' returned: 0 (OK)
Oct 15 15:16:28 [14870] vm1 stonith-ng:     info: log_operation: 	f1:14982 [ Performing: stonith -t external/libvirt -S ]
Oct 15 15:16:28 [14870] vm1 stonith-ng:     info: log_operation: 	f1:14982 [ success:  0 ]
Oct 15 15:16:28 [14871] vm1       lrmd:    debug: log_finished: 	finished - rsc:f1 action:monitor call_id:15  exit-code:0 exec-time:1383ms queue-time:0ms
Oct 15 15:16:28 [14874] vm1       crmd:    debug: create_operation_update: 	do_update_resource: Updating resource f1 after monitor op complete (interval=3600000)
Oct 15 15:16:28 [14874] vm1       crmd:   notice: process_lrm_event: 	LRM operation f1_monitor_3600000 (call=15, rc=0, cib-update=66, confirmed=false) ok
Oct 15 15:16:28 [14874] vm1       crmd:    debug: update_history_cache: 	Updating history for 'f1' with monitor op
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.20
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.21 cd010b699d6adcacec052c3611bdfb8b
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="20"/>
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="21" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261517">
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	+            <lrm_resource id="f1" type="external/libvirt" class="stonith">
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="f1_monitor_3600000" operation_key="f1_monitor_3600000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="18:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:0;18:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="15" rc-code="0" op-status="0" interval="3600000" last-rc-change="1381817786" exec-time="1383" queue-time="0" op-digest="671ca9559ec67e22788f
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	+            </lrm_resource>
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:28 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:28 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.20 -> 0.8.21 (S_TRANSITION_ENGINE)
Oct 15 15:16:28 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/66, version=0.8.21)
Oct 15 15:16:28 [14874] vm1       crmd:     info: match_graph_event: 	Action f1_monitor_3600000 (18) confirmed on vm1 (rc=0)
Oct 15 15:16:28 [14874] vm1       crmd:    debug: run_graph: 	Transition 1 (Complete=20, Pending=1, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): In-progress
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: stonith_action_async_done: 	Child process 15011 performing action 'monitor' exited with rc 0
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: log_operation: 	Operation 'monitor' [15011] for device 'f2' returned: 0 (OK)
Oct 15 15:16:29 [14870] vm1 stonith-ng:     info: log_operation: 	f2:15011 [ Performing: stonith -t external/ssh -S ]
Oct 15 15:16:29 [14870] vm1 stonith-ng:     info: log_operation: 	f2:15011 [ success:  0 ]
Oct 15 15:16:29 [14871] vm1       lrmd:    debug: log_finished: 	finished - rsc:f2 action:monitor call_id:17  exit-code:0 exec-time:1137ms queue-time:1ms
Oct 15 15:16:29 [14874] vm1       crmd:    debug: create_operation_update: 	do_update_resource: Updating resource f2 after monitor op complete (interval=3600000)
Oct 15 15:16:29 [14874] vm1       crmd:   notice: process_lrm_event: 	LRM operation f2_monitor_3600000 (call=17, rc=0, cib-update=67, confirmed=false) ok
Oct 15 15:16:29 [14874] vm1       crmd:    debug: update_history_cache: 	Updating history for 'f2' with monitor op
Oct 15 15:16:29 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/crmd/67, version=0.8.22)
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.21
Oct 15 15:16:29 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.21 -> 0.8.22 (S_TRANSITION_ENGINE)
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.22 48312f0b1278bd3b454d5dfe6d743e2f
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="21"/>
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="22" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261517" uname="vm1" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261517">
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	+            <lrm_resource id="f2" type="external/ssh" class="stonith">
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="f2_monitor_3600000" operation_key="f2_monitor_3600000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="20:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:0;20:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="17" rc-code="0" op-status="0" interval="3600000" last-rc-change="1381817787" exec-time="1137" queue-time="1" op-digest="ddb637f11ba4277e2354
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	+            </lrm_resource>
Oct 15 15:16:29 [14874] vm1       crmd:     info: match_graph_event: 	Action f2_monitor_3600000 (20) confirmed on vm1 (rc=0)
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:16:29 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:16:29 [14874] vm1       crmd:   notice: run_graph: 	Transition 1 (Complete=21, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-1.bz2): Complete
Oct 15 15:16:29 [14874] vm1       crmd:    debug: te_graph_trigger: 	Transition 1 is now complete
Oct 15 15:16:29 [14874] vm1       crmd:    debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Oct 15 15:16:29 [14874] vm1       crmd:    debug: notify_crmd: 	Transition 1 status: done - <null>
Oct 15 15:16:29 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_TE_SUCCESS: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 15 15:16:29 [14874] vm1       crmd:     info: do_log: 	FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Oct 15 15:16:29 [14874] vm1       crmd:   notice: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
Oct 15 15:16:29 [14874] vm1       crmd:    debug: do_state_transition: 	Starting PEngine Recheck Timer
Oct 15 15:16:29 [14874] vm1       crmd:    debug: crm_timer_start: 	Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=91
Oct 15 15:16:37 [14869] vm1        cib:     info: crm_client_new: 	Connecting 0x143a6c0 for uid=0 gid=0 pid=15029 id=384ff994-a623-4790-8ad1-a66967c08079
Oct 15 15:16:37 [14869] vm1        cib:    debug: handle_new_connection: 	IPC credentials authenticated (14869-15029-14)
Oct 15 15:16:37 [14869] vm1        cib:    debug: qb_ipcs_shm_connect: 	connecting to client [15029]
Oct 15 15:16:37 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:37 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:37 [14869] vm1        cib:    debug: qb_rb_open_2: 	shm size:524301; real_size:528384; rb->word_size:132096
Oct 15 15:16:37 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crm_mon/2, version=0.8.22)
Oct 15 15:16:37 [14869] vm1        cib:    debug: qb_ipcs_dispatch_connection_request: 	HUP conn (14869-15029-14)
Oct 15 15:16:37 [14869] vm1        cib:    debug: qb_ipcs_disconnect: 	qb_ipcs_disconnect(14869-15029-14) state:2
Oct 15 15:16:37 [14869] vm1        cib:     info: crm_client_destroy: 	Destroying 0 events
Oct 15 15:16:37 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_ro-response-14869-15029-14-header
Oct 15 15:16:37 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_ro-event-14869-15029-14-header
Oct 15 15:16:37 [14869] vm1        cib:    debug: qb_rb_close: 	Free'ing ringbuffer: /dev/shm/qb-cib_ro-request-14869-15029-14-header
Oct 15 15:17:04 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.22 -> 0.8.23 (S_IDLE)
Oct 15 15:17:04 [14874] vm1       crmd:     info: abort_transition_graph: 	process_graph_event:583 - Triggered transition abort (complete=1, node=vm3, tag=lrm_rsc_op, id=pDummy_last_failure_0, magic=0:7;16:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed, cib=0.8.23) : Inactive graph
Oct 15 15:17:04 [14874] vm1       crmd:    debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=92
Oct 15 15:17:04 [14874] vm1       crmd:  warning: update_failcount: 	Updating failcount for pDummy on vm3 after failed monitor: rc=7 (update=value++, time=1381817824)
Oct 15 15:17:04 [14874] vm1       crmd:    debug: attrd_update_delegate: 	Sent update: fail-count-pDummy=value++ for vm3
Oct 15 15:17:04 [14874] vm1       crmd:    debug: attrd_update_delegate: 	Sent update: last-failure-pDummy=1381817824 for vm3
Oct 15 15:17:04 [14874] vm1       crmd:     info: process_graph_event: 	Detected action (1.16) pDummy_monitor_10000.15=not running: failed
Oct 15 15:17:04 [14872] vm1      attrd:     info: attrd_client_message: 	Expanded fail-count-pDummy=value++ to 1
Oct 15 15:17:04 [14872] vm1      attrd:     info: attrd_client_message: 	Broadcasting fail-count-pDummy[vm3] = 1 (writer)
Oct 15 15:17:04 [14872] vm1      attrd:     info: attrd_client_message: 	Broadcasting last-failure-pDummy[vm3] = 1381817824 (writer)
Oct 15 15:17:04 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=vm3/crmd/18, version=0.8.23)
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.22
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.23 3f0c5edee31ff093f977b532afdf2ed0
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="22"/>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="23" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+        <lrm id="3232261519">
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+          <lrm_resources>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+            <lrm_resource id="pDummy" type="Dummy" class="ocf" provider="pacemaker">
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	++             <lrm_rsc_op id="pDummy_last_failure_0" operation_key="pDummy_monitor_10000" operation="monitor" crm-debug-origin="do_update_resource" crm_feature_set="3.0.7" transition-key="16:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" transition-magic="0:7;16:1:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed" call-id="15" rc-code="7" op-status="0" interval="10000" last-rc-change="1381817824" exec-time="0" queue-time="0" op-digest="5ce203b19bbe022929c2
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+            </lrm_resource>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+          </lrm_resources>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+        </lrm>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:17:04 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm3[fail-count-pDummy]=1 (3232261519 3232261519 3232261519 vm3)
Oct 15 15:17:04 [14872] vm1      attrd:   notice: write_attribute: 	Sent update 9 with 1 changes for fail-count-pDummy, id=<n/a>, set=(null)
Oct 15 15:17:04 [14872] vm1      attrd:    debug: write_attribute: 	Update: vm3[last-failure-pDummy]=1381817824 (3232261519 3232261519 3232261519 vm3)
Oct 15 15:17:04 [14872] vm1      attrd:   notice: write_attribute: 	Sent update 10 with 1 changes for last-failure-pDummy, id=<n/a>, set=(null)
Oct 15 15:17:04 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.23 -> 0.8.24 (S_IDLE)
Oct 15 15:17:04 [14874] vm1       crmd:     info: abort_transition_graph: 	te_update_diff:172 - Triggered transition abort (complete=1, node=vm3, tag=nvpair, id=status-3232261519-fail-count-pDummy, name=fail-count-pDummy, value=1, magic=NA, cib=0.8.24) : Transient attribute: update
Oct 15 15:17:04 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause   <nvpair id="status-3232261519-fail-count-pDummy" name="fail-count-pDummy" value="1" __crm_diff_marker__="added:top"/>
Oct 15 15:17:04 [14874] vm1       crmd:    debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=93
Oct 15 15:17:04 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/9, version=0.8.24)
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.23
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.24 30925430a4fa6939d2650d75a367ed48
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="23"/>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="24" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+        <transient_attributes id="3232261519">
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+          <instance_attributes id="status-3232261519">
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	++           <nvpair id="status-3232261519-fail-count-pDummy" name="fail-count-pDummy" value="1"/>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+          </instance_attributes>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+        </transient_attributes>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:17:04 [14874] vm1       crmd:    debug: te_update_diff: 	Processing diff (cib_modify): 0.8.24 -> 0.8.25 (S_IDLE)
Oct 15 15:17:04 [14874] vm1       crmd:     info: abort_transition_graph: 	te_update_diff:172 - Triggered transition abort (complete=1, node=vm3, tag=nvpair, id=status-3232261519-last-failure-pDummy, name=last-failure-pDummy, value=1381817824, magic=NA, cib=0.8.25) : Transient attribute: update
Oct 15 15:17:04 [14874] vm1       crmd:    debug: abort_transition_graph: 	Cause   <nvpair id="status-3232261519-last-failure-pDummy" name="last-failure-pDummy" value="1381817824" __crm_diff_marker__="added:top"/>
Oct 15 15:17:04 [14874] vm1       crmd:    debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=94
Oct 15 15:17:04 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_modify operation for section status: OK (rc=0, origin=local/attrd/10, version=0.8.25)
Oct 15 15:17:04 [14872] vm1      attrd:     info: attrd_cib_callback: 	Update 9 for fail-count-pDummy: OK (0)
Oct 15 15:17:04 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 9 for fail-count-pDummy[vm3]=1: OK (0)
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: --- 0.8.24
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	Diff: +++ 0.8.25 0ba8596743b1d36c4fc8a958c6d9a677
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	-- <cib num_updates="24"/>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+  <cib epoch="8" num_updates="25" admin_epoch="0" validate-with="pacemaker-1.2" crm_feature_set="3.0.7" cib-last-written="Tue Oct 15 15:16:21 2013" update-origin="vm1" update-client="crmd" have-quorum="1" dc-uuid="3232261517">
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+    <status>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+      <node_state id="3232261519" uname="vm3" in_ccm="true" crmd="online" crm-debug-origin="do_update_resource" join="member" expected="member">
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+        <transient_attributes id="3232261519">
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+          <instance_attributes id="status-3232261519">
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	++           <nvpair id="status-3232261519-last-failure-pDummy" name="last-failure-pDummy" value="1381817824"/>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+          </instance_attributes>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+        </transient_attributes>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+      </node_state>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+    </status>
Oct 15 15:17:04 [14870] vm1 stonith-ng:    debug: Config update: 	+  </cib>
Oct 15 15:17:04 [14872] vm1      attrd:     info: attrd_cib_callback: 	Update 10 for last-failure-pDummy: OK (0)
Oct 15 15:17:04 [14872] vm1      attrd:   notice: attrd_cib_callback: 	Update 10 for last-failure-pDummy[vm3]=1381817824: OK (0)
Oct 15 15:17:06 [14874] vm1       crmd:     info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Oct 15 15:17:06 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_IDLE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Oct 15 15:17:06 [14874] vm1       crmd:   notice: do_state_transition: 	State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Oct 15 15:17:06 [14874] vm1       crmd:     info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Oct 15 15:17:06 [14874] vm1       crmd:    debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Oct 15 15:17:06 [14874] vm1       crmd:    debug: do_pe_invoke: 	Query 68: Requesting the current CIB: S_POLICY_ENGINE
Oct 15 15:17:06 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/68, version=0.8.25)
Oct 15 15:17:06 [14874] vm1       crmd:    debug: do_pe_invoke_callback: 	Invoking the PE: query=68, ref=pe_calc-dc-1381817826-52, seq=12, quorate=1
Oct 15 15:17:06 [14873] vm1    pengine:    debug: unpack_config: 	STONITH timeout: 60000
Oct 15 15:17:06 [14873] vm1    pengine:    debug: unpack_config: 	STONITH of failed nodes is enabled
Oct 15 15:17:06 [14873] vm1    pengine:    debug: unpack_config: 	Stop all active resources: false
Oct 15 15:17:06 [14873] vm1    pengine:    debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Oct 15 15:17:06 [14873] vm1    pengine:    debug: unpack_config: 	Default stickiness: 0
Oct 15 15:17:06 [14873] vm1    pengine:    debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Oct 15 15:17:06 [14873] vm1    pengine:    debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 15 15:17:06 [14873] vm1    pengine:    debug: unpack_domains: 	Unpacking domains
Oct 15 15:17:06 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm2 is active
Oct 15 15:17:06 [14873] vm1    pengine:     info: determine_online_status: 	Node vm2 is online
Oct 15 15:17:06 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm3 is active
Oct 15 15:17:06 [14873] vm1    pengine:     info: determine_online_status: 	Node vm3 is online
Oct 15 15:17:06 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm1 is active
Oct 15 15:17:06 [14873] vm1    pengine:     info: determine_online_status: 	Node vm1 is online
Oct 15 15:17:06 [14873] vm1    pengine:    debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Oct 15 15:17:06 [14873] vm1    pengine:  warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Oct 15 15:17:06 [14873] vm1    pengine:  warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Oct 15 15:17:06 [14873] vm1    pengine:     info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Oct 15 15:17:06 [14873] vm1    pengine:     info: group_print: 	 Resource Group: gStonith3
Oct 15 15:17:06 [14873] vm1    pengine:     info: native_print: 	     f1	(stonith:external/libvirt):	Started vm1 
Oct 15 15:17:06 [14873] vm1    pengine:     info: native_print: 	     f2	(stonith:external/ssh):	Started vm1 
Oct 15 15:17:06 [14873] vm1    pengine:    debug: group_rsc_location: 	Processing rsc_location l2-rule-0 for gStonith3
Oct 15 15:17:06 [14873] vm1    pengine:    debug: group_rsc_location: 	Processing rsc_location l2-rule for gStonith3
Oct 15 15:17:06 [14873] vm1    pengine:    debug: common_apply_stickiness: 	Resource f1: preferring current location (node=vm1, weight=1000000)
Oct 15 15:17:06 [14873] vm1    pengine:    debug: common_apply_stickiness: 	Resource f2: preferring current location (node=vm1, weight=1000000)
Oct 15 15:17:06 [14873] vm1    pengine:    debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Oct 15 15:17:06 [14873] vm1    pengine:     info: get_failcount_full: 	pDummy has failed 1 times on vm3
Oct 15 15:17:06 [14873] vm1    pengine:  warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Oct 15 15:17:06 [14873] vm1    pengine:    debug: native_assign_node: 	Assigning vm1 to pDummy
Oct 15 15:17:06 [14873] vm1    pengine:    debug: native_assign_node: 	Assigning vm1 to f1
Oct 15 15:17:06 [14873] vm1    pengine:    debug: native_assign_node: 	Assigning vm1 to f2
Oct 15 15:17:06 [14873] vm1    pengine:     info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm1
Oct 15 15:17:06 [14873] vm1    pengine:  warning: stage6: 	Scheduling Node vm3 for STONITH
Oct 15 15:17:06 [14873] vm1    pengine:   notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Oct 15 15:17:06 [14873] vm1    pengine:   notice: LogActions: 	Recover pDummy	(Started vm3 -> vm1)
Oct 15 15:17:06 [14873] vm1    pengine:     info: LogActions: 	Leave   f1	(Started vm1)
Oct 15 15:17:06 [14873] vm1    pengine:     info: LogActions: 	Leave   f2	(Started vm1)
Oct 15 15:17:06 [14873] vm1    pengine:    debug: get_last_sequence: 	Series file /var/lib/pacemaker/pengine/pe-warn.last does not exist
Oct 15 15:17:06 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Oct 15 15:17:06 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 15 15:17:06 [14874] vm1       crmd:    debug: unpack_graph: 	Unpacked transition 2: 6 actions in 6 synapses
Oct 15 15:17:06 [14874] vm1       crmd:     info: do_te_invoke: 	Processing graph 2 (ref=pe_calc-dc-1381817826-52) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Oct 15 15:17:06 [14873] vm1    pengine:  warning: process_pe_message: 	Calculated Transition 2: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Oct 15 15:17:06 [14874] vm1       crmd:   notice: te_fence_node: 	Executing reboot fencing operation (20) on vm3 (timeout=60000)
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_fence 103 from crmd.14874 (               0)
Oct 15 15:17:06 [14870] vm1 stonith-ng:   notice: handle_request: 	Client crmd.14874.4eb0ff33 wants to fence (reboot) 'vm3' with device '(any)'
Oct 15 15:17:06 [14870] vm1 stonith-ng:   notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: c9b3e4f1-269f-48a8-ba27-c7573dead8e2 (0)
Oct 15 15:17:06 [14874] vm1       crmd:    debug: run_graph: 	Transition 2 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_fence from crmd.14874: Operation now in progress (-115)
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: create_remote_stonith_op: 	c9b3e4f1-269f-48a8-ba27-c7573dead8e2 already exists
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="c9b3e4f1-269f-48a8-ba27-c7573dead8e2" st_op="st_query" st_callid="2" st_callopt="0" st_remote_op="c9b3e4f1-269f-48a8-ba27-c7573dead8e2" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="4eb0ff33-6154-4b90-9801-dc2d005b765a" st_clientname="crmd.14874" st_timeout="60" src="vm1"/>
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: get_capable_devices: 	Searching through 2 devices to see what is capable of action (reboot) for target vm3
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: schedule_stonith_command: 	Scheduling list on f1 for stonith-ng (timeout=30s)
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: schedule_stonith_command: 	Scheduling list on f2 for stonith-ng (timeout=30s)
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_query from vm1: OK (0)
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action list for agent fence_legacy (target=(null))
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: stonith_device_execute: 	Operation list on f1 now running with pid=15031, timeout=30s
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action list for agent fence_legacy (target=(null))
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: stonith_device_execute: 	Operation list on f2 now running with pid=15032, timeout=30s
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: process_remote_stonith_query: 	Ignoring reply from vm3, hosts are not permitted to commit suicide
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: process_remote_stonith_query: 	Query result 2 of 3 from vm2 (2 devices)
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: call_remote_stonith: 	Total remote op timeout set to 120 for fencing of node vm3 for crmd.14874.c9b3e4f1
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with f1 for crmd.14874 (72s)
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: stonith_action_async_done: 	Child process 15032 performing action 'list' exited with rc 0
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: dynamic_list_search_cb: 	Refreshing port list for f2
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: stonith_action_async_done: 	Child process 15031 performing action 'list' exited with rc 0
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: dynamic_list_search_cb: 	Refreshing port list for f1
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: search_devices_record_result: 	Finished Search. 2 devices can perform action (reboot) on node vm3
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: stonith_query_capable_device_cb: 	Found 2 matching devices for 'vm3'
Oct 15 15:17:06 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (2 devices)
Oct 15 15:17:06 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Oct 15 15:17:16 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Oct 15 15:17:16 [14870] vm1 stonith-ng:   notice: process_remote_stonith_exec: 	Call to f1 for vm3 on behalf of crmd.14874@vm1: Generic Pacemaker error (-201)
Oct 15 15:17:16 [14870] vm1 stonith-ng:     info: call_remote_stonith: 	Requesting that vm1 perform op reboot vm3 with f2 for crmd.14874 (72s)
Oct 15 15:17:16 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Oct 15 15:17:16 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Oct 15 15:17:16 [14870] vm1 stonith-ng:    debug: schedule_stonith_command: 	Scheduling reboot on f2 for remote peer vm1 with op id (c9b3e4f1-269f-48a8-ba27-c7573dead8e2) (timeout=60s)
Oct 15 15:17:16 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Oct 15 15:17:16 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Oct 15 15:17:16 [14870] vm1 stonith-ng:    debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Oct 15 15:17:16 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:17:16 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:17:16 [14870] vm1 stonith-ng:    debug: stonith_device_execute: 	Operation reboot for node vm3 on f2 now running with pid=15076, timeout=60s
Oct 15 15:17:35 vm1 stonith: [15077]: CRIT: external_reset_req: 'ssh reset' for host vm3 failed with rc 1
Oct 15 15:17:35 [14870] vm1 stonith-ng:    debug: stonith_action_async_done: 	Child process 15076 performing action 'reboot' exited with rc 1
Oct 15 15:17:35 [14870] vm1 stonith-ng:     info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (1) allowed
Oct 15 15:17:35 [14870] vm1 stonith-ng:    error: log_operation: 	Operation 'reboot' [15076] (call 2 from crmd.14874) for host 'vm3' with device 'f2' returned: -201 (Generic Pacemaker error)
Oct 15 15:17:35 [14870] vm1 stonith-ng:  warning: log_operation: 	f2:15076 [ Performing: stonith -t external/ssh -T reset vm3 ]
Oct 15 15:17:35 [14870] vm1 stonith-ng:  warning: log_operation: 	f2:15076 [ failed: vm3 5 ]
Oct 15 15:17:35 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_fence reply 0 from vm1 (               0)
Oct 15 15:17:35 [14870] vm1 stonith-ng:   notice: process_remote_stonith_exec: 	Call to f2 for vm3 on behalf of crmd.14874@vm1: Generic Pacemaker error (-201)
Oct 15 15:17:35 [14870] vm1 stonith-ng:   notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.14874@vm1.c9b3e4f1 failed
Oct 15 15:17:35 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_fence reply from vm1: OK (0)
Oct 15 15:17:35 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Oct 15 15:17:35 [14870] vm1 stonith-ng:    debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.14874@c9b3e4f1-269f-48a8-ba27-c7573dead8e2.vm1: Generic Pacemaker error (-201)
Oct 15 15:17:35 [14870] vm1 stonith-ng:    error: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.14874@vm1.c9b3e4f1: Generic Pacemaker error
Oct 15 15:17:35 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Oct 15 15:17:35 [14874] vm1       crmd:   notice: tengine_stonith_callback: 	Stonith operation 2/20:2:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed: Generic Pacemaker error (-201)
Oct 15 15:17:35 [14874] vm1       crmd:   notice: tengine_stonith_callback: 	Stonith operation 2 for vm3 failed (Generic Pacemaker error): aborting transition.
Oct 15 15:17:35 [14874] vm1       crmd:     info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Oct 15 15:17:35 [14874] vm1       crmd:    debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Oct 15 15:17:35 [14874] vm1       crmd:    debug: update_abort_priority: 	Abort action done superceeded by restart
Oct 15 15:17:35 [14874] vm1       crmd:   notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=c9b3e4f1-269f-48a8-ba27-c7573dead8e2) by client crmd.14874
Oct 15 15:17:35 [14874] vm1       crmd:   notice: run_graph: 	Transition 2 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Oct 15 15:17:35 [14874] vm1       crmd:    debug: te_graph_trigger: 	Transition 2 is now complete
Oct 15 15:17:35 [14874] vm1       crmd:    debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Oct 15 15:17:35 [14874] vm1       crmd:    debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=98
Oct 15 15:17:35 [14874] vm1       crmd:    debug: notify_crmd: 	Transition 2 status: restart - Stonith failed
Oct 15 15:17:37 [14874] vm1       crmd:     info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Oct 15 15:17:37 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Oct 15 15:17:37 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Oct 15 15:17:37 [14874] vm1       crmd:     info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Oct 15 15:17:37 [14874] vm1       crmd:    debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Oct 15 15:17:37 [14874] vm1       crmd:    debug: do_pe_invoke: 	Query 69: Requesting the current CIB: S_POLICY_ENGINE
Oct 15 15:17:37 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/69, version=0.8.25)
Oct 15 15:17:37 [14874] vm1       crmd:    debug: do_pe_invoke_callback: 	Invoking the PE: query=69, ref=pe_calc-dc-1381817857-53, seq=12, quorate=1
Oct 15 15:17:37 [14873] vm1    pengine:     info: process_pe_message: 	Input has not changed since last time, not saving to disk
Oct 15 15:17:37 [14873] vm1    pengine:    debug: unpack_config: 	STONITH timeout: 60000
Oct 15 15:17:37 [14873] vm1    pengine:    debug: unpack_config: 	STONITH of failed nodes is enabled
Oct 15 15:17:37 [14873] vm1    pengine:    debug: unpack_config: 	Stop all active resources: false
Oct 15 15:17:37 [14873] vm1    pengine:    debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Oct 15 15:17:37 [14873] vm1    pengine:    debug: unpack_config: 	Default stickiness: 0
Oct 15 15:17:37 [14873] vm1    pengine:    debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Oct 15 15:17:37 [14873] vm1    pengine:    debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 15 15:17:37 [14873] vm1    pengine:    debug: unpack_domains: 	Unpacking domains
Oct 15 15:17:37 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm2 is active
Oct 15 15:17:37 [14873] vm1    pengine:     info: determine_online_status: 	Node vm2 is online
Oct 15 15:17:37 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm3 is active
Oct 15 15:17:37 [14873] vm1    pengine:     info: determine_online_status: 	Node vm3 is online
Oct 15 15:17:37 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm1 is active
Oct 15 15:17:37 [14873] vm1    pengine:     info: determine_online_status: 	Node vm1 is online
Oct 15 15:17:37 [14873] vm1    pengine:    debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Oct 15 15:17:37 [14873] vm1    pengine:  warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Oct 15 15:17:37 [14873] vm1    pengine:  warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Oct 15 15:17:37 [14873] vm1    pengine:     info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Oct 15 15:17:37 [14873] vm1    pengine:     info: group_print: 	 Resource Group: gStonith3
Oct 15 15:17:37 [14873] vm1    pengine:     info: native_print: 	     f1	(stonith:external/libvirt):	Started vm1 
Oct 15 15:17:37 [14873] vm1    pengine:     info: native_print: 	     f2	(stonith:external/ssh):	Started vm1 
Oct 15 15:17:37 [14873] vm1    pengine:    debug: group_rsc_location: 	Processing rsc_location l2-rule-0 for gStonith3
Oct 15 15:17:37 [14873] vm1    pengine:    debug: group_rsc_location: 	Processing rsc_location l2-rule for gStonith3
Oct 15 15:17:37 [14873] vm1    pengine:    debug: common_apply_stickiness: 	Resource f1: preferring current location (node=vm1, weight=1000000)
Oct 15 15:17:37 [14873] vm1    pengine:    debug: common_apply_stickiness: 	Resource f2: preferring current location (node=vm1, weight=1000000)
Oct 15 15:17:37 [14873] vm1    pengine:    debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Oct 15 15:17:37 [14873] vm1    pengine:     info: get_failcount_full: 	pDummy has failed 1 times on vm3
Oct 15 15:17:37 [14873] vm1    pengine:  warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Oct 15 15:17:37 [14873] vm1    pengine:    debug: native_assign_node: 	Assigning vm1 to pDummy
Oct 15 15:17:37 [14873] vm1    pengine:    debug: native_assign_node: 	Assigning vm1 to f1
Oct 15 15:17:37 [14873] vm1    pengine:    debug: native_assign_node: 	Assigning vm1 to f2
Oct 15 15:17:37 [14873] vm1    pengine:     info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm1
Oct 15 15:17:37 [14873] vm1    pengine:  warning: stage6: 	Scheduling Node vm3 for STONITH
Oct 15 15:17:37 [14873] vm1    pengine:   notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Oct 15 15:17:37 [14873] vm1    pengine:   notice: LogActions: 	Recover pDummy	(Started vm3 -> vm1)
Oct 15 15:17:37 [14873] vm1    pengine:     info: LogActions: 	Leave   f1	(Started vm1)
Oct 15 15:17:37 [14873] vm1    pengine:     info: LogActions: 	Leave   f2	(Started vm1)
Oct 15 15:17:37 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Oct 15 15:17:37 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 15 15:17:37 [14874] vm1       crmd:    debug: unpack_graph: 	Unpacked transition 3: 6 actions in 6 synapses
Oct 15 15:17:37 [14874] vm1       crmd:     info: do_te_invoke: 	Processing graph 3 (ref=pe_calc-dc-1381817857-53) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Oct 15 15:17:37 [14874] vm1       crmd:   notice: te_fence_node: 	Executing reboot fencing operation (20) on vm3 (timeout=60000)
Oct 15 15:17:37 [14874] vm1       crmd:    debug: run_graph: 	Transition 3 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Oct 15 15:17:37 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_fence 106 from crmd.14874 (               0)
Oct 15 15:17:37 [14873] vm1    pengine:  warning: process_pe_message: 	Calculated Transition 3: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Oct 15 15:17:37 [14870] vm1 stonith-ng:   notice: handle_request: 	Client crmd.14874.4eb0ff33 wants to fence (reboot) 'vm3' with device '(any)'
Oct 15 15:17:37 [14870] vm1 stonith-ng:   notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: d5bd243d-da15-4098-80ab-c9f1bce3827f (0)
Oct 15 15:17:37 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_fence from crmd.14874: Operation now in progress (-115)
Oct 15 15:17:37 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Oct 15 15:17:37 [14870] vm1 stonith-ng:    debug: create_remote_stonith_op: 	d5bd243d-da15-4098-80ab-c9f1bce3827f already exists
Oct 15 15:17:37 [14870] vm1 stonith-ng:    debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="d5bd243d-da15-4098-80ab-c9f1bce3827f" st_op="st_query" st_callid="3" st_callopt="0" st_remote_op="d5bd243d-da15-4098-80ab-c9f1bce3827f" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="4eb0ff33-6154-4b90-9801-dc2d005b765a" st_clientname="crmd.14874" st_timeout="60" src="vm1"/>
Oct 15 15:17:37 [14870] vm1 stonith-ng:    debug: get_capable_devices: 	Searching through 2 devices to see what is capable of action (reboot) for target vm3
Oct 15 15:17:37 [14870] vm1 stonith-ng:   notice: can_fence_host_with_device: 	f1 can fence vm3: dynamic-list
Oct 15 15:17:37 [14870] vm1 stonith-ng:   notice: can_fence_host_with_device: 	f2 can fence vm3: dynamic-list
Oct 15 15:17:37 [14870] vm1 stonith-ng:    debug: search_devices_record_result: 	Finished Search. 2 devices can perform action (reboot) on node vm3
Oct 15 15:17:37 [14870] vm1 stonith-ng:    debug: stonith_query_capable_device_cb: 	Found 2 matching devices for 'vm3'
Oct 15 15:17:37 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_query from vm1: OK (0)
Oct 15 15:17:37 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Oct 15 15:17:37 [14870] vm1 stonith-ng:     info: process_remote_stonith_query: 	Query result 1 of 3 from vm2 (2 devices)
Oct 15 15:17:37 [14870] vm1 stonith-ng:     info: call_remote_stonith: 	Total remote op timeout set to 120 for fencing of node vm3 for crmd.14874.d5bd243d
Oct 15 15:17:37 [14870] vm1 stonith-ng:     info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with f1 for crmd.14874 (72s)
Oct 15 15:17:37 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Oct 15 15:17:37 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Oct 15 15:17:37 [14870] vm1 stonith-ng:     info: process_remote_stonith_query: 	Ignoring reply from vm3, hosts are not permitted to commit suicide
Oct 15 15:17:37 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Oct 15 15:17:37 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Oct 15 15:17:37 [14870] vm1 stonith-ng:     info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (2 devices)
Oct 15 15:17:37 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Oct 15 15:17:46 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Oct 15 15:17:46 [14870] vm1 stonith-ng:   notice: process_remote_stonith_exec: 	Call to f1 for vm3 on behalf of crmd.14874@vm1: Generic Pacemaker error (-201)
Oct 15 15:17:46 [14870] vm1 stonith-ng:     info: call_remote_stonith: 	Requesting that vm1 perform op reboot vm3 with f2 for crmd.14874 (72s)
Oct 15 15:17:46 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Oct 15 15:17:46 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Oct 15 15:17:46 [14870] vm1 stonith-ng:    debug: schedule_stonith_command: 	Scheduling reboot on f2 for remote peer vm1 with op id (d5bd243d-da15-4098-80ab-c9f1bce3827f) (timeout=60s)
Oct 15 15:17:46 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Oct 15 15:17:46 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Oct 15 15:17:46 [14870] vm1 stonith-ng:    debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Oct 15 15:17:46 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:17:46 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:17:46 [14870] vm1 stonith-ng:    debug: stonith_device_execute: 	Operation reboot for node vm3 on f2 now running with pid=15464, timeout=60s
Oct 15 15:18:06 vm1 stonith: [15465]: CRIT: external_reset_req: 'ssh reset' for host vm3 failed with rc 1
Oct 15 15:18:06 [14870] vm1 stonith-ng:    debug: stonith_action_async_done: 	Child process 15464 performing action 'reboot' exited with rc 1
Oct 15 15:18:06 [14870] vm1 stonith-ng:     info: update_remaining_timeout: 	Attempted to execute agent fence_legacy (reboot) the maximum number of times (1) allowed
Oct 15 15:18:06 [14870] vm1 stonith-ng:    error: log_operation: 	Operation 'reboot' [15464] (call 3 from crmd.14874) for host 'vm3' with device 'f2' returned: -201 (Generic Pacemaker error)
Oct 15 15:18:06 [14870] vm1 stonith-ng:  warning: log_operation: 	f2:15464 [ Performing: stonith -t external/ssh -T reset vm3 ]
Oct 15 15:18:06 [14870] vm1 stonith-ng:  warning: log_operation: 	f2:15464 [ failed: vm3 5 ]
Oct 15 15:18:06 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_fence reply 0 from vm1 (               0)
Oct 15 15:18:06 [14870] vm1 stonith-ng:   notice: process_remote_stonith_exec: 	Call to f2 for vm3 on behalf of crmd.14874@vm1: Generic Pacemaker error (-201)
Oct 15 15:18:06 [14870] vm1 stonith-ng:   notice: stonith_topology_next: 	All fencing options to fence vm3 for crmd.14874@vm1.d5bd243d failed
Oct 15 15:18:06 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_fence reply from vm1: OK (0)
Oct 15 15:18:06 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_notify reply 0 from vm1 (               0)
Oct 15 15:18:06 [14870] vm1 stonith-ng:    debug: process_remote_stonith_exec: 	Marking call to reboot for vm3 on behalf of crmd.14874@d5bd243d-da15-4098-80ab-c9f1bce3827f.vm1: Generic Pacemaker error (-201)
Oct 15 15:18:06 [14870] vm1 stonith-ng:    error: remote_op_done: 	Operation reboot of vm3 by vm1 for crmd.14874@vm1.d5bd243d: Generic Pacemaker error
Oct 15 15:18:06 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_notify reply from vm1: OK (0)
Oct 15 15:18:06 [14874] vm1       crmd:   notice: tengine_stonith_callback: 	Stonith operation 3/20:3:0:cffe5b98-3c92-4ed3-8992-426ef00df4ed: Generic Pacemaker error (-201)
Oct 15 15:18:06 [14874] vm1       crmd:   notice: tengine_stonith_callback: 	Stonith operation 3 for vm3 failed (Generic Pacemaker error): aborting transition.
Oct 15 15:18:06 [14874] vm1       crmd:     info: abort_transition_graph: 	tengine_stonith_callback:463 - Triggered transition abort (complete=0) : Stonith failed
Oct 15 15:18:06 [14874] vm1       crmd:    debug: update_abort_priority: 	Abort priority upgraded from 0 to 1000000
Oct 15 15:18:06 [14874] vm1       crmd:    debug: update_abort_priority: 	Abort action done superceeded by restart
Oct 15 15:18:06 [14874] vm1       crmd:   notice: tengine_stonith_notify: 	Peer vm3 was not terminated (reboot) by vm1 for vm1: Generic Pacemaker error (ref=d5bd243d-da15-4098-80ab-c9f1bce3827f) by client crmd.14874
Oct 15 15:18:06 [14874] vm1       crmd:   notice: run_graph: 	Transition 3 (Complete=1, Pending=0, Fired=0, Skipped=5, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): Stopped
Oct 15 15:18:06 [14874] vm1       crmd:    debug: te_graph_trigger: 	Transition 3 is now complete
Oct 15 15:18:06 [14874] vm1       crmd:    debug: notify_crmd: 	Processing transition completion in state S_TRANSITION_ENGINE
Oct 15 15:18:06 [14874] vm1       crmd:    debug: crm_timer_start: 	Started New Transition Timer (I_PE_CALC:2000ms), src=102
Oct 15 15:18:06 [14874] vm1       crmd:    debug: notify_crmd: 	Transition 3 status: restart - Stonith failed
Oct 15 15:18:08 [14874] vm1       crmd:     info: crm_timer_popped: 	New Transition Timer (I_PE_CALC) just popped (2000ms)
Oct 15 15:18:08 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_TIMER_POPPED origin=crm_timer_popped ]
Oct 15 15:18:08 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Oct 15 15:18:08 [14874] vm1       crmd:     info: do_state_transition: 	Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Oct 15 15:18:08 [14874] vm1       crmd:    debug: do_state_transition: 	All 3 cluster nodes are eligible to run resources.
Oct 15 15:18:08 [14874] vm1       crmd:    debug: do_pe_invoke: 	Query 70: Requesting the current CIB: S_POLICY_ENGINE
Oct 15 15:18:08 [14869] vm1        cib:     info: cib_process_request: 	Completed cib_query operation for section 'all': OK (rc=0, origin=local/crmd/70, version=0.8.25)
Oct 15 15:18:08 [14874] vm1       crmd:    debug: do_pe_invoke_callback: 	Invoking the PE: query=70, ref=pe_calc-dc-1381817888-54, seq=12, quorate=1
Oct 15 15:18:08 [14873] vm1    pengine:     info: process_pe_message: 	Input has not changed since last time, not saving to disk
Oct 15 15:18:08 [14873] vm1    pengine:    debug: unpack_config: 	STONITH timeout: 60000
Oct 15 15:18:08 [14873] vm1    pengine:    debug: unpack_config: 	STONITH of failed nodes is enabled
Oct 15 15:18:08 [14873] vm1    pengine:    debug: unpack_config: 	Stop all active resources: false
Oct 15 15:18:08 [14873] vm1    pengine:    debug: unpack_config: 	Cluster is symmetric - resources can run anywhere by default
Oct 15 15:18:08 [14873] vm1    pengine:    debug: unpack_config: 	Default stickiness: 0
Oct 15 15:18:08 [14873] vm1    pengine:    debug: unpack_config: 	On loss of CCM Quorum: Freeze resources
Oct 15 15:18:08 [14873] vm1    pengine:    debug: unpack_config: 	Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 15 15:18:08 [14873] vm1    pengine:    debug: unpack_domains: 	Unpacking domains
Oct 15 15:18:08 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm2 is active
Oct 15 15:18:08 [14873] vm1    pengine:     info: determine_online_status: 	Node vm2 is online
Oct 15 15:18:08 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm3 is active
Oct 15 15:18:08 [14873] vm1    pengine:     info: determine_online_status: 	Node vm3 is online
Oct 15 15:18:08 [14873] vm1    pengine:     info: determine_online_status_fencing: 	Node vm1 is active
Oct 15 15:18:08 [14873] vm1    pengine:     info: determine_online_status: 	Node vm1 is online
Oct 15 15:18:08 [14873] vm1    pengine:    debug: determine_op_status: 	pDummy_monitor_10000 on vm3 returned 'not running' (7) instead of the expected value: 'ok' (0)
Oct 15 15:18:08 [14873] vm1    pengine:  warning: unpack_rsc_op_failure: 	Processing failed op monitor for pDummy on vm3: not running (7)
Oct 15 15:18:08 [14873] vm1    pengine:  warning: pe_fence_node: 	Node vm3 will be fenced because of resource failure(s)
Oct 15 15:18:08 [14873] vm1    pengine:     info: native_print: 	pDummy	(ocf::pacemaker:Dummy):	FAILED vm3 
Oct 15 15:18:08 [14873] vm1    pengine:     info: group_print: 	 Resource Group: gStonith3
Oct 15 15:18:08 [14873] vm1    pengine:     info: native_print: 	     f1	(stonith:external/libvirt):	Started vm1 
Oct 15 15:18:08 [14873] vm1    pengine:     info: native_print: 	     f2	(stonith:external/ssh):	Started vm1 
Oct 15 15:18:08 [14873] vm1    pengine:    debug: group_rsc_location: 	Processing rsc_location l2-rule-0 for gStonith3
Oct 15 15:18:08 [14873] vm1    pengine:    debug: group_rsc_location: 	Processing rsc_location l2-rule for gStonith3
Oct 15 15:18:08 [14873] vm1    pengine:    debug: common_apply_stickiness: 	Resource f1: preferring current location (node=vm1, weight=1000000)
Oct 15 15:18:08 [14873] vm1    pengine:    debug: common_apply_stickiness: 	Resource f2: preferring current location (node=vm1, weight=1000000)
Oct 15 15:18:08 [14873] vm1    pengine:    debug: common_apply_stickiness: 	Resource pDummy: preferring current location (node=vm3, weight=1000000)
Oct 15 15:18:08 [14873] vm1    pengine:     info: get_failcount_full: 	pDummy has failed 1 times on vm3
Oct 15 15:18:08 [14873] vm1    pengine:  warning: common_apply_stickiness: 	Forcing pDummy away from vm3 after 1 failures (max=1)
Oct 15 15:18:08 [14873] vm1    pengine:    debug: native_assign_node: 	Assigning vm1 to pDummy
Oct 15 15:18:08 [14873] vm1    pengine:    debug: native_assign_node: 	Assigning vm1 to f1
Oct 15 15:18:08 [14873] vm1    pengine:    debug: native_assign_node: 	Assigning vm1 to f2
Oct 15 15:18:08 [14873] vm1    pengine:     info: RecurringOp: 	 Start recurring monitor (10s) for pDummy on vm1
Oct 15 15:18:08 [14873] vm1    pengine:  warning: stage6: 	Scheduling Node vm3 for STONITH
Oct 15 15:18:08 [14873] vm1    pengine:   notice: native_stop_constraints: 	Stop of failed resource pDummy is implicit after vm3 is fenced
Oct 15 15:18:08 [14873] vm1    pengine:   notice: LogActions: 	Recover pDummy	(Started vm3 -> vm1)
Oct 15 15:18:08 [14873] vm1    pengine:     info: LogActions: 	Leave   f1	(Started vm1)
Oct 15 15:18:08 [14873] vm1    pengine:     info: LogActions: 	Leave   f2	(Started vm1)
Oct 15 15:18:08 [14874] vm1       crmd:    debug: s_crmd_fsa: 	Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Oct 15 15:18:08 [14874] vm1       crmd:     info: do_state_transition: 	State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 15 15:18:08 [14874] vm1       crmd:    debug: unpack_graph: 	Unpacked transition 4: 6 actions in 6 synapses
Oct 15 15:18:08 [14874] vm1       crmd:     info: do_te_invoke: 	Processing graph 4 (ref=pe_calc-dc-1381817888-54) derived from /var/lib/pacemaker/pengine/pe-warn-0.bz2
Oct 15 15:18:08 [14874] vm1       crmd:   notice: te_fence_node: 	Executing reboot fencing operation (20) on vm3 (timeout=60000)
Oct 15 15:18:08 [14874] vm1       crmd:    debug: run_graph: 	Transition 4 (Complete=0, Pending=1, Fired=1, Skipped=0, Incomplete=5, Source=/var/lib/pacemaker/pengine/pe-warn-0.bz2): In-progress
Oct 15 15:18:08 [14873] vm1    pengine:  warning: process_pe_message: 	Calculated Transition 4: /var/lib/pacemaker/pengine/pe-warn-0.bz2
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_fence 109 from crmd.14874 (               0)
Oct 15 15:18:08 [14870] vm1 stonith-ng:   notice: handle_request: 	Client crmd.14874.4eb0ff33 wants to fence (reboot) 'vm3' with device '(any)'
Oct 15 15:18:08 [14870] vm1 stonith-ng:   notice: initiate_remote_stonith_op: 	Initiating remote operation reboot for vm3: 61dd2280-87df-4984-9bb7-d573c83bd42f (0)
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_fence from crmd.14874: Operation now in progress (-115)
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_query 0 from vm1 (               0)
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: create_remote_stonith_op: 	61dd2280-87df-4984-9bb7-d573c83bd42f already exists
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: stonith_query: 	Query   <stonith_command __name__="stonith_command" t="stonith-ng" st_async_id="61dd2280-87df-4984-9bb7-d573c83bd42f" st_op="st_query" st_callid="4" st_callopt="0" st_remote_op="61dd2280-87df-4984-9bb7-d573c83bd42f" st_target="vm3" st_device_action="reboot" st_origin="vm1" st_clientid="4eb0ff33-6154-4b90-9801-dc2d005b765a" st_clientname="crmd.14874" st_timeout="60" src="vm1"/>
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: get_capable_devices: 	Searching through 2 devices to see what is capable of action (reboot) for target vm3
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: schedule_stonith_command: 	Scheduling list on f1 for stonith-ng (timeout=30s)
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: schedule_stonith_command: 	Scheduling list on f2 for stonith-ng (timeout=30s)
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_query from vm1: OK (0)
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action list for agent fence_legacy (target=(null))
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: stonith_device_execute: 	Operation list on f1 now running with pid=15511, timeout=30s
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action list for agent fence_legacy (target=(null))
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: stonith_device_execute: 	Operation list on f2 now running with pid=15512, timeout=30s
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_query reply 0 from vm3 (               0)
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: process_remote_stonith_query: 	Ignoring reply from vm3, hosts are not permitted to commit suicide
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_query reply from vm3: OK (0)
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_query reply 0 from vm2 (               0)
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: process_remote_stonith_query: 	Query result 2 of 3 from vm2 (2 devices)
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: call_remote_stonith: 	Total remote op timeout set to 120 for fencing of node vm3 for crmd.14874.61dd2280
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: call_remote_stonith: 	Requesting that vm2 perform op reboot vm3 with f1 for crmd.14874 (72s)
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_query reply from vm2: OK (0)
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: stonith_action_async_done: 	Child process 15512 performing action 'list' exited with rc 0
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: dynamic_list_search_cb: 	Refreshing port list for f2
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: stonith_action_async_done: 	Child process 15511 performing action 'list' exited with rc 0
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: dynamic_list_search_cb: 	Refreshing port list for f1
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: search_devices_record_result: 	Finished Search. 2 devices can perform action (reboot) on node vm3
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: stonith_query_capable_device_cb: 	Found 2 matching devices for 'vm3'
Oct 15 15:18:08 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_query reply 0 from vm1 (               0)
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: process_remote_stonith_query: 	Query result 3 of 3 from vm1 (2 devices)
Oct 15 15:18:08 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_query reply from vm1: OK (0)
Oct 15 15:18:15 [14853] vm1 corosync debug   [QB    ] ipc_shm.c:qb_ipcs_shm_connect:295 connecting to client [15585]
Oct 15 15:18:15 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:18:15 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:18:15 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_open_2:236 shm size:1048589; real_size:1052672; rb->word_size:263168
Oct 15 15:18:15 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_created:272 connection created
Oct 15 15:18:15 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_init_fn:306 lib_init_fn: conn=0x7f0cab0baf70
Oct 15 15:18:15 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_dispatch_connection_request:757 HUP conn (14855-15585-34)
Oct 15 15:18:15 [14853] vm1 corosync debug   [QB    ] ipcs.c:qb_ipcs_disconnect:605 qb_ipcs_disconnect(14855-15585-34) state:2
Oct 15 15:18:15 [14853] vm1 corosync debug   [QB    ] loop_poll_epoll.c:_del:117 epoll_ctl(del): Bad file descriptor (9)
Oct 15 15:18:15 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_closed:417 cs_ipcs_connection_closed() 
Oct 15 15:18:15 [14853] vm1 corosync debug   [CMAP  ] cmap.c:cmap_lib_exit_fn:325 exit_fn for conn=0x7f0cab0baf70
Oct 15 15:18:15 [14853] vm1 corosync debug   [MAIN  ] ipc_glue.c:cs_ipcs_connection_destroyed:390 cs_ipcs_connection_destroyed() 
Oct 15 15:18:15 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-response-14855-15585-34-header
Oct 15 15:18:15 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-event-14855-15585-34-header
Oct 15 15:18:15 [14853] vm1 corosync debug   [QB    ] ringbuffer.c:qb_rb_close:299 Free'ing ringbuffer: /dev/shm/qb-cmap-request-14855-15585-34-header
Oct 15 15:18:17 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_fence reply 0 from vm2 (               0)
Oct 15 15:18:17 [14870] vm1 stonith-ng:   notice: process_remote_stonith_exec: 	Call to f1 for vm3 on behalf of crmd.14874@vm1: Generic Pacemaker error (-201)
Oct 15 15:18:17 [14870] vm1 stonith-ng:     info: call_remote_stonith: 	Requesting that vm1 perform op reboot vm3 with f2 for crmd.14874 (72s)
Oct 15 15:18:17 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_fence reply from vm2: OK (0)
Oct 15 15:18:17 [14870] vm1 stonith-ng:    debug: stonith_command: 	Processing st_fence 0 from vm1 (               0)
Oct 15 15:18:17 [14870] vm1 stonith-ng:    debug: schedule_stonith_command: 	Scheduling reboot on f2 for remote peer vm1 with op id (61dd2280-87df-4984-9bb7-d573c83bd42f) (timeout=60s)
Oct 15 15:18:17 [14870] vm1 stonith-ng:     info: stonith_command: 	Processed st_fence from vm1: Operation now in progress (-115)
Oct 15 15:18:17 [14870] vm1 stonith-ng:     info: stonith_action_create: 	Initiating action reboot for agent fence_legacy (target=vm3)
Oct 15 15:18:17 [14870] vm1 stonith-ng:    debug: make_args: 	Performing reboot action for node 'vm3' as 'port=vm3'
Oct 15 15:18:17 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	forking
Oct 15 15:18:17 [14870] vm1 stonith-ng:    debug: internal_stonith_action_execute: 	sending args
Oct 15 15:18:17 [14870] vm1 stonith-ng:    debug: stonith_device_execute: 	Operation reboot for node vm3 on f2 now running with pid=15624, timeout=60s
Oct 15 15:18:28 [14869] vm1        cib:     info: crm_client_new: 	Connecting 0x14d7ba0 for uid=0 gid=0 pid=17339 id=be5d31fc-c441-4d0e-819a-298e17eca0a7