Nov 09 14:51:06 [944] ip-10-50-3-251 lrmd: notice: operation_finished: ClusterEIP_54.215.143.166_monitor_5000:29139 [ 2013/11/09_14:51:06 INFO: 54.215.143.166 is here ]
Nov 09 14:51:17 [944] ip-10-50-3-251 lrmd: notice: operation_finished: ClusterEIP_54.215.143.166_monitor_5000:29278 [ 2013/11/09_14:51:17 INFO: 54.215.143.166 is here ]
Nov 09 14:51:33 corosync [TOTEM ] A processor failed, forming new configuration.
Nov 09 14:51:38 corosync [CMAN ] quorum lost, blocking activity
Nov 09 14:51:38 corosync [QUORUM] This node is within the non-primary component and will NOT provide any services.
Nov 09 14:51:38 corosync [QUORUM] Members[1]: 2
Nov 09 14:51:38 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: notice: cman_event_callback: Membership 3004136: quorum lost
Nov 09 14:51:38 corosync [CPG ] chosen downlist: sender r(0) ip(10.50.3.251) ; members(old:2 left:1)
Nov 09 14:51:38 corosync [MAIN ] Completed service synchronization, ready to provide service.
Nov 09 14:51:38 [943] ip-10-50-3-251 stonith-ng: info: pcmk_cpg_membership: Left[1.0] stonith-ng.1
Nov 09 14:51:38 [943] ip-10-50-3-251 stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node ip-10-50-3-122[1] - corosync-cpg is now offline
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: pcmk_cpg_membership: Left[1.0] cib.1
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: crm_update_peer_proc: pcmk_cpg_membership: Node ip-10-50-3-122[1] - corosync-cpg is now offline
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: pcmk_cpg_membership: Member[1.0] cib.2
Nov 09 14:51:38 [943] ip-10-50-3-251 stonith-ng: info: pcmk_cpg_membership: Member[1.0] stonith-ng.2
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: notice: crm_update_peer_state: cman_event_callback: Node ip-10-50-3-122[1] - state is now lost
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: peer_update_callback: ip-10-50-3-122 is now lost (was member)
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: warning: check_dead_member: Our DC node (ip-10-50-3-122) left the cluster
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: notice: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=check_dead_member ]
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: pcmk_cpg_membership: Left[1.0] crmd.1
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: crm_update_peer_proc: pcmk_cpg_membership: Node ip-10-50-3-122[1] - corosync-cpg is now offline
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: peer_update_callback: Client ip-10-50-3-122/peer now has status [offline] (DC=)
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: pcmk_cpg_membership: Member[1.0] crmd.2
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: notice: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: do_te_control: Registering TE UUID: 53abcc86-91d9-416b-bd03-1c525c48bf05
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: set_graph_functions: Setting custom graph functions
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: do_dc_takeover: Taking over DC status for this partition
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: cib_process_readwrite: We are now in R/W mode
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/22, version=1.1218.126): OK (rc=0)
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/23, version=1.1218.127): OK (rc=0)
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/25, version=1.1218.128): OK (rc=0)
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: join_make_offer: Making join offers based on membership 3004136
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: update_dc: Set DC to ip-10-50-3-251 (3.0.7)
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/27, version=1.1218.129): OK (rc=0)
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: crm_update_peer_expected: do_dc_join_filter_offer: Node ip-10-50-3-251[2] - expected state is now member
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: do_dc_join_finalize: join-1: Syncing the CIB from ip-10-50-3-251 to the rest of the cluster
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/30, version=1.1218.129): OK (rc=0)
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/31, version=1.1218.130): OK (rc=0)
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: do_dc_join_ack: join-1: Updating node state to member for ip-10-50-3-251
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: erase_status_tag: Deleting xpath: //node_state[@uname='ip-10-50-3-251']/lrm
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='ip-10-50-3-251']/lrm (origin=local/crmd/32, version=1.1218.131): OK (rc=0)
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Nov 09 14:51:38 [947] ip-10-50-3-251 crmd: info: abort_transition_graph: do_te_invoke:156 - Triggered transition abort (complete=1) : Peer Cancelled
Nov 09 14:51:38 [945] ip-10-50-3-251 attrd: notice: attrd_local_callback: Sending full refresh (origin=crmd)
Nov 09 14:51:38 [945] ip-10-50-3-251 attrd: notice: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/34, version=1.1218.133): OK (rc=0)
Nov 09 14:51:38 [942] ip-10-50-3-251 cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/36, version=1.1218.135): OK (rc=0)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: unpack_config: Startup probes: enabled
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: notice: unpack_config: On loss of CCM Quorum: Ignore
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: unpack_domains: Unpacking domains
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: determine_online_status_fencing: Node ip-10-50-3-251 is active
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: determine_online_status: Node ip-10-50-3-251 is online
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: warning: pe_fence_node: Node ip-10-50-3-122 will be fenced because the node is no longer part of the cluster
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: warning: determine_online_status: Node ip-10-50-3-122 is unclean
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: unpack_rsc_op: Operation monitor found resource ClusterEIP_54.215.143.166 active on ip-10-50-3-251
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: find_anonymous_clone: Internally renamed Varnishlog on ip-10-50-3-251 to Varnishlog:0
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: unpack_rsc_op: Operation monitor found resource Varnishlog:0 active on ip-10-50-3-251
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: find_anonymous_clone: Internally renamed Varnishncsa on ip-10-50-3-251 to Varnishncsa:0
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: unpack_rsc_op: Operation monitor found resource Varnishncsa:0 active on ip-10-50-3-251
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: find_anonymous_clone: Internally renamed Varnish on ip-10-50-3-251 to Varnish:0
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: unpack_rsc_op: Operation monitor found resource Varnish:0 active on ip-10-50-3-251
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: find_anonymous_clone: Internally renamed Varnishlog on ip-10-50-3-122 to Varnishlog:1
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: unpack_rsc_op: Operation monitor found resource Varnishlog:1 active on ip-10-50-3-122
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: find_anonymous_clone: Internally renamed Varnish on ip-10-50-3-122 to Varnish:1
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: unpack_rsc_op: Operation monitor found resource Varnish:1 active on ip-10-50-3-122
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: find_anonymous_clone: Internally renamed Varnishncsa on ip-10-50-3-122 to Varnishncsa:1
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: unpack_rsc_op: Operation monitor found resource Varnishncsa:1 active on ip-10-50-3-122
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: crit: get_timet_now: Defaulting to 'now'
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: native_print: ClusterEIP_54.215.143.166 (ocf::pacemaker:EIP): Started ip-10-50-3-251
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: clone_print: Clone Set: EIP-AND-VARNISH-clone [EIP-AND-VARNISH]
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: short_print: Started: [ ip-10-50-3-122 ip-10-50-3-251 ]
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: native_print: ec2-fencing (stonith:fence_ec2): Started ip-10-50-3-122
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: rsc_merge_weights: Varnish:1: Rolling back scores from Varnishlog:1
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: native_color: Resource Varnish:1 cannot run anywhere
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: rsc_merge_weights: Varnishlog:1: Rolling back scores from Varnishncsa:1
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: native_color: Resource Varnishlog:1 cannot run anywhere
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: native_color: Resource Varnishncsa:1 cannot run anywhere
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: warning: custom_action: Action Varnish:1_stop_0 on ip-10-50-3-122 is unrunnable (offline)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: warning: custom_action: Action Varnish:1_stop_0 on ip-10-50-3-122 is unrunnable (offline)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: warning: custom_action: Action Varnishlog:1_stop_0 on ip-10-50-3-122 is unrunnable (offline)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: warning: custom_action: Action Varnishlog:1_stop_0 on ip-10-50-3-122 is unrunnable (offline)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: warning: custom_action: Action Varnishncsa:1_stop_0 on ip-10-50-3-122 is unrunnable (offline)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: warning: custom_action: Action Varnishncsa:1_stop_0 on ip-10-50-3-122 is unrunnable (offline)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: warning: custom_action: Action ec2-fencing_stop_0 on ip-10-50-3-122 is unrunnable (offline)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: warning: stage6: Scheduling Node ip-10-50-3-122 for STONITH
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: native_stop_constraints: Varnish:1_stop_0 is implicit after ip-10-50-3-122 is fenced
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: native_stop_constraints: Varnishlog:1_stop_0 is implicit after ip-10-50-3-122 is fenced
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: native_stop_constraints: Varnishncsa:1_stop_0 is implicit after ip-10-50-3-122 is fenced
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: native_stop_constraints: ec2-fencing_stop_0 is implicit after ip-10-50-3-122 is fenced
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: LogActions: Leave ClusterEIP_54.215.143.166 (Started ip-10-50-3-251)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: LogActions: Leave Varnish:0 (Started ip-10-50-3-251)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: LogActions: Leave Varnishlog:0 (Started ip-10-50-3-251)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: info: LogActions: Leave Varnishncsa:0 (Started ip-10-50-3-251)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: notice: LogActions: Stop Varnish:1 (ip-10-50-3-122)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: notice: LogActions: Stop Varnishlog:1 (ip-10-50-3-122)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: notice: LogActions: Stop Varnishncsa:1 (ip-10-50-3-122)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: notice: LogActions: Move ec2-fencing (Started ip-10-50-3-122 -> ip-10-50-3-251)
Nov 09 14:51:39 [947] ip-10-50-3-251 crmd: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Nov 09 14:51:39 [947] ip-10-50-3-251 crmd: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1384008699-14) derived from /var/lib/pacemaker/pengine/pe-warn-5.bz2
Nov 09 14:51:39 [947] ip-10-50-3-251 crmd: notice: te_fence_node: Executing reboot fencing operation (34) on ip-10-50-3-122 (timeout=60000)
Nov 09 14:51:39 [943] ip-10-50-3-251 stonith-ng: notice: handle_request: Client crmd.947.c5e50058 wants to fence (reboot) 'ip-10-50-3-122' with device '(any)'
Nov 09 14:51:39 [943] ip-10-50-3-251 stonith-ng: notice: initiate_remote_stonith_op: Initiating remote operation reboot for ip-10-50-3-122: 73629a19-c784-4951-a48c-5ec37137cc06 (0)
Nov 09 14:51:39 [946] ip-10-50-3-251 pengine: warning: process_pe_message: Calculated Transition 0: /var/lib/pacemaker/pengine/pe-warn-5.bz2
Nov 09 14:51:39 [943] ip-10-50-3-251 stonith-ng: info: stonith_command: Processed st_fence from crmd.947: Operation now in progress (-115)
Nov 09 14:51:39 [943] ip-10-50-3-251 stonith-ng: info: can_fence_host_with_device: ec2-fencing can not fence ip-10-50-3-122: static-list
Nov 09 14:51:39 [943] ip-10-50-3-251 stonith-ng: info: stonith_command: Processed st_query from ip-10-50-3-251: OK (0)
Nov 09 14:51:39 [943] ip-10-50-3-251 stonith-ng: info: stonith_command: Processed st_query reply from ip-10-50-3-251: OK (0)
Nov 09 14:51:43 corosync [TOTEM ] A processor joined or left the membership and a new membership was formed.
Nov 09 14:51:43 corosync [CMAN ] quorum regained, resuming activity
Nov 09 14:51:43 corosync [QUORUM] This node is within the primary component and will provide service.
Nov 09 14:51:43 corosync [QUORUM] Members[2]: 1 2
Nov 09 14:51:43 corosync [QUORUM] Members[2]: 1 2
Nov 09 14:51:43 [947] ip-10-50-3-251 crmd: notice: cman_event_callback: Membership 3004140: quorum acquired
Nov 09 14:51:43 [947] ip-10-50-3-251 crmd: notice: crm_update_peer_state: cman_event_callback: Node ip-10-50-3-122[1] - state is now member
Nov 09 14:51:43 [947] ip-10-50-3-251 crmd: info: peer_update_callback: ip-10-50-3-122 is now member (was lost)
Nov 09 14:51:43 [942] ip-10-50-3-251 cib: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/39, version=1.1218.137): OK (rc=0)
Nov 09 14:51:43 [947] ip-10-50-3-251 crmd: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
Nov 09 14:51:43 [942] ip-10-50-3-251 cib: info: send_ais_text: Peer overloaded or membership in flux: Re-sending message (Attempt 1 of 20)
Nov 09 14:51:43 corosync [CPG ] chosen downlist: sender r(0) ip(10.50.3.251) ; members(old:1 left:0)
Nov 09 14:51:43 corosync [MAIN ] Completed service synchronization, ready to provide service.
Nov 09 14:51:43 [943] ip-10-50-3-251 stonith-ng: info: pcmk_cpg_membership: Joined[2.0] stonith-ng.1
Nov 09 14:51:43 [943] ip-10-50-3-251 stonith-ng: info: pcmk_cpg_membership: Member[2.0] stonith-ng.1
Nov 09 14:51:43 [943] ip-10-50-3-251 stonith-ng: info: crm_update_peer_proc: pcmk_cpg_membership: Node ip-10-50-3-122[1] - corosync-cpg is now online
Nov 09 14:51:43 [943] ip-10-50-3-251 stonith-ng: info: pcmk_cpg_membership: Member[2.1] stonith-ng.2
Nov 09 14:51:43 [945] ip-10-50-3-251 attrd: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
Nov 09 14:51:43 [945] ip-10-50-3-251 attrd: crit: attrd_ais_destroy: Lost connection to Corosync service!
Nov 09 14:51:43 [945] ip-10-50-3-251 attrd: notice: main: Exiting...
Nov 09 14:51:43 [945] ip-10-50-3-251 attrd: notice: main: Disconnecting client 0x161d410, pid=947...
Nov 09 14:51:43 [945] ip-10-50-3-251 attrd: error: attrd_cib_connection_destroy: Connection to the CIB terminated...
Nov 09 14:51:43 [943] ip-10-50-3-251 stonith-ng: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
Nov 09 14:51:43 [943] ip-10-50-3-251 stonith-ng: error: stonith_peer_ais_destroy: AIS connection terminated
Nov 09 14:51:43 [943] ip-10-50-3-251 stonith-ng: info: stonith_shutdown: Terminating with 2 clients
Nov 09 14:51:43 [943] ip-10-50-3-251 stonith-ng: info: cib_connection_destroy: Connection to the CIB closed.
Nov 09 14:51:43 [943] ip-10-50-3-251 stonith-ng: info: qb_ipcs_us_withdraw: withdrawing server sockets
Nov 09 14:51:43 [943] ip-10-50-3-251 stonith-ng: info: main: Done
Nov 09 14:51:43 [943] ip-10-50-3-251 stonith-ng: info: crm_xml_cleanup: Cleaning up memory from libxml2
Nov 09 14:51:43 [944] ip-10-50-3-251 lrmd: error: crm_ipc_read: Connection to stonith-ng failed
Nov 09 14:51:43 [944] ip-10-50-3-251 lrmd: error: mainloop_gio_callback: Connection to stonith-ng[0xd17db0] closed (I/O condition=17)
Nov 09 14:51:43 [944] ip-10-50-3-251 lrmd: error: stonith_connection_destroy_cb: LRMD lost STONITH connection
Nov 09 14:51:44 [944] ip-10-50-3-251 lrmd: notice: operation_finished: ClusterEIP_54.215.143.166_monitor_5000:29399 [ 2013/11/09_14:51:44 INFO: 54.215.143.166 is here ]
Nov 09 14:51:45 [936] ip-10-50-3-251 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=2) Library error
Nov 09 14:51:45 [936] ip-10-50-3-251 pacemakerd: error: cpg_connection_destroy: Connection destroyed
Nov 09 14:51:45 [936] ip-10-50-3-251 pacemakerd: error: cfg_connection_destroy: Connection destroyed
Nov 09 14:51:45 [936] ip-10-50-3-251 pacemakerd: info: pcmk_child_exit: Child process stonith-ng exited (pid=943, rc=0)
Nov 09 14:51:45 [936] ip-10-50-3-251 pacemakerd: info: crm_ipcs_send: Event 13 failed, size=152, to=0x11ecc30[945], queue=1, retries=0, rc=-32: <
Nov 09 14:51:45 [936] ip-10-50-3-251 pacemakerd: info: crm_ipcs_send: Event 14 failed, size=152, to=0x11f2200[943], queue=1, retries=0, rc=-32: <
Nov 09 14:51:45 [936] ip-10-50-3-251 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
Nov 09 14:51:45 [936] ip-10-50-3-251 pacemakerd: error: pcmk_child_exit: Child process attrd exited (pid=945, rc=1)
Nov 09 14:51:45 [936] ip-10-50-3-251 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
Nov 09 14:51:45 [936] ip-10-50-3-251 pacemakerd: notice: pcmk_shutdown_worker: Shuting down Pacemaker
Nov 09 14:51:45 [936] ip-10-50-3-251 pacemakerd: notice: stop_child: Stopping crmd: Sent -15 to process 947
Nov 09 14:51:45 [942] ip-10-50-3-251 cib: error: send_ais_text: Sending message 26 via cpg: FAILED (rc=2): Library error: Connection timed out (110)
Nov 09 14:51:46 [942] ip-10-50-3-251 cib: info: crm_ipcs_send: Event 106 failed, size=1052, to=0x1ced740[943], queue=1, retries=0, rc=-32: S_POLICY_ENGINE [ input=I_SHUTDOWN cause=C_SHUTDOWN origin=crm_shutdown ]
Nov 09 14:51:46 [947] ip-10-50-3-251 crmd: info: do_shutdown_req: Sending shutdown request to ip-10-50-3-251
Nov 09 14:51:47 [942] ip-10-50-3-251 cib: error: send_ais_text: Sending message 27 via cpg: FAILED (rc=2): Library error: Connection timed out (110)
Nov 09 14:51:48 [942] ip-10-50-3-251 cib: info: crm_ipcs_send: Event 109 failed, size=1724, to=0x1ced740[943], queue=1, retries=0, rc=-32: S_RECOVERY [ input=I_ERROR cause=C_FSA_INTERNAL origin=do_shutdown_req ]
Nov 09 14:51:48 [947] ip-10-50-3-251 crmd: error: do_recover: Action A_RECOVER (0000000001000000) not supported
Nov 09 14:51:48 [947] ip-10-50-3-251 crmd: warning: do_election_vote: Not voting in election, we're in state S_RECOVERY
Nov 09 14:51:48 [947] ip-10-50-3-251 crmd: info: do_dc_release: DC role released
Nov 09 14:51:48 [947] ip-10-50-3-251 crmd: info: pe_ipc_destroy: Connection to the Policy Engine released
Nov 09 14:51:49 [942] ip-10-50-3-251 cib: error: send_ais_text: Sending message 28 via cpg: FAILED (rc=2): Library error: Connection timed out (110)
Nov 09 14:51:49 [942] ip-10-50-3-251 cib: error: pcmk_cpg_dispatch: Connection to the CPG API failed: 2
Nov 09 14:51:49 [942] ip-10-50-3-251 cib: error: cib_ais_destroy: Corosync connection lost! Exiting.
Nov 09 14:51:49 [942] ip-10-50-3-251 cib: info: terminate_cib: cib_ais_destroy: Exiting fast...
Nov 09 14:51:49 [942] ip-10-50-3-251 cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
Nov 09 14:51:49 [942] ip-10-50-3-251 cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
Nov 09 14:51:49 [942] ip-10-50-3-251 cib: info: qb_ipcs_us_withdraw: withdrawing server sockets
Nov 09 14:51:49 [942] ip-10-50-3-251 cib: info: crm_xml_cleanup: Cleaning up memory from libxml2
Nov 09 14:51:50 [936] ip-10-50-3-251 pacemakerd: error: pcmk_child_exit: Child process cib exited (pid=942, rc=64)
Nov 09 14:51:50 [936] ip-10-50-3-251 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: error: internal_ipc_get_reply: Server disconnected client cib_shm while waiting for msg id 100
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: notice: crm_ipc_send: Connection to cib_shm closed: Transport endpoint is not connected (-107)
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: info: do_te_control: Transitioner is now inactive
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: error: do_log: FSA: Input I_TERMINATE from do_recover() received in state S_RECOVERY
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: info: do_state_transition: State transition S_RECOVERY -> S_TERMINATE [ input=I_TERMINATE cause=C_FSA_INTERNAL origin=do_recover ]
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: info: do_shutdown: Disconnecting STONITH...
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: info: tengine_stonith_connection_destroy: Fencing daemon disconnected
Nov 09 14:51:50 [944] ip-10-50-3-251 lrmd: info: cancel_recurring_action: Cancelling operation Varnishlog_monitor_5000
Nov 09 14:51:50 [944] ip-10-50-3-251 lrmd: info: services_action_cancel: Cancelling op: ClusterEIP_54.215.143.166_monitor_5000 will occur once operation completes
Nov 09 14:51:50 [944] ip-10-50-3-251 lrmd: info: services_action_cancel: Cancelling op: Varnish_monitor_5000 will occur once operation completes
Nov 09 14:51:50 [944] ip-10-50-3-251 lrmd: info: cancel_recurring_action: Cancelling operation Varnishncsa_monitor_5000
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: info: lrmd_api_disconnect: Disconnecting from lrmd service
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: info: lrmd_connection_destroy: connection destroyed
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: info: lrm_connection_destroy: LRM Connection disconnected
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: info: do_lrm_control: Disconnected from the LRM
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: info: crm_cluster_disconnect: Disconnecting from cluster infrastructure: cman
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: notice: terminate_cs_connection: Disconnecting from Corosync
Nov 09 14:51:50 [947] ip-10-50-3-251 crmd: info: terminate_cs_connection: Disconnecting CPG
Nov 09 14:51:50 [944] ip-10-50-3-251 lrmd: info: lrmd_ipc_destroy: LRMD client disconnecting 0xd0d240 - name: crmd id: fa61a725-3165-4728-a3f8-7a2ca01c88b1
Nov 09 14:51:50 [944] ip-10-50-3-251 lrmd: info: services_action_cancel: Cancelling op: ClusterEIP_54.215.143.166_monitor_5000 will occur once operation completes
Nov 09 14:51:50 [944] ip-10-50-3-251 lrmd: info: services_action_cancel: Cancelling op: Varnish_monitor_5000 will occur once operation completes
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: terminate_cs_connection: No cman connection
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: crm_cluster_disconnect: Disconnected from cman
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: do_ha_control: Disconnected from the cluster
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: do_cib_control: Disconnecting CIB
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: notice: crm_ipc_send: Connection to cib_shm closed
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: notice: crm_ipc_send: Connection to cib_shm closed
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: error: cib_native_perform_op_delegate: Couldn't perform cib_slave operation (timeout=120s): -107: Connection timed out (110)
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: error: cib_native_perform_op_delegate: CIB disconnected
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: crmd_cib_connection_destroy: Connection to the CIB terminated...
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: qb_ipcs_us_withdraw: withdrawing server sockets
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: error: do_exit: Could not recover from internal error
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: do_exit: [crmd] stopped (2)
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: crmd_exit: Dropping I_PENDING: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_election_vote ]
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: crmd_exit: Dropping I_RELEASE_SUCCESS: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_dc_release ]
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: crmd_exit: Dropping I_TERMINATE: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_stop ]
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: lrmd_api_disconnect: Disconnecting from lrmd service
Nov 09 14:51:52 [947] ip-10-50-3-251 crmd: info: crm_xml_cleanup: Cleaning up memory from libxml2
Nov 09 14:51:53 [936] ip-10-50-3-251 pacemakerd: error: pcmk_child_exit: Child process crmd exited (pid=947, rc=2)
Nov 09 14:51:53 [936] ip-10-50-3-251 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
Nov 09 14:51:53 [936] ip-10-50-3-251 pacemakerd: notice: stop_child: Stopping pengine: Sent -15 to process 946
Nov 09 14:51:53 [946] ip-10-50-3-251 pengine: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Nov 09 14:51:53 [946] ip-10-50-3-251 pengine: info: qb_ipcs_us_withdraw: withdrawing server sockets
Nov 09 14:51:53 [946] ip-10-50-3-251 pengine: info: crm_xml_cleanup: Cleaning up memory from libxml2
Nov 09 14:51:53 [936] ip-10-50-3-251 pacemakerd: info: pcmk_child_exit: Child process pengine exited (pid=946, rc=0)
Nov 09 14:51:53 [936] ip-10-50-3-251 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
Nov 09 14:51:53 [936] ip-10-50-3-251 pacemakerd: notice: stop_child: Stopping lrmd: Sent -15 to process 944
Nov 09 14:51:53 [944] ip-10-50-3-251 lrmd: info: crm_signal_dispatch: Invoking handler for signal 15: Terminated
Nov 09 14:51:53 [944] ip-10-50-3-251 lrmd: info: lrmd_shutdown: Terminating with 0 clients
Nov 09 14:51:53 [944] ip-10-50-3-251 lrmd: info: qb_ipcs_us_withdraw: withdrawing server sockets
Nov 09 14:51:53 [944] ip-10-50-3-251 lrmd: info: crm_xml_cleanup: Cleaning up memory from libxml2
Nov 09 14:51:53 [936] ip-10-50-3-251 pacemakerd: info: pcmk_child_exit: Child process lrmd exited (pid=944, rc=0)
Nov 09 14:51:53 [936] ip-10-50-3-251 pacemakerd: error: send_cpg_message: Sending message via cpg FAILED: (rc=9) Bad handle
Nov 09 14:51:53 [936] ip-10-50-3-251 pacemakerd: notice: pcmk_shutdown_worker: Shutdown complete
Nov 09 14:51:53 [936] ip-10-50-3-251 pacemakerd: info: qb_ipcs_us_withdraw: withdrawing server sockets
Nov 09 14:51:53 [936] ip-10-50-3-251 pacemakerd: info: main: Exiting pacemakerd