Dec 8 11:15:12 node1 cib: [959]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/14, version=0.451.25): ok (rc=0)
Dec 8 11:15:12 node1 cib: [959]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/17, version=0.451.25): ok (rc=0)
Dec 8 11:15:12 node1 crmd: [31284]: info: crm_ais_dispatch: Setting expected votes to 2
Dec 8 11:15:12 node1 crmd: [31284]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]
Dec 8 11:15:12 node1 crmd: [31284]: info: do_state_transition: All 2 cluster nodes responded to the join offer.
Dec 8 11:15:12 node1 crmd: [31284]: info: do_dc_join_finalize: join-1: Syncing the CIB from node1 to the rest of the cluster
Dec 8 11:15:12 node1 crmd: [31284]: info: te_connect_stonith: Attempting connection to fencing daemon...
Dec 8 11:15:12 node1 cib: [959]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/18, version=0.451.25): ok (rc=0)
Dec 8 11:15:13 node1 crmd: [31284]: info: te_connect_stonith: Connected
Dec 8 11:15:13 node1 cib: [959]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/19, version=0.451.25): ok (rc=0)
Dec 8 11:15:13 node1 cib: [959]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/20, version=0.451.25): ok (rc=0)
Dec 8 11:15:14 node1 cib: [959]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/transient_attributes (origin=local/crmd/21, version=0.451.26): ok (rc=0)
Dec 8 11:15:14 node1 crmd: [31284]: info: update_attrd: Connecting to attrd...
Dec 8 11:15:14 node1 crmd: [31284]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/transient_attributes": ok (rc=0)
Dec 8 11:15:14 node1 crmd: [31284]: info: do_dc_join_ack: join-1: Updating node state to member for node2
Dec 8 11:15:14 node1 crmd: [31284]: info: do_dc_join_ack: join-1: Updating node state to member for node1
Dec 8 11:15:14 node1 cib: [959]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node2']/lrm (origin=local/crmd/22, version=0.451.27): ok (rc=0)
Dec 8 11:15:14 node1 crmd: [31284]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node2']/lrm": ok (rc=0)
Dec 8 11:15:14 node1 crmd: [31284]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]
Dec 8 11:15:14 node1 crmd: [31284]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Dec 8 11:15:14 node1 crmd: [31284]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date
Dec 8 11:15:14 node1 crmd: [31284]: info: crm_update_quorum: Updating quorum status to true (call=28)
Dec 8 11:15:14 node1 crmd: [31284]: info: abort_transition_graph: do_te_invoke:191 - Triggered transition abort (complete=1) : Peer Cancelled
Dec 8 11:15:14 node1 crmd: [31284]: info: do_pe_invoke: Query 29: Requesting the current CIB: S_POLICY_ENGINE
Dec 8 11:15:14 node1 cib: [959]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='node1']/lrm (origin=local/crmd/24, version=0.451.29): ok (rc=0)
Dec 8 11:15:14 node1 crmd: [31284]: info: abort_transition_graph: te_update_diff:267 - Triggered transition abort (complete=1, tag=lrm_rsc_op, id=resNagios_monitor_0, magic=0:7;13:0:7:8af45238-18ae-42c6-9f55-d8c8178ac5ca, cib=0.451.29) : Resource op removal
Dec 8 11:15:14 node1 crmd: [31284]: info: erase_xpath_callback: Deletion of "//node_state[@uname='node1']/lrm": ok (rc=0)
Dec 8 11:15:14 node1 crmd: [31284]: info: do_pe_invoke: Query 30: Requesting the current CIB: S_POLICY_ENGINE
Dec 8 11:15:14 node1 crmd: [31284]: info: te_update_diff: Detected LRM refresh - 17 resources updated: Skipping all resource events
Dec 8 11:15:14 node1 crmd: [31284]: info: abort_transition_graph: te_update_diff:227 - Triggered transition abort (complete=1, tag=diff, id=(null), magic=NA, cib=0.451.30) : LRM Refresh
Dec 8 11:15:14 node1 crmd: [31284]: info: do_pe_invoke: Query 31: Requesting the current CIB: S_POLICY_ENGINE
Dec 8 11:15:14 node1 cib: [959]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/26, version=0.451.30): ok (rc=0)
Dec 8 11:15:14 node1 cib: [959]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/28, version=0.451.30): ok (rc=0)
Dec 8 11:15:14 node1 kernel: [ 2114.432829] pengine[31285]: segfault at 10 ip 00007f19d72aa8ec sp 00007fff69517440 error 4 in libpengine.so.3.0.0[7f19d729e000+36000]
Dec 8 11:15:14 node1 corosync[898]: [pcmk ] info: pcmk_ipc_exit: Client crmd (conn=0x7f8a3c002310, async-conn=0x7f8a3c002310) left
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_local_callback: Sending full refresh (origin=crmd)
Dec 8 11:15:14 node1 crmd: [31284]: info: do_pe_invoke_callback: Invoking the PE: query=31, ref=pe_calc-dc-1291803314-9, seq=736, quorate=1
Dec 8 11:15:14 node1 pengine: [31285]: notice: unpack_config: On loss of CCM Quorum: Ignore
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: master-resSyslog:0 ()
Dec 8 11:15:14 node1 crmd: [31284]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=1, tag=transient_attributes, id=node1, magic=NA, cib=0.451.31) : Transient attribute: update
Dec 8 11:15:14 node1 pengine: [31285]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: master-resDRBD0:0 ()
Dec 8 11:15:14 node1 crmd: [31284]: info: do_pe_invoke: Query 32: Requesting the current CIB: S_POLICY_ENGINE
Dec 8 11:15:14 node1 pengine: [31285]: info: determine_online_status: Node node2 is online
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: master-resDRBD0:1 ()
Dec 8 11:15:14 node1 crmd: [31284]: ERROR: send_ipc_message: IPC Channel to 31285 is not connected
Dec 8 11:15:14 node1 pengine: [31285]: notice: unpack_rsc_op: Operation resSyslog:0_monitor_0 found resource resSyslog:0 active on node2
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: master-resDRBD1:0 ()
Dec 8 11:15:14 node1 crmd: [31284]: ERROR: do_pe_invoke_callback: Could not contact the pengine
Dec 8 11:15:14 node1 pengine: [31285]: info: determine_online_status: Node node1 is online
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-resSendmail ()
Dec 8 11:15:14 node1 crmd: [31284]: info: do_pe_invoke_callback: Invoking the PE: query=32, ref=pe_calc-dc-1291803314-10, seq=736, quorate=1
Dec 8 11:15:14 node1 pengine: [31285]: info: find_clone: Internally renamed resSyslog:0 on node1 to resSyslog:1
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: master-resDRBD1:1 ()
Dec 8 11:15:14 node1 crmd: [31284]: info: pe_msg_dispatch: Received HUP from pengine:[31285]
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resIP0 (ocf::heartbeat:IPaddr2): Started node2
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: master-resSyslog:1 ()
Dec 8 11:15:14 node1 crmd: [31284]: CRIT: pe_connection_destroy: Connection to the Policy Engine failed (pid=31285, uuid=2525f074-89f6-468e-8900-14d278808c31)
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resIP1 (ocf::heartbeat:IPaddr2): Started node2
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate ()
Dec 8 11:15:14 node1 crmd: [31284]: ERROR: do_log: FSA: Input I_ERROR from do_pe_invoke_callback() received in state S_POLICY_ENGINE
Dec 8 11:15:14 node1 pengine: [31285]: notice: clone_print: Master/Slave Set: msDRBD0
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-resSendmail ()
Dec 8 11:15:14 node1 crmd: [31284]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_RECOVERY [ input=I_ERROR cause=C_FSA_INTERNAL origin=do_pe_invoke_callback ]
Dec 8 11:15:14 node1 pengine: [31285]: notice: short_print: Masters: [ node2 ]
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-resVMPS ()
Dec 8 11:15:14 node1 crmd: [31284]: ERROR: do_recover: Action A_RECOVER (0000000001000000) not supported
Dec 8 11:15:14 node1 pengine: [31285]: notice: short_print: Stopped: [ resDRBD0:0 ]
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-resSyslog:0 ()
Dec 8 11:15:14 node1 crmd: [31284]: WARN: do_election_vote: Not voting in election, we're in state S_RECOVERY
Dec 8 11:15:14 node1 pengine: [31285]: notice: clone_print: Master/Slave Set: msDRBD1
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: master-Sys ()
Dec 8 11:15:14 node1 crmd: [31284]: info: do_dc_release: DC role released
Dec 8 11:15:14 node1 pengine: [31285]: notice: short_print: Masters: [ node2 ]
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: master-rsys ()
Dec 8 11:15:14 node1 crmd: [31284]: info: do_te_control: Transitioner is now inactive
Dec 8 11:15:14 node1 pengine: [31285]: notice: short_print: Stopped: [ resDRBD1:0 ]
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-resSyslog:1 ()
Dec 8 11:15:14 node1 crmd: [31284]: info: do_te_control: Disconnecting STONITH...
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resFSys0 (ocf::heartbeat:Filesystem): Started node2
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown ()
Dec 8 11:15:14 node1 crmd: [31284]: info: tengine_stonith_connection_destroy: Fencing daemon disconnected
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resFSys1 (ocf::heartbeat:Filesystem): Started node2
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-resVMPS ()
Dec 8 11:15:14 node1 crmd: [31284]: notice: Not currently connected.
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resDHCP (ocf::T-Systems:dhcp3): Started node2
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)
Dec 8 11:15:14 node1 crmd: [31284]: ERROR: do_log: FSA: Input I_TERMINATE from do_recover() received in state S_RECOVERY
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resMySQL (ocf::heartbeat:mysql): Started node2
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-resSyslog:0 ()
Dec 8 11:15:14 node1 crmd: [31284]: info: do_state_transition: State transition S_RECOVERY -> S_TERMINATE [ input=I_TERMINATE cause=C_FSA_INTERNAL origin=do_recover ]
Dec 8 11:15:14 node1 pengine: [31285]: notice: group_print: Resource Group: groupNagiosApache
Dec 8 11:15:14 node1 attrd: [961]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-resSyslog:1 ()
Dec 8 11:15:14 node1 crmd: [31284]: info: do_lrm_control: Disconnected from the LRM
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resApache (ocf::heartbeat:apache): Started node2
Dec 8 11:15:14 node1 crmd: [31284]: info: do_ha_control: Disconnected from OpenAIS
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resNagios (lsb:nagios3): Started node2
Dec 8 11:15:14 node1 crmd: [31284]: info: do_cib_control: Disconnecting CIB
Dec 8 11:15:14 node1 pengine: [31285]: notice: group_print: Resource Group: groupIPVPN
Dec 8 11:15:14 node1 crmd: [31284]: info: crmd_cib_connection_destroy: Connection to the CIB terminated...
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resIPVM (ocf::heartbeat:IPaddr2): Started node2
Dec 8 11:15:14 node1 crmd: [31284]: info: do_exit: Performing A_EXIT_0 - gracefully exiting the CRMd
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resVPN (ocf::T-Systems:OpenVPN): Started node2
Dec 8 11:15:14 node1 crmd: [31284]: ERROR: do_exit: Could not recover from internal error
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resSquid (ocf::heartbeat:Squid): Started node2
Dec 8 11:15:14 node1 crmd: [31284]: info: free_mem: Dropping I_PENDING: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_election_vote ]
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resVMPS (ocf::T-Systems:OpenVMPS): Started node2
Dec 8 11:15:14 node1 crmd: [31284]: info: free_mem: Dropping I_RELEASE_SUCCESS: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_dc_release ]
Dec 8 11:15:14 node1 pengine: [31285]: notice: native_print: resSendmail (lsb:sendmail): Started node2
Dec 8 11:15:14 node1 crmd: [31284]: info: free_mem: Dropping I_TERMINATE: [ state=S_TERMINATE cause=C_FSA_INTERNAL origin=do_stop ]
Dec 8 11:15:14 node1 pengine: [31285]: notice: clone_print: Master/Slave Set: msSyslog
Dec 8 11:15:14 node1 crmd: [31284]: info: do_exit: [crmd] stopped (2)
Dec 8 11:15:14 node1 pengine: [31285]: notice: short_print: Masters: [ node2 ]
Dec 8 11:15:14 node1 pengine: [31285]: notice: short_print: Stopped: [ resSyslog:1 ]
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: resIP0: Rolling back scores from resIPVM
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: resIP0: Rolling back scores from resApache
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msDRBD0: Rolling back scores from resFSys0
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msDRBD0: Rolling back scores from resIPVM
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msDRBD0: Rolling back scores from resFSys0
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msDRBD0: Rolling back scores from resApache
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msDRBD0: Rolling back scores from resFSys0
Dec 8 11:15:14 node1 pengine: last message repeated 2 times
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msDRBD0: Rolling back scores from resIPVM
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msDRBD0: Rolling back scores from resFSys0
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msDRBD0: Rolling back scores from resApache
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msDRBD0: Rolling back scores from resFSys0
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msDRBD0: Rolling back scores from resFSys0
Dec 8 11:15:14 node1 pengine: [31285]: info: master_color: Promoting resDRBD0:1 (Master node2)
Dec 8 11:15:14 node1 pengine: [31285]: info: master_color: msDRBD0: Promoted 1 instances of a possible 1 to master
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msDRBD1: Rolling back scores from resFSys1
Dec 8 11:15:14 node1 pengine: last message repeated 7 times
Dec 8 11:15:14 node1 pengine: [31285]: info: master_color: Promoting resDRBD1:1 (Master node2)
Dec 8 11:15:14 node1 pengine: [31285]: info: master_color: msDRBD1: Promoted 1 instances of a possible 1 to master
Dec 8 11:15:14 node1 pengine: [31285]: info: master_color: Promoting resDRBD0:1 (Master node2)
Dec 8 11:15:14 node1 pengine: [31285]: info: master_color: msDRBD0: Promoted 1 instances of a possible 1 to master
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: resFSys0: Rolling back scores from resIPVM
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: resFSys0: Rolling back scores from resApache
Dec 8 11:15:14 node1 pengine: [31285]: info: master_color: Promoting resDRBD1:1 (Master node2)
Dec 8 11:15:14 node1 pengine: [31285]: info: master_color: msDRBD1: Promoted 1 instances of a possible 1 to master
Dec 8 11:15:14 node1 pengine: [31285]: info: native_color: Resource resMySQL cannot run anywhere
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: resApache: Rolling back scores from resNagios
Dec 8 11:15:14 node1 pengine: [31285]: info: native_color: Resource resApache cannot run anywhere
Dec 8 11:15:14 node1 pengine: [31285]: info: native_color: Resource resNagios cannot run anywhere
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: resIPVM: Rolling back scores from resVPN
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: resIPVM: Rolling back scores from msSyslog
Dec 8 11:15:14 node1 pengine: [31285]: info: native_color: Resource resIPVM cannot run anywhere
Dec 8 11:15:14 node1 pengine: [31285]: info: native_color: Resource resVPN cannot run anywhere
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msSyslog: Rolling back scores from resIPVM
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msSyslog: Rolling back scores from resVPN
Dec 8 11:15:14 node1 pengine: [31285]: info: native_merge_weights: msSyslog: Rolling back scores from resIPVM
Dec 8 11:15:14 node1 cib: [959]: WARN: send_ipc_message: IPC Channel to 31284 is not connected
Dec 8 11:15:14 node1 cib: [959]: WARN: send_via_callback_channel: Delivery of reply to client 31284/c7dd7e23-1202-4d83-b43b-990ce742eae9 failed
Dec 8 11:15:14 node1 cib: [959]: WARN: do_local_notify: A-Sync reply to crmd failed: reply failed
Dec 8 11:15:14 node1 cib: [959]: info: cib_process_readwrite: We are now in R/O mode
Dec 8 11:15:14 node1 cib: [959]: WARN: send_ipc_message: IPC Channel to 31284 is not connected
Dec 8 11:15:14 node1 cib: [959]: WARN: send_via_callback_channel: Delivery of reply to client 31284/c7dd7e23-1202-4d83-b43b-990ce742eae9 failed
Dec 8 11:15:14 node1 cib: [959]: WARN: do_local_notify: A-Sync reply to crmd failed: reply failed
Dec 8 11:15:15 node1 corosync[898]: [pcmk ] ERROR: pcmk_wait_dispatch: Child process crmd exited (pid=31284, rc=2)
Dec 8 11:15:15 node1 corosync[898]: [pcmk ] notice: pcmk_wait_dispatch: Respawning failed child process: crmd
Dec 8 11:15:15 node1 corosync[898]: [pcmk ] info: spawn_child: Forked child 31386 for process crmd
Dec 8 11:15:15 node1 corosync[898]: [pcmk ] ERROR: pcmk_wait_dispatch: Child process pengine terminated with signal 11 (pid=31285, core=false)
Dec 8 11:15:15 node1 corosync[898]: [pcmk ] notice: pcmk_wait_dispatch: Respawning failed child process: pengine
Dec 8 11:15:15 node1 corosync[898]: [pcmk ] info: spawn_child: Forked child 31387 for process pengine
Dec 8 11:15:15 node1 corosync[898]: [pcmk ] info: pcmk_ipc: Recorded connection 0x7f8a3c002250 for crmd/31386
Dec 8 11:15:15 node1 corosync[898]: [pcmk ] info: pcmk_ipc: Sending membership update 736 to crmd
Dec 8 11:15:15 node1 crmd: [31386]: info: Invoked: /usr/lib/heartbeat/crmd
Dec 8 11:15:15 node1 pengine: [31387]: info: Invoked: /usr/lib/heartbeat/pengine
Dec 8 11:15:15 node1 crmd: [31386]: info: main: CRM Hg Version: 042548a451fce8400660f6031f4da6f0223dd5dd
Dec 8 11:15:15 node1 pengine: [31387]: info: main: Starting pengine
Dec 8 11:15:15 node1 crmd: [31386]: info: crmd_init: Starting crmd
Dec 8 11:15:15 node1 crmd: [31386]: info: G_main_add_SignalHandler: Added signal handler for signal 17
Dec 8 11:15:15 node1 crmd: [31386]: info: do_cib_control: CIB connection established
Dec 8 11:15:15 node1 crmd: [31386]: info: crm_cluster_connect: Connecting to OpenAIS
Dec 8 11:15:15 node1 crmd: [31386]: info: init_ais_connection: Creating connection to our AIS plugin
Dec 8 11:15:15 node1 crmd: [31386]: info: init_ais_connection: AIS connection established
Dec 8 11:15:15 node1 crmd: [31386]: info: get_ais_nodeid: Server details: id=174368960 uname=node1 cname=pcmk
Dec 8 11:15:15 node1 crmd: [31386]: info: crm_new_peer: Node node1 now has id: 174368960
Dec 8 11:15:15 node1 crmd: [31386]: info: crm_new_peer: Node 174368960 is now known as node1
Dec 8 11:15:15 node1 crmd: [31386]: info: do_ha_control: Connected to the cluster
Dec 8 11:15:15 node1 crmd: [31386]: info: do_started: Delaying start, CCM (0000000000100000) not connected
Dec 8 11:15:15 node1 crmd: [31386]: info: crmd_init: Starting crmd's mainloop
Dec 8 11:15:15 node1 crmd: [31386]: notice: ais_dispatch: Membership 736: quorum acquired
Dec 8 11:15:15 node1 crmd: [31386]: info: crm_update_peer: Node node1: id=174368960 state=member (new) addr=r(0) ip(192.168.100.10) (new) votes=1 (new) born=728 seen=736 proc=00000000000000000000000000013312 (new)
Dec 8 11:15:15 node1 crmd: [31386]: info: crm_new_peer: Node node2 now has id: 342141120
Dec 8 11:15:15 node1 crmd: [31386]: info: crm_new_peer: Node 342141120 is now known as node2
Dec 8 11:15:15 node1 crmd: [31386]: info: crm_update_peer: Node node2: id=342141120 state=member (new) addr=r(0) ip(192.168.100.20) votes=1 born=736 seen=736 proc=00000000000000000000000000013312
Dec 8 11:15:15 node1 crmd: [31386]: info: do_started: Delaying start, Config not read (0000000000000040)
Dec 8 11:15:15 node1 crmd: [31386]: info: do_started: Delaying start, Config not read (0000000000000040)
Dec 8 11:15:15 node1 crmd: [31386]: info: config_query_callback: Checking for expired actions every 900000ms
Dec 8 11:15:15 node1 crmd: [31386]: info: config_query_callback: Sending expected-votes=2 to corosync
Dec 8 11:15:15 node1 crmd: [31386]: info: do_started: The local CRM is operational
Dec 8 11:15:15 node1 crmd: [31386]: info: do_state_transition: State transition S_STARTING -> S_PENDING [ input=I_PENDING cause=C_FSA_INTERNAL origin=do_started ]
Dec 8 11:15:16 node1 crmd: [31386]: info: ais_dispatch: Membership 736: quorum retained
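
The kernel entry above is enough to start locating the crash even though pengine was respawned with core=false (no core file was written): "error 4" is a user-mode read fault, and a faulting address of 10 suggests a NULL-pointer dereference of a struct member at offset 0x10. Subtracting the mapping base from the instruction pointer gives the file-relative offset, which addr2line can resolve to a function once debug symbols for Pacemaker are installed. A minimal sketch; the library path is an assumption for this install:

    # Resolve the pengine segfault from the kernel log line:
    # "segfault at 10 ip 00007f19d72aa8ec ... in libpengine.so.3.0.0[7f19d729e000+36000]"
    import subprocess

    ip = 0x00007f19d72aa8ec    # faulting instruction pointer
    base = 0x7f19d729e000      # start of the libpengine.so.3.0.0 mapping
    size = 0x36000             # length of the mapping
    offset = ip - base         # 0xc8ec, the file-relative crash offset
    assert 0 <= offset < size  # sanity check: ip falls inside the mapping
    print(hex(offset))

    # With debug symbols present, addr2line maps the offset to a
    # function name and source line; the path below is a guess.
    subprocess.run(["addr2line", "-f", "-e",
                    "/usr/lib/libpengine.so.3.0.0", hex(offset)])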