Oct 30 17:24:23 server2 lrmd: [2321]: debug: rsc:ping:0:18: monitor
Oct 30 17:24:23 server2 lrmd: [4183]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
Oct 30 17:24:27 server2 attrd_updater: [4194]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Oct 30 17:24:27 server2 attrd_updater: [4194]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Oct 30 17:24:27 server2 attrd: [2322]: debug: attrd_local_callback: update message from attrd_updater: pingd=1000
Oct 30 17:24:27 server2 attrd: [2322]: debug: attrd_local_callback: Supplied: 1000, Current: 1000, Stored: 1000
Oct 30 17:24:27 server2 attrd_updater: [4194]: debug: attrd_update_delegate: Sent update: pingd=1000 for localhost
Oct 30 17:24:47 server2 rsyslogd-2177: imuxsock lost 81 messages from pid 2311 due to rate-limiting
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 174
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering 173 to 174
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 174 to pending delivery queue
Oct 30 17:24:47 server2 crmd: [2324]: info: handle_shutdown_request: Creating shutdown request for server1 (state=S_IDLE)
Oct 30 17:24:47 server2 crmd: [2324]: debug: attrd_update_delegate: Sent update: shutdown=1319991887 for server1
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 174
Oct 30 17:24:47 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.21 -> 0.161.22 (S_IDLE)
Oct 30 17:24:47 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering 174 to 176
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 175 to pending delivery queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 176 to pending delivery queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 176
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 175
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 176
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 177
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering 176 to 177
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 177 to pending delivery queue
Oct 30 17:24:47 server2 cib: [2320]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='server2']//transient_attributes//nvpair[@name='shutdown'] does not exist
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 177
Oct 30 17:24:47 server2 attrd: [2322]: debug: attrd_cib_callback: Update -22 for shutdown=(null) passed
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 178
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering 177 to 178
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 178 to pending delivery queue
Oct 30 17:24:47 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.22 -> 0.161.23 (S_IDLE)
Oct 30 17:24:47 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 30 17:24:47 server2 crmd: [2324]: info: abort_transition_graph: te_update_diff:149 - Triggered transition abort (complete=1, tag=nvpair, id=status-server1-shutdown, magic=NA, cib=0.161.23) : Transient attribute: update
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 178
Oct 30 17:24:47 server2 crmd: [2324]: debug: abort_transition_graphCause
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 crmd: [2324]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_IDLE cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 crmd: [2324]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering 178 to 17a
Oct 30 17:24:47 server2 crmd: [2324]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 179 to pending delivery queue
Oct 30 17:24:47 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 17a to pending delivery queue
Oct 30 17:24:47 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 17a
Oct 30 17:24:47 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 179
Oct 30 17:24:47 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_PE_INVOKE
Oct 30 17:24:47 server2 crmd: [2324]: info: do_pe_invoke: Query 94: Requesting the current CIB: S_POLICY_ENGINE
Oct 30 17:24:47 server2 crmd: [2324]: info: do_pe_invoke_callback: Invoking the PE: query=94, ref=pe_calc-dc-1319991887-65, seq=396, quorate=1
Oct 30 17:24:47 server2 pengine: [2323]: info: unpack_config: Startup probes: enabled
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 17a
Oct 30 17:24:47 server2 pengine: [2323]: debug: unpack_config: STONITH timeout: 60000
Oct 30 17:24:47 server2 pengine: [2323]: debug: unpack_config: STONITH of failed nodes is disabled
Oct 30 17:24:47 server2 pengine: [2323]: debug: unpack_config: Stop all active resources: false
Oct 30 17:24:47 server2 pengine: [2323]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default
Oct 30 17:24:47 server2 pengine: [2323]: debug: unpack_config: Default stickiness: 0
Oct 30 17:24:47 server2 pengine: [2323]: notice: unpack_config: On loss of CCM Quorum: Ignore
Oct 30 17:24:47 server2 pengine: [2323]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
Oct 30 17:24:47 server2 pengine: [2323]: info: unpack_domains: Unpacking domains
Oct 30 17:24:47 server2 pengine: [2323]: info: determine_online_status: Node server1 is shutting down
Oct 30 17:24:47 server2 pengine: [2323]: info: determine_online_status: Node server2 is online
Oct 30 17:24:47 server2 pengine: [2323]: debug: get_node_score: Rule drdb_on_connected_node-rule: node server2 had value 1000 for pingd
Oct 30 17:24:47 server2 pengine: [2323]: debug: get_node_score: Rule drdb_on_connected_node-rule: node server1 had value 1000 for pingd
Oct 30 17:24:47 server2 pengine: [2323]: debug: get_node_score: Rule drdb_on_connected_node-rule: node server1 had value 1000 for pingd
Oct 30 17:24:47 server2 pengine: [2323]: debug: get_node_score: Rule drdb_on_connected_node-rule: node server2 had value 1000 for pingd
Oct 30 17:24:47 server2 pengine: [2323]: notice: group_print: Resource Group: apache_group
Oct 30 17:24:47 server2 pengine: [2323]: notice: native_print: apache_ip#011(ocf::heartbeat:IPaddr2):#011Stopped
Oct 30 17:24:47 server2 pengine: [2323]: notice: native_print: apache2#011(ocf::heartbeat:apache):#011Stopped
Oct 30 17:24:47 server2 pengine: [2323]: notice: native_print: fs_drbd#011(ocf::heartbeat:Filesystem):#011Stopped
Oct 30 17:24:47 server2 pengine: [2323]: notice: clone_print: Master/Slave Set: ms_drbd [drbd_disk]
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_active: Resource drbd_disk:0 active on server2
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_active: Resource drbd_disk:0 active on server2
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_active: Resource drbd_disk:1 active on server1
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_active: Resource drbd_disk:1 active on server1
Oct 30 17:24:47 server2 pengine: [2323]: notice: short_print: Masters: [ server1 ]
Oct 30 17:24:47 server2 pengine: [2323]: notice: short_print: Slaves: [ server2 ]
Oct 30 17:24:47 server2 pengine: [2323]: notice: clone_print: Clone Set: pingclone [ping]
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_active: Resource ping:0 active on server2
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_active: Resource ping:0 active on server2
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_active: Resource ping:1 active on server1
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_active: Resource ping:1 active on server1
Oct 30 17:24:47 server2 pengine: [2323]: notice: short_print: Started: [ server2 server1 ]
Oct 30 17:24:47 server2 pengine: [2323]: notice: clone_print: Master/Slave Set: ms_drbd_mysql [drbd_mysql_disk]
Oct 30 17:24:47 server2 pengine: [2323]: notice: short_print: Stopped: [ drbd_mysql_disk:0 drbd_mysql_disk:1 ]
Oct 30 17:24:47 server2 pengine: [2323]: notice: clone_print: Master/Slave Set: ms_drbd_pgsql [drbd_pgsql_disk]
Oct 30 17:24:47 server2 pengine: [2323]: notice: short_print: Stopped: [ drbd_pgsql_disk:0 drbd_pgsql_disk:1 ]
Oct 30 17:24:47 server2 pengine: [2323]: notice: group_print: Resource Group: mysql_group
Oct 30 17:24:47 server2 pengine: [2323]: notice: native_print: mysql_ip#011(ocf::heartbeat:IPaddr2):#011Stopped
Oct 30 17:24:47 server2 pengine: [2323]: notice: native_print: mysql_service#011(ocf::heartbeat:mysql):#011Stopped
Oct 30 17:24:47 server2 pengine: [2323]: notice: native_print: fs_drbd_mysql#011(ocf::heartbeat:Filesystem):#011Stopped
Oct 30 17:24:47 server2 pengine: [2323]: notice: group_print: Resource Group: pgsql_group
Oct 30 17:24:47 server2 pengine: [2323]: notice: native_print: pgsql_ip#011(ocf::heartbeat:IPaddr2):#011Stopped
Oct 30 17:24:47 server2 pengine: [2323]: notice: native_print: fs_drbd_pgsql#011(ocf::heartbeat:Filesystem):#011Stopped
Oct 30 17:24:47 server2 pengine: [2323]: debug: common_apply_stickiness: Resource drbd_disk:1: preferring current location (node=server1, weight=100)
Oct 30 17:24:47 server2 pengine: [2323]: debug: common_apply_stickiness: Resource ping:1: preferring current location (node=server1, weight=100)
Oct 30 17:24:47 server2 pengine: [2323]: debug: common_apply_stickiness: Resource drbd_disk:0: preferring current location (node=server2, weight=100)
Oct 30 17:24:47 server2 pengine: [2323]: debug: common_apply_stickiness: Resource ping:0: preferring current location (node=server2, weight=100)
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to apache_ip
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to apache2
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to drbd_disk:0
Oct 30 17:24:47 server2 pengine: [2323]: info: native_color: drbd_disk:1: Rolling back scores from apache_ip
Oct 30 17:24:47 server2 pengine: [2323]: info: native_color: drbd_disk:1: Rolling back scores from apache2
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: All nodes for resource drbd_disk:1 are unavailable, unclean or shutting down (server1: 0, -1000000)
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for drbd_disk:1
Oct 30 17:24:47 server2 pengine: [2323]: info: native_color: Resource drbd_disk:1 cannot run anywhere
Oct 30 17:24:47 server2 pengine: [2323]: debug: clone_color: Allocated 1 ms_drbd instances of a possible 2
Oct 30 17:24:47 server2 pengine: [2323]: info: rsc_merge_weights: ms_drbd: Breaking dependency loop at ms_drbd
Oct 30 17:24:47 pengine: last message repeated 3 times
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_color: drbd_disk:0 master score: 10000
Oct 30 17:24:47 server2 pengine: [2323]: info: master_color: Promoting drbd_disk:0 (Slave server2)
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_color: drbd_disk:1 master score: 0
Oct 30 17:24:47 server2 pengine: [2323]: info: master_color: ms_drbd: Promoted 1 instances of a possible 1 to master
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to fs_drbd
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_color: drbd_disk:0 master score: 10000
Oct 30 17:24:47 server2 pengine: [2323]: info: master_color: Promoting drbd_disk:0 (Slave server2)
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_color: drbd_disk:1 master score: 0
Oct 30 17:24:47 server2 pengine: [2323]: info: master_color: ms_drbd: Promoted 1 instances of a possible 1 to master
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to ping:0
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: All nodes for resource ping:1 are unavailable, unclean or shutting down (server1: 0, -1000000)
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for ping:1
Oct 30 17:24:47 server2 pengine: [2323]: info: native_color: Resource ping:1 cannot run anywhere
Oct 30 17:24:47 server2 pengine: [2323]: debug: clone_color: Allocated 1 pingclone instances of a possible 2
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_color: Escalating allocation of mysql_service to its parent: mysql_group
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to mysql_ip
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to mysql_service
Oct 30 17:24:47 server2 pengine: [2323]: debug: clone_color: Dependency loop detected involving ms_drbd_mysql
Oct 30 17:24:47 server2 pengine: [2323]: info: rsc_merge_weights: ms_drbd_mysql: Breaking dependency loop at ms_drbd_mysql
Oct 30 17:24:47 pengine: last message repeated 3 times
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_color: drbd_mysql_disk:0 master score: 0
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_color: drbd_mysql_disk:1 master score: 0
Oct 30 17:24:47 server2 pengine: [2323]: info: master_color: ms_drbd_mysql: Promoted 0 instances of a possible 1 to master
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to fs_drbd_mysql
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_color: Escalating allocation of mysql_ip to its parent: mysql_group
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: All nodes for resource drbd_mysql_disk:0 are unavailable, unclean or shutting down (server1: 0, -1000000)
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for drbd_mysql_disk:0
Oct 30 17:24:47 server2 pengine: [2323]: info: native_color: Resource drbd_mysql_disk:0 cannot run anywhere
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_color: Escalating allocation of mysql_service to its parent: mysql_group
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_color: Escalating allocation of mysql_ip to its parent: mysql_group
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: All nodes for resource drbd_mysql_disk:1 are unavailable, unclean or shutting down (server1: 0, -1000000)
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for drbd_mysql_disk:1
Oct 30 17:24:47 server2 pengine: [2323]: info: native_color: Resource drbd_mysql_disk:1 cannot run anywhere
Oct 30 17:24:47 server2 pengine: [2323]: debug: clone_color: Allocated 0 ms_drbd_mysql instances of a possible 2
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_color: drbd_mysql_disk:0 master score: 0
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_color: drbd_mysql_disk:1 master score: 0
Oct 30 17:24:47 server2 pengine: [2323]: info: master_color: ms_drbd_mysql: Promoted 0 instances of a possible 1 to master
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_color: Escalating allocation of pgsql_ip to its parent: pgsql_group
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to pgsql_ip
Oct 30 17:24:47 server2 pengine: [2323]: debug: clone_color: Dependency loop detected involving ms_drbd_pgsql
Oct 30 17:24:47 server2 pengine: [2323]: info: rsc_merge_weights: ms_drbd_pgsql: Breaking dependency loop at ms_drbd_pgsql
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_color: drbd_pgsql_disk:0 master score: 0
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_color: drbd_pgsql_disk:1 master score: 0
Oct 30 17:24:47 server2 pengine: [2323]: info: master_color: ms_drbd_pgsql: Promoted 0 instances of a possible 1 to master
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to fs_drbd_pgsql
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: All nodes for resource drbd_pgsql_disk:0 are unavailable, unclean or shutting down (server1: 0, -1000000)
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for drbd_pgsql_disk:0
Oct 30 17:24:47 server2 pengine: [2323]: info: native_color: Resource drbd_pgsql_disk:0 cannot run anywhere
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_color: Escalating allocation of pgsql_ip to its parent: pgsql_group
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: All nodes for resource drbd_pgsql_disk:1 are unavailable, unclean or shutting down (server1: 0, -1000000)
Oct 30 17:24:47 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for drbd_pgsql_disk:1
Oct 30 17:24:47 server2 pengine: [2323]: info: native_color: Resource drbd_pgsql_disk:1 cannot run anywhere
Oct 30 17:24:47 server2 pengine: [2323]: debug: clone_color: Allocated 0 ms_drbd_pgsql instances of a possible 2
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_color: drbd_pgsql_disk:0 master score: 0
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_color: drbd_pgsql_disk:1 master score: 0
Oct 30 17:24:47 server2 pengine: [2323]: info: master_color: ms_drbd_pgsql: Promoted 0 instances of a possible 1 to master
Oct 30 17:24:47 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (30s) for apache_ip on server2
Oct 30 17:24:47 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (30s) for apache2 on server2
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_create_actions: Creating actions for ms_drbd
Oct 30 17:24:47 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (50s) for drbd_disk:0 on server2
Oct 30 17:24:47 server2 pengine: [2323]: info: RecurringOp: Cancelling action drbd_disk:0_monitor_60000 (Slave vs. Master)
Oct 30 17:24:47 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (50s) for drbd_disk:0 on server2
Oct 30 17:24:47 server2 pengine: [2323]: info: RecurringOp: Cancelling action drbd_disk:0_monitor_60000 (Slave vs. Master)
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_create_actions: Creating actions for ms_drbd_mysql
Oct 30 17:24:47 server2 pengine: [2323]: debug: master_create_actions: Creating actions for ms_drbd_pgsql
Oct 30 17:24:47 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (30s) for mysql_ip on server2
Oct 30 17:24:47 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (40s) for mysql_service on server2
Oct 30 17:24:47 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (30s) for pgsql_ip on server2
Oct 30 17:24:47 server2 pengine: [2323]: notice: stage6: Scheduling Node server1 for shutdown
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Start apache_ip#011(server2)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Start apache2#011(server2)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Start fs_drbd#011(server2)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Promote drbd_disk:0#011(Slave -> Master server2)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Demote drbd_disk:1#011(Master -> Stopped server1)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Leave ping:0#011(Started server2)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Stop ping:1#011(server1)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Leave drbd_mysql_disk:0#011(Stopped)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Leave drbd_mysql_disk:1#011(Stopped)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Leave drbd_pgsql_disk:0#011(Stopped)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Leave drbd_pgsql_disk:1#011(Stopped)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Start mysql_ip#011(server2)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Start mysql_service#011(server2)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Start fs_drbd_mysql#011(server2)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Start pgsql_ip#011(server2)
Oct 30 17:24:47 server2 pengine: [2323]: notice: LogActions: Start fs_drbd_pgsql#011(server2)
Oct 30 17:24:47 server2 crmd: [2324]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ]
Oct 30 17:24:47 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_LOG
Oct 30 17:24:47 server2 crmd: [2324]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Oct 30 17:24:47 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP
Oct 30 17:24:47 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP
Oct 30 17:24:47 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP
Oct 30 17:24:47 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_TE_INVOKE
Oct 30 17:24:47 server2 crmd: [2324]: info: unpack_graph: Unpacked transition 6: 46 actions in 46 synapses
Oct 30 17:24:47 server2 crmd: [2324]: info: do_te_invoke: Processing graph 6 (ref=pe_calc-dc-1319991887-65) derived from /var/lib/pengine/pe-input-327.bz2
Oct 30 17:24:47 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 14 fired and confirmed
Oct 30 17:24:47 server2 crmd: [2324]: info: te_rsc_command: Initiating action 4: cancel drbd_disk:0_monitor_60000 on server2 (local)
Oct 30 17:24:47 server2 crmd: [2324]: debug: do_lrm_invoke: PE requested op drbd_disk:0_monitor_60000 (call=NA) be cancelled
Oct 30 17:24:47 server2 pengine: [2323]: info: process_pe_message: Transition 6: PEngine Input stored in: /var/lib/pengine/pe-input-327.bz2
Oct 30 17:24:47 server2 crmd: [2324]: debug: cancel_op: Scheduling drbd_disk:0:34 for removal
Oct 30 17:24:47 server2 crmd: [2324]: debug: cancel_op: Cancelling op 34 for drbd_disk:0 (drbd_disk:0:34)
Oct 30 17:24:47 server2 lrmd: [2321]: info: cancel_op: operation monitor[34] on ocf::drbd::drbd_disk:0 for client 2324, its parameters: drbd_resource=[disk0] CRM_meta_role=[Slave] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_start_resource=[drbd_disk:0 ] CRM_meta_notify_inactive_resource=[drbd_disk:0 ] CRM_meta_notify_master_uname=[server1 ] CRM_meta_timeout=[30000] CRM_meta_name=[monitor] CRM_meta_notify_demote_resource=[ ] CRM_meta_notify_promote_uname=[ ] crm_feature_set=[3.0.5] CRM_meta_notify=[true] CRM_meta_notify_start_uname=[server2 ] CRM_meta_clo cancelled
Oct 30 17:24:47 server2 lrmd: [2321]: debug: on_msg_cancel_op: operation 34 cancelled
Oct 30 17:24:47 server2 crmd: [2324]: debug: cancel_op: Op 34 for drbd_disk:0 (drbd_disk:0:34): cancelled
Oct 30 17:24:47 server2 crmd: [2324]: debug: create_operation_update: send_direct_ack: Updating resouce drbd_disk:0 after complete monitor op (interval=60000)
Oct 30 17:24:47 server2 crmd: [2324]: info: send_direct_ack: ACK'ing resource op drbd_disk:0_monitor_60000 from 4:6:0:b2aab0af-262c-445f-ad7e-b6c20cd36732: lrm_invoke-lrmd-1319991887-67
Oct 30 17:24:47 server2 crmd: [2324]: info: process_te_message: Processing (N)ACK lrm_invoke-lrmd-1319991887-67 from server2
Oct 30 17:24:47 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:0_monitor_60000 (4) confirmed on server2 (rc=0)
Oct 30 17:24:47 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
Oct 30 17:24:47 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 54 fired and confirmed
Oct 30 17:24:47 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 109 fired and confirmed
Oct 30 17:24:47 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 116 fired and confirmed
Oct 30 17:24:47 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=0, Pending=0, Fired=6, Skipped=0, Incomplete=40, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:47 server2 crmd: [2324]: debug: delete_op_entry: async: Sending delete op for drbd_disk:0_monitor_60000 (call=34)
Oct 30 17:24:47 server2 crmd: [2324]: info: process_lrm_event: LRM operation drbd_disk:0_monitor_60000 (call=34, status=1, cib-update=0, confirmed=true) Cancelled
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 crmd: [2324]: info: te_rsc_command: Initiating action 145: notify drbd_disk:0_pre_notify_demote_0 on server2 (local)
Oct 30 17:24:47 server2 crmd: [2324]: info: do_lrm_rsc_op: Performing key=145:6:0:b2aab0af-262c-445f-ad7e-b6c20cd36732 op=drbd_disk:0_notify_0 )
Oct 30 17:24:47 server2 lrmd: [2321]: debug: on_msg_perform_op: add an operation operation notify[35] on ocf::drbd::drbd_disk:0 for client 2324, its parameters: drbd_resource=[disk0] CRM_meta_notify_stop_resource=[drbd_disk:1 ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_inactive_resource=[ ] CRM_meta_notify_master_uname=[server1 ] CRM_meta_notify_promote_uname=[server2 ] CRM_meta_timeout=[20000] CRM_meta_notify_demote_resource=[drbd_disk:1 ] CRM_meta_notify_start_uname=[ ] CRM_meta_clone_node_max=[1] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[drbd_disk:0 ] CRM_met to the operation list.
Oct 30 17:24:47 server2 lrmd: [2321]: info: rsc:drbd_disk:0:35: notify
Oct 30 17:24:47 server2 lrmd: [4195]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
Oct 30 17:24:47 server2 crmd: [2324]: info: te_rsc_command: Initiating action 147: notify drbd_disk:1_pre_notify_demote_0 on server1
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 crmd: [2324]: info: te_rsc_command: Initiating action 51: stop ping:1_stop_0 on server1
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering 17a to 17c
Oct 30 17:24:47 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=6, Pending=3, Fired=3, Skipped=0, Incomplete=37, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 17b to pending delivery queue
Oct 30 17:24:47 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_delete): 0.161.23 -> 0.161.24 (S_TRANSITION_ENGINE)
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 17c to pending delivery queue
Oct 30 17:24:47 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 17b
Oct 30 17:24:47 server2 crmd: [2324]: debug: te_update_diff: Deleted lrm_rsc_op drbd_disk:0_monitor_60000 on server2 was for graph event 4
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 17c
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 17c
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering 17c to 17e
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 17d to pending delivery queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 17e to pending delivery queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 17d
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 17e
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 17e
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering 17e to 180
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 17f to pending delivery queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 180 to pending delivery queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 17f
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 180
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 180
Oct 30 17:24:47 server2 lrmd: [2321]: info: Managed drbd_disk:0:notify process 4195 exited with return code 0.
Oct 30 17:24:47 server2 crmd: [2324]: debug: create_operation_update: do_update_resource: Updating resouce drbd_disk:0 after complete notify op (interval=0)
Oct 30 17:24:47 server2 crmd: [2324]: info: process_lrm_event: LRM operation drbd_disk:0_notify_0 (call=35, rc=0, cib-update=96, confirmed=true) ok
Oct 30 17:24:47 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.24 -> 0.161.25 (S_TRANSITION_ENGINE)
Oct 30 17:24:47 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 30 17:24:47 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:0_pre_notify_demote_0 (145) confirmed on server2 (rc=0)
Oct 30 17:24:47 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=7, Pending=2, Fired=0, Skipped=0, Incomplete=37, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 182
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering 180 to 182
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 181
Oct 30 17:24:47 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.25 -> 0.161.26 (S_TRANSITION_ENGINE)
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering 180 to 182
Oct 30 17:24:47 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 181 to pending delivery queue
Oct 30 17:24:47 server2 crmd: [2324]: info: match_graph_event: Action ping:1_stop_0 (51) confirmed on server1 (rc=0)
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 182 to pending delivery queue
Oct 30 17:24:47 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 55 fired and confirmed
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=8, Pending=1, Fired=1, Skipped=0, Incomplete=36, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering 182 to 185
Oct 30 17:24:47 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=9, Pending=1, Fired=0, Skipped=0, Incomplete=36, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 183 to pending delivery queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 184 to pending delivery queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 185 to pending delivery queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 183
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 184
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 185
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:47 corosync[2311]: last message repeated 2 times
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 182
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering 185 to 188
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 186 to pending delivery queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 187 to pending delivery queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 188 to pending delivery queue
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 187
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 186
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 188
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 185
Oct 30 17:24:47 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 188
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 189
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 188 to 189
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 189 to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 18a
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 189 to 18a
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 18a to pending delivery queue
Oct 30 17:24:48 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.26 -> 0.161.27 (S_TRANSITION_ENGINE)
Oct 30 17:24:48 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 30 17:24:48 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:1_pre_notify_demote_0 (147) confirmed on server1 (rc=0)
Oct 30 17:24:48 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
Oct 30 17:24:48 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 43 fired and confirmed
Oct 30 17:24:48 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=10, Pending=0, Fired=2, Skipped=0, Incomplete=34, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 18a
Oct 30 17:24:48 server2 crmd: [2324]: info: te_rsc_command: Initiating action 23: demote drbd_disk:1_demote_0 on server1
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=12, Pending=1, Fired=1, Skipped=0, Incomplete=33, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 18a to 18d
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 18b to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 18c to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 18d to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 18b
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 18c
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 18d
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 18d
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 18d to 18f
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 18e to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 18f to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 18f
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 18e
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 18f
Oct 30 17:24:48 server2 kernel: [ 1139.254473] block drbd0: peer( Primary -> Secondary )
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 190
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 18f to 190
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 190 to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 191
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 190 to 191
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 191 to pending delivery queue
Oct 30 17:24:48 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.27 -> 0.161.28 (S_TRANSITION_ENGINE)
Oct 30 17:24:48 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 30 17:24:48 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:1_demote_0 (23) confirmed on server1 (rc=0)
Oct 30 17:24:48 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
Oct 30 17:24:48 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=13, Pending=0, Fired=1, Skipped=0, Incomplete=32, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:48 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=14, Pending=0, Fired=1, Skipped=0, Incomplete=31, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 crmd: [2324]: info: te_rsc_command: Initiating action 146: notify drbd_disk:0_post_notify_demote_0 on server2 (local)
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 crmd: [2324]: info: do_lrm_rsc_op: Performing key=146:6:0:b2aab0af-262c-445f-ad7e-b6c20cd36732 op=drbd_disk:0_notify_0 )
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 191
Oct 30 17:24:48 server2 lrmd: [2321]: debug: on_msg_perform_op: add an operation operation notify[36] on ocf::drbd::drbd_disk:0 for client 2324, its parameters: drbd_resource=[disk0] CRM_meta_notify_stop_resource=[drbd_disk:1 ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_inactive_resource=[ ] CRM_meta_notify_master_uname=[server1 ] CRM_meta_notify_promote_uname=[server2 ] CRM_meta_timeout=[20000] CRM_meta_notify_demote_resource=[drbd_disk:1 ] CRM_meta_notify_start_uname=[ ] CRM_meta_clone_node_max=[1] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[drbd_disk:0 ] CRM_met to the operation list.
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 191 to 194
Oct 30 17:24:48 server2 lrmd: [2321]: info: rsc:drbd_disk:0:36: notify
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 192 to pending delivery queue
Oct 30 17:24:48 server2 lrmd: [4218]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
Oct 30 17:24:48 server2 crmd: [2324]: info: te_rsc_command: Initiating action 148: notify drbd_disk:1_post_notify_demote_0 on server1
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 193 to pending delivery queue
Oct 30 17:24:48 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=15, Pending=2, Fired=2, Skipped=0, Incomplete=29, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 194 to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 193
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 192
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 194
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 194
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 194 to 196
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 195 to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 196 to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 195
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 196
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 196
Oct 30 17:24:48 server2 drbd[4218]: DEBUG: disk0: Calling /usr/sbin/crm_master -Q -l reboot -v 10000
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 197
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 196 to 197
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 197 to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 198
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 197 to 198
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 198 to pending delivery queue
Oct 30 17:24:48 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.28 -> 0.161.29 (S_TRANSITION_ENGINE)
Oct 30 17:24:48 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 30 17:24:48 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:1_post_notify_demote_0 (148) confirmed on server1 (rc=0)
Oct 30 17:24:48 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=16, Pending=1, Fired=0, Skipped=0, Incomplete=29, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 198
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 corosync[2311]: last message repeated 2 times
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 198 to 19b
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 199 to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 19a to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 19b to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 199
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 19a
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 19b
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 19b
Oct 30 17:24:48 server2 crm_attribute: [4249]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw
Oct 30 17:24:48 server2 crm_attribute: [4249]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback
Oct 30 17:24:48 server2 crm_attribute: [4249]: debug: cib_native_signon_raw: Connection to CIB successful
Oct 30 17:24:48 server2 crm_attribute: [4249]: debug: query_node_uuidResult section
Oct 30 17:24:48 server2 crm_attribute: [4249]: debug: query_node_uuidResult section
Oct 30 17:24:48 server2 crm_attribute: [4249]: debug: query_node_uuidResult section
Oct 30 17:24:48 server2 crm_attribute: [4249]: debug: query_node_uuidResult section
Oct 30 17:24:48 server2 crm_attribute: [4249]: info: determine_host: Mapped server2 to server2
Oct 30 17:24:48 server2 crm_attribute: [4249]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining
Oct 30 17:24:48 server2 crm_attribute: [4249]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd
Oct 30 17:24:48 server2 attrd: [2322]: debug: attrd_local_callback: update message from crm_attribute: master-drbd_disk:0=10000
Oct 30 17:24:48 server2 attrd: [2322]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000
Oct 30 17:24:48 server2 crm_attribute: [4249]: debug: attrd_update_delegate: Sent update: master-drbd_disk:0=10000 for server2
Oct 30 17:24:48 server2 crm_attribute: [4249]: info: main: Update master-drbd_disk:0=10000 sent via attrd
Oct 30 17:24:48 server2 crm_attribute: [4249]: debug: cib_native_signoff: Signing out of the CIB Service
Oct 30 17:24:48 server2 crm_attribute: [4249]: info: crm_xml_cleanup: Cleaning up memory from libxml2
Oct 30 17:24:48 server2 drbd[4218]: DEBUG: disk0: Exit code 0
Oct 30 17:24:48 server2 drbd[4218]: DEBUG: disk0: Command output:
Oct 30 17:24:48 server2 lrmd: [2321]: info: RA output: (drbd_disk:0:notify:stdout)
Oct 30 17:24:48 server2 lrmd: [2321]: info: Managed drbd_disk:0:notify process 4218 exited with return code 0.
Oct 30 17:24:48 server2 crmd: [2324]: debug: create_operation_update: do_update_resource: Updating resouce drbd_disk:0 after complete notify op (interval=0)
Oct 30 17:24:48 server2 crmd: [2324]: info: process_lrm_event: LRM operation drbd_disk:0_notify_0 (call=36, rc=0, cib-update=97, confirmed=true) ok
Oct 30 17:24:48 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.29 -> 0.161.30 (S_TRANSITION_ENGINE)
Oct 30 17:24:48 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 30 17:24:48 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:0_post_notify_demote_0 (146) confirmed on server2 (rc=0)
Oct 30 17:24:48 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
Oct 30 17:24:48 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 33 fired and confirmed
Oct 30 17:24:48 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=17, Pending=0, Fired=2, Skipped=0, Incomplete=27, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:48 server2 crmd: [2324]: info: te_rsc_command: Initiating action 140: notify drbd_disk:0_pre_notify_stop_0 on server2 (local)
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 crmd: [2324]: info: do_lrm_rsc_op: Performing key=140:6:0:b2aab0af-262c-445f-ad7e-b6c20cd36732 op=drbd_disk:0_notify_0 )
Oct 30 17:24:48 server2 lrmd: [2321]: debug: on_msg_perform_op: add an operation operation notify[37] on ocf::drbd::drbd_disk:0 for client 2324, its parameters: drbd_resource=[disk0] CRM_meta_notify_stop_resource=[drbd_disk:1 ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_inactive_resource=[ ] CRM_meta_notify_master_uname=[server1 ] CRM_meta_notify_promote_uname=[server2 ] CRM_meta_timeout=[20000] CRM_meta_notify_demote_resource=[drbd_disk:1 ] CRM_meta_notify_start_uname=[ ] CRM_meta_clone_node_max=[1] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[drbd_disk:0 ] CRM_met to the operation list.
Oct 30 17:24:48 server2 lrmd: [2321]: info: rsc:drbd_disk:0:37: notify
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 lrmd: [4258]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER
Oct 30 17:24:48 server2 crmd: [2324]: info: te_rsc_command: Initiating action 142: notify drbd_disk:1_pre_notify_stop_0 on server1
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 19b to 19e
Oct 30 17:24:48 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=19, Pending=2, Fired=2, Skipped=0, Incomplete=25, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 19c to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 19d to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 19e to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 19c
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 19d
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 19e
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 19e
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering 19e to 1a0
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 19f to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1a0 to pending delivery queue
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1a0
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 19f
Oct 30 17:24:48 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1a0
Oct 30 17:24:49 server2 lrmd: [2321]: info: Managed drbd_disk:0:notify process 4258 exited with return code 0.
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1a1
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering 1a0 to 1a1
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1a1 to pending delivery queue
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1a2
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering 1a1 to 1a2
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1a2 to pending delivery queue
Oct 30 17:24:49 server2 crmd: [2324]: debug: create_operation_update: do_update_resource: Updating resouce drbd_disk:0 after complete notify op (interval=0)
Oct 30 17:24:49 server2 crmd: [2324]: info: process_lrm_event: LRM operation drbd_disk:0_notify_0 (call=37, rc=0, cib-update=98, confirmed=true) ok
Oct 30 17:24:49 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.30 -> 0.161.31 (S_TRANSITION_ENGINE)
Oct 30 17:24:49 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 30 17:24:49 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:1_pre_notify_stop_0 (142) confirmed on server1 (rc=0)
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1a2
Oct 30 17:24:49 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.31 -> 0.161.32 (S_TRANSITION_ENGINE)
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:49 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:49 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:0_pre_notify_stop_0 (140) confirmed on server2 (rc=0)
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:49 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 34 fired and confirmed
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering 1a2 to 1a5
Oct 30 17:24:49 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 31 fired and confirmed
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1a3 to pending delivery queue
Oct 30 17:24:49 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=21, Pending=0, Fired=2, Skipped=0, Incomplete=23, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1a4 to pending delivery queue
Oct 30 17:24:49 server2 crmd: [2324]: info: te_rsc_command: Initiating action 24: stop drbd_disk:1_stop_0 on server1
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1a5 to pending delivery queue
Oct 30 17:24:49 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=23, Pending=1, Fired=1, Skipped=0, Incomplete=22, Source=/var/lib/pengine/pe-input-327.bz2): In-progress
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1a4
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1a3
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1a5
Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue
Oct 30 17:24:49 corosync[2311]: last message repeated 3 times
repeated 3 times Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1a5 Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering 1a5 to 1a9 Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1a6 to pending delivery queue Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1a7 to pending delivery queue Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1a8 to pending delivery queue Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1a9 to pending delivery queue Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1a6 Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1a7 Oct 30 17:24:49 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1a8 Oct 30 17:24:49 server2 rsyslogd-2177: imuxsock begins to drop messages from pid 2311 due to rate-limiting Oct 30 17:24:49 server2 kernel: [ 1140.309089] block drbd0: peer( Secondary -> Unknown ) conn( Connected -> TearDown ) pdsk( UpToDate -> DUnknown ) Oct 30 17:24:49 server2 kernel: [ 1140.313250] block drbd0: asender terminated Oct 30 17:24:49 server2 kernel: [ 1140.313254] block drbd0: Terminating drbd0_asender Oct 30 17:24:49 server2 kernel: [ 1140.313805] block drbd0: Connection closed Oct 30 17:24:49 server2 kernel: [ 1140.313810] block drbd0: conn( TearDown -> Unconnected ) Oct 30 17:24:49 server2 kernel: [ 1140.313814] block drbd0: receiver terminated Oct 30 17:24:49 server2 kernel: [ 1140.313816] block drbd0: Restarting drbd0_receiver Oct 30 17:24:49 server2 kernel: [ 1140.313818] block drbd0: receiver (re)started Oct 30 17:24:49 server2 kernel: [ 1140.313821] block drbd0: conn( Unconnected -> WFConnection ) Oct 30 17:24:49 server2 cib: [2320]: debug: cib_process_xpath: cib_query: //cib/status//node_state[@id='server2']//transient_attributes//nvpair[@name='master-drbd_disk:1'] does not exist Oct 30 17:24:49 server2 attrd: [2322]: debug: attrd_cib_callback: Update -22 for master-drbd_disk:1=(null) passed Oct 30 17:24:49 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_delete): 0.161.32 -> 0.161.33 (S_TRANSITION_ENGINE) Oct 30 17:24:49 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:49 server2 crmd: [2324]: info: abort_transition_graph: te_update_diff:163 - Triggered transition abort (complete=0, tag=transient_attributes, id=server1, magic=NA, cib=0.161.33) : Transient attribute: removal Oct 30 17:24:49 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:49 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:49 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:49 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:49 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:49 server2 crmd: [2324]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000 Oct 30 17:24:49 server2 crmd: [2324]: info: update_abort_priority: Abort action done superceeded by restart Oct 30 17:24:49 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_delete): 0.161.33 -> 0.161.34 (S_TRANSITION_ENGINE) Oct 30 17:24:49 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:49 server2 crmd: 
[2324]: debug: run_graph: Transition 6 (Complete=23, Pending=1, Fired=0, Skipped=14, Incomplete=8, Source=/var/lib/pengine/pe-input-327.bz2): In-progress Oct 30 17:24:49 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.34 -> 0.161.35 (S_TRANSITION_ENGINE) Oct 30 17:24:49 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:49 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:1_stop_0 (24) confirmed on server1 (rc=0) Oct 30 17:24:49 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 32 fired and confirmed Oct 30 17:24:49 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=24, Pending=0, Fired=1, Skipped=14, Incomplete=7, Source=/var/lib/pengine/pe-input-327.bz2): In-progress Oct 30 17:24:49 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 35 fired and confirmed Oct 30 17:24:49 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=25, Pending=0, Fired=1, Skipped=14, Incomplete=6, Source=/var/lib/pengine/pe-input-327.bz2): In-progress Oct 30 17:24:49 server2 crmd: [2324]: info: te_rsc_command: Initiating action 141: notify drbd_disk:0_post_notify_stop_0 on server2 (local) Oct 30 17:24:49 server2 crmd: [2324]: info: do_lrm_rsc_op: Performing key=141:6:0:b2aab0af-262c-445f-ad7e-b6c20cd36732 op=drbd_disk:0_notify_0 ) Oct 30 17:24:49 server2 lrmd: [2321]: debug: on_msg_perform_op: add an operation operation notify[38] on ocf::drbd::drbd_disk:0 for client 2324, its parameters: drbd_resource=[disk0] CRM_meta_notify_stop_resource=[drbd_disk:1 ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_inactive_resource=[ ] CRM_meta_notify_master_uname=[server1 ] CRM_meta_notify_promote_uname=[server2 ] CRM_meta_timeout=[20000] CRM_meta_notify_demote_resource=[drbd_disk:1 ] CRM_meta_notify_start_uname=[ ] CRM_meta_clone_node_max=[1] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[drbd_disk:0 ] CRM_met to the operation list. Oct 30 17:24:49 server2 lrmd: [2321]: info: rsc:drbd_disk:0:38: notify Oct 30 17:24:49 server2 lrmd: [4281]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER Oct 30 17:24:49 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=26, Pending=1, Fired=1, Skipped=14, Incomplete=5, Source=/var/lib/pengine/pe-input-327.bz2): In-progress Oct 30 17:24:49 server2 drbd[4281]: DEBUG: disk0: Calling /usr/sbin/crm_master -Q -l reboot -v 1000 Oct 30 17:24:49 server2 crm_attribute: [4312]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw Oct 30 17:24:49 server2 crm_attribute: [4312]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback Oct 30 17:24:49 server2 crm_attribute: [4312]: debug: cib_native_signon_raw: Connection to CIB successful Oct 30 17:24:49 server2 crm_attribute: [4312]: debug: query_node_uuidResult section Oct 30 17:24:49 server2 crm_attribute: [4312]: debug: query_node_uuidResult section Oct 30 17:24:49 server2 crm_attribute: [4312]: debug: query_node_uuidResult section Oct 30 17:24:49 server2 crm_attribute: [4312]: debug: query_node_uuidResult section Oct 30 17:24:49 server2 crm_attribute: [4312]: info: determine_host: Mapped server2 to server2 Oct 30 17:24:49 server2 crm_attribute: [4312]: info: attrd_lazy_update: Connecting to cluster... 
5 retries remaining Oct 30 17:24:49 server2 crm_attribute: [4312]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd Oct 30 17:24:49 server2 attrd: [2322]: debug: attrd_local_callback: update message from crm_attribute: master-drbd_disk:0=1000 Oct 30 17:24:49 server2 attrd: [2322]: debug: attrd_local_callback: Supplied: 1000, Current: 10000, Stored: 10000 Oct 30 17:24:49 server2 attrd: [2322]: debug: attrd_local_callback: New value of master-drbd_disk:0 is 1000 Oct 30 17:24:49 server2 attrd: [2322]: info: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_disk:0 (1000) Oct 30 17:24:49 server2 cib: [2320]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='server2']//transient_attributes//nvpair[@name='master-drbd_disk:0'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2]) Oct 30 17:24:49 server2 attrd: [2322]: debug: find_nvpair_attr_delegateMatch Oct 30 17:24:49 server2 attrd: [2322]: info: attrd_perform_update: Sent update 86: master-drbd_disk:0=1000 Oct 30 17:24:49 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.35 -> 0.161.36 (S_TRANSITION_ENGINE) Oct 30 17:24:49 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:49 server2 crmd: [2324]: info: abort_transition_graph: te_update_diff:149 - Triggered transition abort (complete=0, tag=nvpair, id=status-server2-master-drbd_disk:0, magic=NA, cib=0.161.36) : Transient attribute: update Oct 30 17:24:49 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:49 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=26, Pending=1, Fired=0, Skipped=14, Incomplete=5, Source=/var/lib/pengine/pe-input-327.bz2): In-progress Oct 30 17:24:49 server2 attrd: [2322]: debug: attrd_cib_callback: Update 86 for master-drbd_disk:0=1000 passed Oct 30 17:24:49 server2 crm_attribute: [4312]: debug: attrd_update_delegate: Sent update: master-drbd_disk:0=1000 for server2 Oct 30 17:24:49 server2 crm_attribute: [4312]: info: main: Update master-drbd_disk:0=1000 sent via attrd Oct 30 17:24:49 server2 crm_attribute: [4312]: debug: cib_native_signoff: Signing out of the CIB Service Oct 30 17:24:49 server2 crm_attribute: [4312]: info: crm_xml_cleanup: Cleaning up memory from libxml2 Oct 30 17:24:49 server2 drbd[4281]: DEBUG: disk0: Exit code 0 Oct 30 17:24:49 server2 drbd[4281]: DEBUG: disk0: Command output: Oct 30 17:24:49 server2 lrmd: [2321]: info: RA output: (drbd_disk:0:notify:stdout) Oct 30 17:24:49 server2 lrmd: [2321]: info: Managed drbd_disk:0:notify process 4281 exited with return code 0. 
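The /usr/sbin/crm_master -Q -l reboot -v 1000 call logged just above is how the drbd resource agent advertises its promotion preference: crm_master is a thin wrapper around crm_attribute that writes a transient ("reboot"-lifetime) node attribute named master-<instance>, which is exactly the master-drbd_disk:0=1000 update attrd then records and pushes into the CIB. A rough hand-run equivalent, using the node and attribute names from this log, would be the sketch below (illustration only, not quoted from the cluster):

    # Query the master score the agent just set for drbd_disk:0 on server2
    crm_attribute -N server2 -n master-drbd_disk:0 -l reboot -G

    # Set it by hand the same way the agent does; inside the RA this is just
    # "crm_master -Q -l reboot -v 1000", with the attribute name filled in
    # from $OCF_RESOURCE_INSTANCE
    crm_attribute -N server2 -n master-drbd_disk:0 -l reboot -v 1000
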
Oct 30 17:24:49 server2 crmd: [2324]: debug: create_operation_update: do_update_resource: Updating resouce drbd_disk:0 after complete notify op (interval=0) Oct 30 17:24:49 server2 crmd: [2324]: info: process_lrm_event: LRM operation drbd_disk:0_notify_0 (call=38, rc=0, cib-update=99, confirmed=true) ok Oct 30 17:24:49 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.36 -> 0.161.37 (S_TRANSITION_ENGINE) Oct 30 17:24:49 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:49 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:0_post_notify_stop_0 (141) confirmed on server2 (rc=0) Oct 30 17:24:49 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 36 fired and confirmed Oct 30 17:24:49 server2 crmd: [2324]: debug: run_graph: Transition 6 (Complete=27, Pending=0, Fired=1, Skipped=14, Incomplete=4, Source=/var/lib/pengine/pe-input-327.bz2): In-progress Oct 30 17:24:49 server2 crmd: [2324]: info: run_graph: ==================================================== Oct 30 17:24:49 server2 crmd: [2324]: notice: run_graph: Transition 6 (Complete=28, Pending=0, Fired=0, Skipped=14, Incomplete=4, Source=/var/lib/pengine/pe-input-327.bz2): Stopped Oct 30 17:24:49 server2 crmd: [2324]: info: te_graph_trigger: Transition 6 is now complete Oct 30 17:24:49 server2 crmd: [2324]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE Oct 30 17:24:49 server2 crmd: [2324]: debug: notify_crmd: Transition 6 status: restart - Transient attribute: removal Oct 30 17:24:49 server2 crmd: [2324]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ] Oct 30 17:24:49 server2 crmd: [2324]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ] Oct 30 17:24:49 server2 crmd: [2324]: info: do_state_transition: All 2 cluster nodes are eligible to run resources. 
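Each run_graph line names the policy-engine input it was computed from, here /var/lib/pengine/pe-input-327.bz2 for transition 6. Those files are complete CIB snapshots and can be replayed offline to see why the PE scheduled exactly these actions; on Pacemaker builds that ship crm_simulate (older installs have ptest instead) that looks roughly like this sketch:

    # Replay the saved PE input for transition 6 and show the scheduled actions
    crm_simulate -S -x /var/lib/pengine/pe-input-327.bz2

    # -s additionally prints the allocation and promotion scores the PE used
    crm_simulate -s -x /var/lib/pengine/pe-input-327.bz2
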
Oct 30 17:24:49 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP Oct 30 17:24:49 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP Oct 30 17:24:49 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP Oct 30 17:24:49 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_PE_INVOKE Oct 30 17:24:49 server2 crmd: [2324]: info: do_pe_invoke: Query 100: Requesting the current CIB: S_POLICY_ENGINE Oct 30 17:24:49 server2 crmd: [2324]: info: do_pe_invoke_callback: Invoking the PE: query=100, ref=pe_calc-dc-1319991889-78, seq=396, quorate=1 Oct 30 17:24:50 server2 pengine: [2323]: info: unpack_config: Startup probes: enabled Oct 30 17:24:50 server2 pengine: [2323]: debug: unpack_config: STONITH timeout: 60000 Oct 30 17:24:50 server2 pengine: [2323]: debug: unpack_config: STONITH of failed nodes is disabled Oct 30 17:24:50 server2 pengine: [2323]: debug: unpack_config: Stop all active resources: false Oct 30 17:24:50 server2 pengine: [2323]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default Oct 30 17:24:50 server2 pengine: [2323]: debug: unpack_config: Default stickiness: 0 Oct 30 17:24:50 server2 pengine: [2323]: notice: unpack_config: On loss of CCM Quorum: Ignore Oct 30 17:24:50 server2 pengine: [2323]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0 Oct 30 17:24:50 server2 pengine: [2323]: info: unpack_domains: Unpacking domains Oct 30 17:24:50 server2 pengine: [2323]: info: determine_online_status: Node server1 is shutting down Oct 30 17:24:50 server2 pengine: [2323]: info: determine_online_status: Node server2 is online Oct 30 17:24:50 server2 pengine: [2323]: debug: get_node_score: Rule drdb_on_connected_node-rule: node server2 had value 1000 for pingd Oct 30 17:24:50 server2 pengine: [2323]: debug: get_node_score: Rule drdb_on_connected_node-rule: node server1 had value 1000 for pingd Oct 30 17:24:50 server2 pengine: [2323]: debug: get_node_score: Rule drdb_on_connected_node-rule: node server1 had value 1000 for pingd Oct 30 17:24:50 server2 pengine: [2323]: debug: get_node_score: Rule drdb_on_connected_node-rule: node server2 had value 1000 for pingd Oct 30 17:24:50 server2 pengine: [2323]: notice: group_print: Resource Group: apache_group Oct 30 17:24:50 server2 pengine: [2323]: notice: native_print: apache_ip#011(ocf::heartbeat:IPaddr2):#011Stopped Oct 30 17:24:50 server2 pengine: [2323]: notice: native_print: apache2#011(ocf::heartbeat:apache):#011Stopped Oct 30 17:24:50 server2 pengine: [2323]: notice: native_print: fs_drbd#011(ocf::heartbeat:Filesystem):#011Stopped Oct 30 17:24:50 server2 pengine: [2323]: notice: clone_print: Master/Slave Set: ms_drbd [drbd_disk] Oct 30 17:24:50 server2 pengine: [2323]: debug: native_active: Resource drbd_disk:0 active on server2 Oct 30 17:24:50 server2 pengine: [2323]: debug: native_active: Resource drbd_disk:0 active on server2 Oct 30 17:24:50 server2 pengine: [2323]: notice: short_print: Slaves: [ server2 ] Oct 30 17:24:50 server2 pengine: [2323]: notice: short_print: Stopped: [ drbd_disk:1 ] Oct 30 17:24:50 server2 pengine: [2323]: notice: clone_print: Clone Set: pingclone [ping] Oct 30 17:24:50 server2 pengine: [2323]: debug: native_active: Resource ping:0 active on server2 Oct 30 17:24:50 server2 pengine: [2323]: debug: native_active: Resource ping:0 active on server2 Oct 30 17:24:50 server2 pengine: [2323]: notice: short_print: Started: [ server2 ] Oct 30 17:24:50 
server2 pengine: [2323]: notice: short_print: Stopped: [ ping:1 ] Oct 30 17:24:50 server2 pengine: [2323]: notice: clone_print: Master/Slave Set: ms_drbd_mysql [drbd_mysql_disk] Oct 30 17:24:50 server2 pengine: [2323]: notice: short_print: Stopped: [ drbd_mysql_disk:0 drbd_mysql_disk:1 ] Oct 30 17:24:50 server2 pengine: [2323]: notice: clone_print: Master/Slave Set: ms_drbd_pgsql [drbd_pgsql_disk] Oct 30 17:24:50 server2 pengine: [2323]: notice: short_print: Stopped: [ drbd_pgsql_disk:0 drbd_pgsql_disk:1 ] Oct 30 17:24:50 server2 pengine: [2323]: notice: group_print: Resource Group: mysql_group Oct 30 17:24:50 server2 pengine: [2323]: notice: native_print: mysql_ip#011(ocf::heartbeat:IPaddr2):#011Stopped Oct 30 17:24:50 server2 pengine: [2323]: notice: native_print: mysql_service#011(ocf::heartbeat:mysql):#011Stopped Oct 30 17:24:50 server2 pengine: [2323]: notice: native_print: fs_drbd_mysql#011(ocf::heartbeat:Filesystem):#011Stopped Oct 30 17:24:50 server2 pengine: [2323]: notice: group_print: Resource Group: pgsql_group Oct 30 17:24:50 server2 pengine: [2323]: notice: native_print: pgsql_ip#011(ocf::heartbeat:IPaddr2):#011Stopped Oct 30 17:24:50 server2 pengine: [2323]: notice: native_print: fs_drbd_pgsql#011(ocf::heartbeat:Filesystem):#011Stopped Oct 30 17:24:50 server2 pengine: [2323]: debug: common_apply_stickiness: Resource drbd_disk:0: preferring current location (node=server2, weight=100) Oct 30 17:24:50 server2 pengine: [2323]: debug: common_apply_stickiness: Resource ping:0: preferring current location (node=server2, weight=100) Oct 30 17:24:50 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to apache_ip Oct 30 17:24:50 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to apache2 Oct 30 17:24:50 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to drbd_disk:0 Oct 30 17:24:50 server2 pengine: [2323]: info: native_color: drbd_disk:1: Rolling back scores from apache_ip Oct 30 17:24:50 server2 pengine: [2323]: info: native_color: drbd_disk:1: Rolling back scores from apache2 Oct 30 17:24:50 server2 pengine: [2323]: debug: native_assign_node: All nodes for resource drbd_disk:1 are unavailable, unclean or shutting down (server1: 0, -1000000) Oct 30 17:24:50 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for drbd_disk:1 Oct 30 17:24:50 server2 pengine: [2323]: info: native_color: Resource drbd_disk:1 cannot run anywhere Oct 30 17:24:50 server2 pengine: [2323]: debug: clone_color: Allocated 1 ms_drbd instances of a possible 2 Oct 30 17:24:50 server2 pengine: [2323]: info: rsc_merge_weights: ms_drbd: Breaking dependency loop at ms_drbd Oct 30 17:24:50 server2 rsyslogd-2177: imuxsock begins to drop messages from pid 2323 due to rate-limiting Oct 30 17:24:50 server2 crmd: [2324]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ] Oct 30 17:24:50 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_LOG Oct 30 17:24:50 server2 crmd: [2324]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ] Oct 30 17:24:50 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP Oct 30 17:24:50 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP Oct 30 17:24:50 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP Oct 30 17:24:50 server2 crmd: [2324]: 
debug: do_fsa_action: actions:trace: #011// A_TE_INVOKE Oct 30 17:24:50 server2 crmd: [2324]: WARN: destroy_action: Cancelling timer for action 4 (src=129) Oct 30 17:24:50 server2 crmd: [2324]: info: unpack_graph: Unpacked transition 7: 20 actions in 20 synapses Oct 30 17:24:50 server2 crmd: [2324]: info: do_te_invoke: Processing graph 7 (ref=pe_calc-dc-1319991889-78) derived from /var/lib/pengine/pe-input-328.bz2 Oct 30 17:24:50 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 11 fired and confirmed Oct 30 17:24:50 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 34 fired and confirmed Oct 30 17:24:50 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 103 fired and confirmed Oct 30 17:24:50 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 110 fired and confirmed Oct 30 17:24:50 server2 crmd: [2324]: info: te_crm_command: Executing crm-event (116): do_shutdown on server1 Oct 30 17:24:50 server2 crmd: [2324]: debug: run_graph: Transition 7 (Complete=0, Pending=0, Fired=5, Skipped=0, Incomplete=15, Source=/var/lib/pengine/pe-input-328.bz2): In-progress Oct 30 17:24:50 server2 crmd: [2324]: info: te_rsc_command: Initiating action 137: notify drbd_disk:0_pre_notify_promote_0 on server2 (local) Oct 30 17:24:50 server2 crmd: [2324]: info: do_lrm_rsc_op: Performing key=137:7:0:b2aab0af-262c-445f-ad7e-b6c20cd36732 op=drbd_disk:0_notify_0 ) Oct 30 17:24:50 server2 lrmd: [2321]: debug: on_msg_perform_op: add an operation operation notify[39] on ocf::drbd::drbd_disk:0 for client 2324, its parameters: drbd_resource=[disk0] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_inactive_resource=[drbd_disk:1 ] CRM_meta_notify_master_uname=[ ] CRM_meta_notify_promote_uname=[server2 ] CRM_meta_timeout=[20000] CRM_meta_notify_demote_resource=[ ] CRM_meta_notify_start_uname=[ ] CRM_meta_clone_node_max=[1] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[drbd_disk:0 ] CRM_meta_notify_active_un to the operation list. Oct 30 17:24:50 server2 lrmd: [2321]: info: rsc:drbd_disk:0:39: notify Oct 30 17:24:50 server2 lrmd: [4321]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER Oct 30 17:24:50 server2 crmd: [2324]: debug: run_graph: Transition 7 (Complete=5, Pending=1, Fired=1, Skipped=0, Incomplete=14, Source=/var/lib/pengine/pe-input-328.bz2): In-progress Oct 30 17:24:50 server2 cib: [2320]: info: cib_process_shutdown_req: Shutdown REQ from server1 Oct 30 17:24:50 server2 cib: [2320]: info: cib_process_request: Operation complete: op cib_shutdown_req for section 'all' (origin=server1/server1/(null), version=0.161.37): ok (rc=0) Oct 30 17:24:50 server2 lrmd: [2321]: info: Managed drbd_disk:0:notify process 4321 exited with return code 0. 
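The "do_shutdown on server1" crm-event and the cib_shutdown_req that follows are the DC-side view of server1 leaving the cluster cleanly, i.e. the cluster stack being stopped on that node. If the goal is only to move resources off a node without stopping the stack, the usual intermediate step (assuming the crm shell is installed) is standby mode, for example:

    # Drain server1 without stopping the cluster stack on it
    crm node standby server1

    # Later, allow it to host resources again
    crm node online server1
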
Oct 30 17:24:50 server2 crmd: [2324]: debug: create_operation_update: do_update_resource: Updating resouce drbd_disk:0 after complete notify op (interval=0) Oct 30 17:24:50 server2 crmd: [2324]: info: process_lrm_event: LRM operation drbd_disk:0_notify_0 (call=39, rc=0, cib-update=101, confirmed=true) ok Oct 30 17:24:50 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.161.37 -> 0.161.38 (S_TRANSITION_ENGINE) Oct 30 17:24:50 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:50 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:0_pre_notify_promote_0 (137) confirmed on server2 (rc=0) Oct 30 17:24:50 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 35 fired and confirmed Oct 30 17:24:50 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 32 fired and confirmed Oct 30 17:24:50 server2 crmd: [2324]: debug: run_graph: Transition 7 (Complete=6, Pending=0, Fired=2, Skipped=0, Incomplete=12, Source=/var/lib/pengine/pe-input-328.bz2): In-progress Oct 30 17:24:50 server2 crmd: [2324]: info: te_rsc_command: Initiating action 18: promote drbd_disk:0_promote_0 on server2 (local) Oct 30 17:24:50 server2 crmd: [2324]: info: do_lrm_rsc_op: Performing key=18:7:0:b2aab0af-262c-445f-ad7e-b6c20cd36732 op=drbd_disk:0_promote_0 ) Oct 30 17:24:50 server2 lrmd: [2321]: debug: on_msg_perform_op: add an operation operation promote[40] on ocf::drbd::drbd_disk:0 for client 2324, its parameters: drbd_resource=[disk0] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_inactive_resource=[drbd_disk:1 ] CRM_meta_notify_master_uname=[ ] CRM_meta_notify_promote_uname=[server2 ] CRM_meta_timeout=[20000] CRM_meta_notify_demote_resource=[ ] CRM_meta_notify_start_uname=[ ] CRM_meta_clone_node_max=[1] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[drbd_disk:0 ] CRM_meta_notify_active_u to the operation list. 
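The CRM_meta_* values the lrmd attaches to every notify/promote call (clone-max 2, clone-node-max 1, master-max 1, notify true, drbd_resource disk0, plus the 50s Master-role monitor with a 30s timeout further down in the log) are just the metadata of the master/slave set being operated on. Reassembled into crm-shell syntax, the ms_drbd definition behind this log is probably close to the sketch below; the linbit provider and anything not visible in the log are assumptions:

    # Approximate reconstruction, crm shell syntax; provider and unlisted
    # options are assumptions, not taken from the log
    primitive drbd_disk ocf:linbit:drbd \
        params drbd_resource="disk0" \
        op monitor interval="50s" role="Master" timeout="30s"
    ms ms_drbd drbd_disk \
        meta master-max="1" master-node-max="1" \
             clone-max="2" clone-node-max="1" notify="true"
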
Oct 30 17:24:50 server2 lrmd: [2321]: info: rsc:drbd_disk:0:40: promote Oct 30 17:24:50 server2 lrmd: [4344]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER Oct 30 17:24:50 server2 crmd: [2324]: debug: run_graph: Transition 7 (Complete=8, Pending=1, Fired=1, Skipped=0, Incomplete=11, Source=/var/lib/pengine/pe-input-328.bz2): In-progress Oct 30 17:24:50 server2 drbd[4344]: DEBUG: disk0: Calling drbdadm -c /etc/drbd.conf primary disk0 Oct 30 17:24:50 server2 kernel: [ 1141.397704] block drbd0: helper command: /sbin/drbdadm fence-peer minor-0 Oct 30 17:24:51 server2 crm-fence-peer.sh[4379]: invoked for disk0 Oct 30 17:24:52 server2 cib: [2320]: debug: activateCibXml: Triggering CIB write for cib_create op Oct 30 17:24:52 server2 cib: [2320]: info: cib:diff- Oct 30 17:24:52 server2 cib: [2320]: info: cib:diff+ Oct 30 17:24:52 server2 cib: [2320]: info: cib:diff+ Oct 30 17:24:52 server2 cib: [2320]: info: cib:diff+ Oct 30 17:24:52 server2 cib: [2320]: info: cib:diff+ Oct 30 17:24:52 server2 cib: [2320]: info: cib:diff+ Oct 30 17:24:52 server2 cib: [2320]: info: cib:diff+ Oct 30 17:24:52 server2 cib: [2320]: info: cib:diff+ Oct 30 17:24:52 server2 cib: [2320]: info: cib:diff+ Oct 30 17:24:52 server2 cib: [2320]: info: cib:diff+ Oct 30 17:24:52 server2 cib: [2320]: info: cib:diff+ Oct 30 17:24:52 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_create): 0.161.38 -> 0.162.1 (S_TRANSITION_ENGINE) Oct 30 17:24:52 server2 cib: [2320]: info: cib:diff+ Oct 30 17:24:52 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:52 server2 cib: [2320]: info: cib_process_request: Operation complete: op cib_create for section constraints (origin=local/cibadmin/2, version=0.162.1): ok (rc=0) Oct 30 17:24:52 server2 crmd: [2324]: info: abort_transition_graph: te_update_diff:131 - Triggered transition abort (complete=0, tag=diff, id=(null), magic=NA, cib=0.162.1) : Non-status change Oct 30 17:24:53 server2 rsyslogd-2177: imuxsock lost 94 messages from pid 2311 due to rate-limiting Oct 30 17:24:52 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:52 server2 cib: [2320]: debug: Forking temp process write_cib_contents Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 corosync[2311]: [TOTEM ] Delivering 1c1 to 1c3 Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1c2 to pending delivery queue Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1c3 to pending delivery queue Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1c3 Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1c2 Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1c3 Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 
server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:52 server2 crmd: [2324]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000 Oct 30 17:24:52 server2 crmd: [2324]: info: update_abort_priority: Abort action done superceeded by restart Oct 30 17:24:52 server2 crmd: [2324]: debug: run_graph: Transition 7 (Complete=8, Pending=1, Fired=0, Skipped=7, Incomplete=4, Source=/var/lib/pengine/pe-input-328.bz2): In-progress Oct 30 17:24:53 server2 crm-fence-peer.sh[4379]: INFO peer is reachable, my disk is UpToDate: placed constraint 'drbd-fence-by-handler-ms_drbd' Oct 30 17:24:53 server2 kernel: [ 1143.872142] block drbd0: helper command: /sbin/drbdadm fence-peer minor-0 exit code 4 (0x400) Oct 30 17:24:53 server2 kernel: [ 1143.872147] block drbd0: fence-peer helper returned 4 (peer was fenced) Oct 30 17:24:53 server2 kernel: [ 1143.872162] block drbd0: role( Secondary -> Primary ) pdsk( DUnknown -> Outdated ) Oct 30 17:24:53 server2 kernel: [ 1143.933482] block drbd0: new current UUID BE5933F3D9137EFF:F75BB9A4C70FC89C:21F36A82D9FDE258:21F26A82D9FDE258 Oct 30 17:24:53 server2 cib: [2320]: info: Managed write_cib_contents process 4417 exited with return code 0. Oct 30 17:24:53 server2 drbd[4344]: DEBUG: disk0: Exit code 0 Oct 30 17:24:53 server2 drbd[4344]: DEBUG: disk0: Command output: Oct 30 17:24:53 server2 lrmd: [2321]: info: RA output: (drbd_disk:0:promote:stdout) Oct 30 17:24:53 server2 lrmd: [2321]: info: Managed drbd_disk:0:promote process 4344 exited with return code 0. 
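The fence-peer exchange above (drbdadm invoking crm-fence-peer.sh, the handler placing the constraint 'drbd-fence-by-handler-ms_drbd', exit code 4, and pdsk going to Outdated) is DRBD's resource-level fencing talking to Pacemaker: before the promotion proceeds, the handler adds a location constraint forbidding the Master role anywhere except the node whose data is UpToDate, and crm-unfence-peer.sh removes it again after a successful resync. The drbd.conf wiring that typically produces this behaviour looks like the assumed shape below (not quoted from this cluster); the constraint itself can be inspected with cibadmin -Q while it is in place (look for drbd-fence-by-handler):

    # In /etc/drbd.conf (or the disk0 resource file) -- assumed layout
    resource disk0 {
        disk {
            fencing resource-only;
        }
        handlers {
            fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
            after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
        }
        # device/volume and per-host "on" sections omitted
    }
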
Oct 30 17:24:53 server2 crmd: [2324]: debug: create_operation_update: do_update_resource: Updating resouce drbd_disk:0 after complete promote op (interval=0) Oct 30 17:24:53 server2 crmd: [2324]: info: process_lrm_event: LRM operation drbd_disk:0_promote_0 (call=40, rc=0, cib-update=102, confirmed=true) ok Oct 30 17:24:53 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.162.1 -> 0.162.2 (S_TRANSITION_ENGINE) Oct 30 17:24:53 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:53 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:0_promote_0 (18) confirmed on server2 (rc=0) Oct 30 17:24:53 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 33 fired and confirmed Oct 30 17:24:53 server2 crmd: [2324]: debug: run_graph: Transition 7 (Complete=9, Pending=0, Fired=1, Skipped=7, Incomplete=3, Source=/var/lib/pengine/pe-input-328.bz2): In-progress Oct 30 17:24:53 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 36 fired and confirmed Oct 30 17:24:53 server2 crmd: [2324]: debug: run_graph: Transition 7 (Complete=10, Pending=0, Fired=1, Skipped=7, Incomplete=2, Source=/var/lib/pengine/pe-input-328.bz2): In-progress Oct 30 17:24:53 server2 crmd: [2324]: info: te_rsc_command: Initiating action 138: notify drbd_disk:0_post_notify_promote_0 on server2 (local) Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:53 server2 crmd: [2324]: info: do_lrm_rsc_op: Performing key=138:7:0:b2aab0af-262c-445f-ad7e-b6c20cd36732 op=drbd_disk:0_notify_0 ) Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:53 server2 lrmd: [2321]: debug: on_msg_perform_op: add an operation operation notify[41] on ocf::drbd::drbd_disk:0 for client 2324, its parameters: drbd_resource=[disk0] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_start_resource=[ ] CRM_meta_notify_inactive_resource=[drbd_disk:1 ] CRM_meta_notify_master_uname=[ ] CRM_meta_notify_promote_uname=[server2 ] CRM_meta_timeout=[20000] CRM_meta_notify_demote_resource=[ ] CRM_meta_notify_start_uname=[ ] CRM_meta_clone_node_max=[1] CRM_meta_notify=[true] CRM_meta_notify_promote_resource=[drbd_disk:0 ] CRM_meta_notify_active_un to the operation list. 
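At this point drbd0 is Primary on server2 (the kernel lines above show role( Secondary -> Primary ) and a new current UUID) and the promote operation has been confirmed with rc=0. Outside Pacemaker, the same state can be checked directly from DRBD, for example:

    # Role and disk state of the freshly promoted resource
    drbdadm role disk0      # expected: Primary/Unknown while the peer is away
    drbdadm dstate disk0    # expected: UpToDate/Outdated after the fence handler ran
    cat /proc/drbd          # full connection/role/disk overview
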
Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:53 server2 lrmd: [2321]: info: rsc:drbd_disk:0:41: notify Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering 1c3 to 1c6 Oct 30 17:24:53 server2 lrmd: [4428]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER Oct 30 17:24:53 server2 crmd: [2324]: debug: run_graph: Transition 7 (Complete=11, Pending=1, Fired=1, Skipped=7, Incomplete=1, Source=/var/lib/pengine/pe-input-328.bz2): In-progress Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1c4 to pending delivery queue Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1c5 to pending delivery queue Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1c6 to pending delivery queue Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1c5 Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1c4 Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1c6 Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1c6 Oct 30 17:24:53 server2 drbd[4428]: DEBUG: disk0: Calling /usr/sbin/crm_master -Q -l reboot -v 10000 Oct 30 17:24:53 server2 crm_attribute: [4465]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw Oct 30 17:24:53 server2 crm_attribute: [4465]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback Oct 30 17:24:53 server2 crm_attribute: [4465]: debug: cib_native_signon_raw: Connection to CIB successful Oct 30 17:24:53 server2 crm_attribute: [4465]: debug: query_node_uuidResult section Oct 30 17:24:53 server2 crm_attribute: [4465]: debug: query_node_uuidResult section Oct 30 17:24:53 server2 crm_attribute: [4465]: debug: query_node_uuidResult section Oct 30 17:24:53 server2 crm_attribute: [4465]: debug: query_node_uuidResult section Oct 30 17:24:53 server2 crm_attribute: [4465]: info: determine_host: Mapped server2 to server2 Oct 30 17:24:53 server2 crm_attribute: [4465]: info: attrd_lazy_update: Connecting to cluster... 
5 retries remaining Oct 30 17:24:53 server2 crm_attribute: [4465]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd Oct 30 17:24:53 server2 attrd: [2322]: debug: attrd_local_callback: update message from crm_attribute: master-drbd_disk:0=10000 Oct 30 17:24:53 server2 attrd: [2322]: debug: attrd_local_callback: Supplied: 10000, Current: 1000, Stored: 1000 Oct 30 17:24:53 server2 attrd: [2322]: debug: attrd_local_callback: New value of master-drbd_disk:0 is 10000 Oct 30 17:24:53 server2 attrd: [2322]: info: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_disk:0 (10000) Oct 30 17:24:53 server2 cib: [2320]: debug: cib_process_xpath: Processing cib_query op for //cib/status//node_state[@id='server2']//transient_attributes//nvpair[@name='master-drbd_disk:0'] (/cib/status/node_state[2]/transient_attributes/instance_attributes/nvpair[2]) Oct 30 17:24:53 server2 attrd: [2322]: debug: find_nvpair_attr_delegateMatch Oct 30 17:24:53 server2 attrd: [2322]: info: attrd_perform_update: Sent update 88: master-drbd_disk:0=10000 Oct 30 17:24:53 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.162.2 -> 0.162.3 (S_TRANSITION_ENGINE) Oct 30 17:24:53 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:53 server2 crmd: [2324]: info: abort_transition_graph: te_update_diff:149 - Triggered transition abort (complete=0, tag=nvpair, id=status-server2-master-drbd_disk:0, magic=NA, cib=0.162.3) : Transient attribute: update Oct 30 17:24:53 server2 crmd: [2324]: debug: abort_transition_graphCause Oct 30 17:24:53 server2 crmd: [2324]: debug: run_graph: Transition 7 (Complete=11, Pending=1, Fired=0, Skipped=7, Incomplete=1, Source=/var/lib/pengine/pe-input-328.bz2): In-progress Oct 30 17:24:53 server2 attrd: [2322]: debug: attrd_cib_callback: Update 88 for master-drbd_disk:0=10000 passed Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:53 corosync[2311]: last message repeated 2 times Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering 1c6 to 1c9 Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1c7 to pending delivery queue Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1c8 to pending delivery queue Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1c9 to pending delivery queue Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1c7 Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1c9 Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1c8 Oct 30 17:24:53 server2 crm_attribute: [4465]: debug: attrd_update_delegate: Sent update: master-drbd_disk:0=10000 for server2 Oct 30 17:24:53 server2 crm_attribute: [4465]: info: main: Update master-drbd_disk:0=10000 sent via attrd Oct 30 17:24:53 server2 crm_attribute: [4465]: debug: cib_native_signoff: Signing out of the CIB Service Oct 30 17:24:53 server2 crm_attribute: [4465]: info: crm_xml_cleanup: Cleaning up memory from libxml2 Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1c9 Oct 30 17:24:53 server2 drbd[4428]: DEBUG: disk0: Exit code 0 Oct 30 17:24:53 server2 drbd[4428]: DEBUG: disk0: Command output: Oct 30 17:24:53 server2 lrmd: [2321]: info: RA output: (drbd_disk:0:notify:stdout) Oct 30 
17:24:53 server2 lrmd: [2321]: info: Managed drbd_disk:0:notify process 4428 exited with return code 0. Oct 30 17:24:53 server2 crmd: [2324]: debug: create_operation_update: do_update_resource: Updating resouce drbd_disk:0 after complete notify op (interval=0) Oct 30 17:24:53 server2 crmd: [2324]: info: process_lrm_event: LRM operation drbd_disk:0_notify_0 (call=41, rc=0, cib-update=103, confirmed=true) ok Oct 30 17:24:53 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.162.3 -> 0.162.4 (S_TRANSITION_ENGINE) Oct 30 17:24:53 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:53 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:0_post_notify_promote_0 (138) confirmed on server2 (rc=0) Oct 30 17:24:53 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 37 fired and confirmed Oct 30 17:24:53 server2 crmd: [2324]: debug: run_graph: Transition 7 (Complete=12, Pending=0, Fired=1, Skipped=7, Incomplete=0, Source=/var/lib/pengine/pe-input-328.bz2): In-progress Oct 30 17:24:53 server2 crmd: [2324]: info: run_graph: ==================================================== Oct 30 17:24:53 server2 crmd: [2324]: notice: run_graph: Transition 7 (Complete=13, Pending=0, Fired=0, Skipped=7, Incomplete=0, Source=/var/lib/pengine/pe-input-328.bz2): Stopped Oct 30 17:24:53 server2 crmd: [2324]: info: te_graph_trigger: Transition 7 is now complete Oct 30 17:24:53 server2 crmd: [2324]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:53 server2 crmd: [2324]: debug: notify_crmd: Transition 7 status: restart - Non-status change Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:53 server2 crmd: [2324]: debug: s_crmd_fsa: Processing I_PE_CALC: [ state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ] Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:53 server2 crmd: [2324]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ] Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering 1c9 to 1cc Oct 30 17:24:53 server2 crmd: [2324]: info: do_state_transition: All 2 cluster nodes are eligible to run resources. 
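With the post-promote notify confirmed and transition 7 closed out, the master score advertised by the agent has gone from 1000 to 10000, and the next policy-engine run (below) accordingly shows server2 in the Masters list for ms_drbd. A quick way to confirm the same thing from a shell is crm_mon; the -A option also prints the transient node attributes, so master-drbd_disk:0 and pingd are visible next to the resource state:

    # One-shot cluster status; ms_drbd should list server2 under Masters
    crm_mon -1

    # Include transient node attributes (master-drbd_disk:0, pingd)
    crm_mon -1 -A
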
Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1ca to pending delivery queue Oct 30 17:24:53 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1cb to pending delivery queue Oct 30 17:24:53 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1cc to pending delivery queue Oct 30 17:24:53 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1ca Oct 30 17:24:53 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_PE_INVOKE Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1cb Oct 30 17:24:53 server2 crmd: [2324]: info: do_pe_invoke: Query 104: Requesting the current CIB: S_POLICY_ENGINE Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1cc Oct 30 17:24:53 server2 crmd: [2324]: info: do_pe_invoke_callback: Invoking the PE: query=104, ref=pe_calc-dc-1319991893-83, seq=396, quorate=1 Oct 30 17:24:53 server2 rsyslogd-2177: imuxsock lost 89 messages from pid 2323 due to rate-limiting Oct 30 17:24:53 server2 pengine: [2323]: info: unpack_config: Startup probes: enabled Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1cc Oct 30 17:24:53 server2 pengine: [2323]: debug: unpack_config: STONITH timeout: 60000 Oct 30 17:24:53 server2 pengine: [2323]: debug: unpack_config: STONITH of failed nodes is disabled Oct 30 17:24:53 server2 pengine: [2323]: debug: unpack_config: Stop all active resources: false Oct 30 17:24:53 server2 pengine: [2323]: debug: unpack_config: Cluster is symmetric - resources can run anywhere by default Oct 30 17:24:53 server2 pengine: [2323]: debug: unpack_config: Default stickiness: 0 Oct 30 17:24:53 server2 pengine: [2323]: notice: unpack_config: On loss of CCM Quorum: Ignore Oct 30 17:24:53 server2 pengine: [2323]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0 Oct 30 17:24:53 server2 pengine: [2323]: info: unpack_domains: Unpacking domains Oct 30 17:24:53 server2 pengine: [2323]: info: determine_online_status: Node server1 is shutting down Oct 30 17:24:53 server2 pengine: [2323]: info: determine_online_status: Node server2 is online Oct 30 17:24:53 server2 pengine: [2323]: debug: get_node_score: Rule drdb_on_connected_node-rule: node server2 had value 1000 for pingd Oct 30 17:24:53 server2 pengine: [2323]: debug: get_node_score: Rule drdb_on_connected_node-rule: node server1 had value 1000 for pingd Oct 30 17:24:53 server2 pengine: [2323]: debug: get_node_score: Rule drdb_on_connected_node-rule: node server1 had value 1000 for pingd Oct 30 17:24:53 server2 pengine: [2323]: debug: get_node_score: Rule drdb_on_connected_node-rule: node server2 had value 1000 for pingd Oct 30 17:24:53 server2 pengine: [2323]: notice: group_print: Resource Group: apache_group Oct 30 17:24:53 server2 pengine: [2323]: notice: native_print: apache_ip#011(ocf::heartbeat:IPaddr2):#011Stopped Oct 30 17:24:53 server2 pengine: [2323]: notice: native_print: apache2#011(ocf::heartbeat:apache):#011Stopped Oct 30 17:24:53 server2 pengine: [2323]: notice: native_print: fs_drbd#011(ocf::heartbeat:Filesystem):#011Stopped Oct 30 17:24:53 server2 pengine: [2323]: notice: 
clone_print: Master/Slave Set: ms_drbd [drbd_disk] Oct 30 17:24:53 server2 pengine: [2323]: debug: native_active: Resource drbd_disk:0 active on server2 Oct 30 17:24:53 server2 pengine: [2323]: debug: native_active: Resource drbd_disk:0 active on server2 Oct 30 17:24:53 server2 pengine: [2323]: notice: short_print: Masters: [ server2 ] Oct 30 17:24:53 server2 pengine: [2323]: notice: short_print: Stopped: [ drbd_disk:1 ] Oct 30 17:24:53 server2 pengine: [2323]: notice: clone_print: Clone Set: pingclone [ping] Oct 30 17:24:53 server2 pengine: [2323]: debug: native_active: Resource ping:0 active on server2 Oct 30 17:24:53 server2 pengine: [2323]: debug: native_active: Resource ping:0 active on server2 Oct 30 17:24:53 server2 pengine: [2323]: notice: short_print: Started: [ server2 ] Oct 30 17:24:53 server2 pengine: [2323]: notice: short_print: Stopped: [ ping:1 ] Oct 30 17:24:53 server2 pengine: [2323]: notice: clone_print: Master/Slave Set: ms_drbd_mysql [drbd_mysql_disk] Oct 30 17:24:53 server2 pengine: [2323]: notice: short_print: Stopped: [ drbd_mysql_disk:0 drbd_mysql_disk:1 ] Oct 30 17:24:53 server2 pengine: [2323]: notice: clone_print: Master/Slave Set: ms_drbd_pgsql [drbd_pgsql_disk] Oct 30 17:24:53 server2 pengine: [2323]: notice: short_print: Stopped: [ drbd_pgsql_disk:0 drbd_pgsql_disk:1 ] Oct 30 17:24:53 server2 pengine: [2323]: notice: group_print: Resource Group: mysql_group Oct 30 17:24:53 server2 pengine: [2323]: notice: native_print: mysql_ip#011(ocf::heartbeat:IPaddr2):#011Stopped Oct 30 17:24:53 server2 pengine: [2323]: notice: native_print: mysql_service#011(ocf::heartbeat:mysql):#011Stopped Oct 30 17:24:53 server2 pengine: [2323]: notice: native_print: fs_drbd_mysql#011(ocf::heartbeat:Filesystem):#011Stopped Oct 30 17:24:53 server2 pengine: [2323]: notice: group_print: Resource Group: pgsql_group Oct 30 17:24:53 server2 pengine: [2323]: notice: native_print: pgsql_ip#011(ocf::heartbeat:IPaddr2):#011Stopped Oct 30 17:24:53 server2 pengine: [2323]: notice: native_print: fs_drbd_pgsql#011(ocf::heartbeat:Filesystem):#011Stopped Oct 30 17:24:53 server2 pengine: [2323]: debug: native_rsc_location: Constraint (drbd-fence-by-handler-rule-ms_drbd) is not active (role : Master) Oct 30 17:24:53 pengine: last message repeated 2 times Oct 30 17:24:53 server2 pengine: [2323]: debug: common_apply_stickiness: Resource drbd_disk:0: preferring current location (node=server2, weight=100) Oct 30 17:24:53 server2 pengine: [2323]: debug: common_apply_stickiness: Resource ping:0: preferring current location (node=server2, weight=100) Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to apache_ip Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to apache2 Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to drbd_disk:0 Oct 30 17:24:53 server2 pengine: [2323]: info: native_color: drbd_disk:1: Rolling back scores from apache_ip Oct 30 17:24:53 server2 pengine: [2323]: info: native_color: drbd_disk:1: Rolling back scores from apache2 Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: All nodes for resource drbd_disk:1 are unavailable, unclean or shutting down (server1: 0, -1000000) Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for drbd_disk:1 Oct 30 17:24:53 server2 pengine: [2323]: info: native_color: Resource drbd_disk:1 cannot run anywhere Oct 30 17:24:53 server2 pengine: [2323]: debug: clone_color: Allocated 1 ms_drbd 
instances of a possible 2 Oct 30 17:24:53 server2 pengine: [2323]: info: rsc_merge_weights: ms_drbd: Breaking dependency loop at ms_drbd Oct 30 17:24:53 pengine: last message repeated 3 times Oct 30 17:24:53 server2 pengine: [2323]: debug: master_color: drbd_disk:0 master score: 10000 Oct 30 17:24:53 server2 pengine: [2323]: info: master_color: Promoting drbd_disk:0 (Master server2) Oct 30 17:24:53 server2 pengine: [2323]: debug: master_color: drbd_disk:1 master score: 0 Oct 30 17:24:53 server2 pengine: [2323]: info: master_color: ms_drbd: Promoted 1 instances of a possible 1 to master Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to fs_drbd Oct 30 17:24:53 server2 pengine: [2323]: debug: master_color: drbd_disk:0 master score: 10000 Oct 30 17:24:53 server2 pengine: [2323]: info: master_color: Promoting drbd_disk:0 (Master server2) Oct 30 17:24:53 server2 pengine: [2323]: debug: master_color: drbd_disk:1 master score: 0 Oct 30 17:24:53 server2 pengine: [2323]: info: master_color: ms_drbd: Promoted 1 instances of a possible 1 to master Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to ping:0 Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: All nodes for resource ping:1 are unavailable, unclean or shutting down (server1: 0, -1000000) Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for ping:1 Oct 30 17:24:53 server2 pengine: [2323]: info: native_color: Resource ping:1 cannot run anywhere Oct 30 17:24:53 server2 pengine: [2323]: debug: clone_color: Allocated 1 pingclone instances of a possible 2 Oct 30 17:24:53 server2 pengine: [2323]: debug: native_color: Escalating allocation of mysql_service to its parent: mysql_group Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to mysql_ip Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to mysql_service Oct 30 17:24:53 server2 pengine: [2323]: debug: clone_color: Dependency loop detected involving ms_drbd_mysql Oct 30 17:24:53 server2 pengine: [2323]: info: rsc_merge_weights: ms_drbd_mysql: Breaking dependency loop at ms_drbd_mysql Oct 30 17:24:53 pengine: last message repeated 3 times Oct 30 17:24:53 server2 pengine: [2323]: debug: master_color: drbd_mysql_disk:0 master score: 0 Oct 30 17:24:53 server2 pengine: [2323]: debug: master_color: drbd_mysql_disk:1 master score: 0 Oct 30 17:24:53 server2 pengine: [2323]: info: master_color: ms_drbd_mysql: Promoted 0 instances of a possible 1 to master Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to fs_drbd_mysql Oct 30 17:24:53 server2 pengine: [2323]: debug: native_color: Escalating allocation of mysql_ip to its parent: mysql_group Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: All nodes for resource drbd_mysql_disk:0 are unavailable, unclean or shutting down (server1: 0, -1000000) Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for drbd_mysql_disk:0 Oct 30 17:24:53 server2 pengine: [2323]: info: native_color: Resource drbd_mysql_disk:0 cannot run anywhere Oct 30 17:24:53 server2 pengine: [2323]: debug: native_color: Escalating allocation of mysql_service to its parent: mysql_group Oct 30 17:24:53 server2 pengine: [2323]: debug: native_color: Escalating allocation of mysql_ip to its parent: mysql_group Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: All nodes for 
resource drbd_mysql_disk:1 are unavailable, unclean or shutting down (server1: 0, -1000000) Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for drbd_mysql_disk:1 Oct 30 17:24:53 server2 pengine: [2323]: info: native_color: Resource drbd_mysql_disk:1 cannot run anywhere Oct 30 17:24:53 server2 pengine: [2323]: debug: clone_color: Allocated 0 ms_drbd_mysql instances of a possible 2 Oct 30 17:24:53 server2 pengine: [2323]: debug: master_color: drbd_mysql_disk:0 master score: 0 Oct 30 17:24:53 server2 pengine: [2323]: debug: master_color: drbd_mysql_disk:1 master score: 0 Oct 30 17:24:53 server2 pengine: [2323]: info: master_color: ms_drbd_mysql: Promoted 0 instances of a possible 1 to master Oct 30 17:24:53 server2 pengine: [2323]: debug: native_color: Escalating allocation of pgsql_ip to its parent: pgsql_group Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to pgsql_ip Oct 30 17:24:53 server2 pengine: [2323]: debug: clone_color: Dependency loop detected involving ms_drbd_pgsql Oct 30 17:24:53 server2 pengine: [2323]: info: rsc_merge_weights: ms_drbd_pgsql: Breaking dependency loop at ms_drbd_pgsql Oct 30 17:24:53 server2 pengine: [2323]: debug: master_color: drbd_pgsql_disk:0 master score: 0 Oct 30 17:24:53 server2 pengine: [2323]: debug: master_color: drbd_pgsql_disk:1 master score: 0 Oct 30 17:24:53 server2 pengine: [2323]: info: master_color: ms_drbd_pgsql: Promoted 0 instances of a possible 1 to master Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Assigning server2 to fs_drbd_pgsql Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: All nodes for resource drbd_pgsql_disk:0 are unavailable, unclean or shutting down (server1: 0, -1000000) Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for drbd_pgsql_disk:0 Oct 30 17:24:53 server2 pengine: [2323]: info: native_color: Resource drbd_pgsql_disk:0 cannot run anywhere Oct 30 17:24:53 server2 pengine: [2323]: debug: native_color: Escalating allocation of pgsql_ip to its parent: pgsql_group Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: All nodes for resource drbd_pgsql_disk:1 are unavailable, unclean or shutting down (server1: 0, -1000000) Oct 30 17:24:53 server2 pengine: [2323]: debug: native_assign_node: Could not allocate a node for drbd_pgsql_disk:1 Oct 30 17:24:53 server2 pengine: [2323]: info: native_color: Resource drbd_pgsql_disk:1 cannot run anywhere Oct 30 17:24:53 server2 pengine: [2323]: debug: clone_color: Allocated 0 ms_drbd_pgsql instances of a possible 2 Oct 30 17:24:53 server2 pengine: [2323]: debug: master_color: drbd_pgsql_disk:0 master score: 0 Oct 30 17:24:53 server2 pengine: [2323]: debug: master_color: drbd_pgsql_disk:1 master score: 0 Oct 30 17:24:53 server2 pengine: [2323]: info: master_color: ms_drbd_pgsql: Promoted 0 instances of a possible 1 to master Oct 30 17:24:53 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (30s) for apache_ip on server2 Oct 30 17:24:53 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (30s) for apache2 on server2 Oct 30 17:24:53 server2 pengine: [2323]: debug: master_create_actions: Creating actions for ms_drbd Oct 30 17:24:53 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (50s) for drbd_disk:0 on server2 Oct 30 17:24:53 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (50s) for drbd_disk:0 on server2 Oct 30 17:24:53 
server2 pengine: [2323]: debug: master_create_actions: Creating actions for ms_drbd_mysql Oct 30 17:24:53 server2 pengine: [2323]: debug: master_create_actions: Creating actions for ms_drbd_pgsql Oct 30 17:24:53 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (30s) for mysql_ip on server2 Oct 30 17:24:53 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (40s) for mysql_service on server2 Oct 30 17:24:53 server2 pengine: [2323]: notice: RecurringOp: Start recurring monitor (30s) for pgsql_ip on server2 Oct 30 17:24:53 server2 pengine: [2323]: notice: stage6: Scheduling Node server1 for shutdown Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Start apache_ip#011(server2) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Start apache2#011(server2) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Start fs_drbd#011(server2) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Leave drbd_disk:0#011(Master server2) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Leave drbd_disk:1#011(Stopped) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Leave ping:0#011(Started server2) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Leave ping:1#011(Stopped) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Leave drbd_mysql_disk:0#011(Stopped) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Leave drbd_mysql_disk:1#011(Stopped) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Leave drbd_pgsql_disk:0#011(Stopped) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Leave drbd_pgsql_disk:1#011(Stopped) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Start mysql_ip#011(server2) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Start mysql_service#011(server2) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Start fs_drbd_mysql#011(server2) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Start pgsql_ip#011(server2) Oct 30 17:24:53 server2 pengine: [2323]: notice: LogActions: Start fs_drbd_pgsql#011(server2) Oct 30 17:24:53 server2 crmd: [2324]: debug: s_crmd_fsa: Processing I_PE_SUCCESS: [ state=S_POLICY_ENGINE cause=C_IPC_MESSAGE origin=handle_response ] Oct 30 17:24:53 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_LOG Oct 30 17:24:53 server2 crmd: [2324]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ] Oct 30 17:24:53 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP Oct 30 17:24:53 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP Oct 30 17:24:53 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP Oct 30 17:24:53 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_TE_INVOKE Oct 30 17:24:53 server2 crmd: [2324]: info: unpack_graph: Unpacked transition 8: 11 actions in 11 synapses Oct 30 17:24:53 server2 pengine: [2323]: info: process_pe_message: Transition 8: PEngine Input stored in: /var/lib/pengine/pe-input-329.bz2 Oct 30 17:24:53 server2 crmd: [2324]: info: do_te_invoke: Processing graph 8 (ref=pe_calc-dc-1319991893-83) derived from /var/lib/pengine/pe-input-329.bz2 Oct 30 17:24:53 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 11 fired and confirmed Oct 30 17:24:53 server2 crmd: [2324]: info: te_rsc_command: Initiating action 19: monitor 
drbd_disk:0_monitor_50000 on server2 (local) Oct 30 17:24:53 server2 crmd: [2324]: info: do_lrm_rsc_op: Performing key=19:8:8:b2aab0af-262c-445f-ad7e-b6c20cd36732 op=drbd_disk:0_monitor_50000 ) Oct 30 17:24:53 server2 lrmd: [2321]: debug: on_msg_perform_op: add an operation operation monitor[42] on ocf::drbd::drbd_disk:0 for client 2324, its parameters: drbd_resource=[disk0] CRM_meta_role=[Master] CRM_meta_timeout=[30000] CRM_meta_name=[monitor] crm_feature_set=[3.0.5] CRM_meta_notify=[true] CRM_meta_clone_node_max=[1] CRM_meta_clone=[0] CRM_meta_interval=[50000] CRM_meta_master_node_max=[1] CRM_meta_clone_max=[2] CRM_meta_globally_unique=[false] CRM_meta_master_max=[1] to the operation list. Oct 30 17:24:53 server2 lrmd: [2321]: info: rsc:drbd_disk:0:42: monitor Oct 30 17:24:53 server2 lrmd: [4474]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER Oct 30 17:24:53 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 103 fired and confirmed Oct 30 17:24:53 server2 crmd: [2324]: info: te_pseudo_action: Pseudo action 110 fired and confirmed Oct 30 17:24:53 server2 crmd: [2324]: info: te_crm_command: Executing crm-event (116): do_shutdown on server1 Oct 30 17:24:53 server2 crmd: [2324]: debug: run_graph: Transition 8 (Complete=0, Pending=1, Fired=5, Skipped=0, Incomplete=6, Source=/var/lib/pengine/pe-input-329.bz2): In-progress Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:53 server2 crmd: [2324]: debug: run_graph: Transition 8 (Complete=4, Pending=1, Fired=0, Skipped=0, Incomplete=6, Source=/var/lib/pengine/pe-input-329.bz2): In-progress Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering 1cc to 1cd Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1cd to pending delivery queue Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1cd Oct 30 17:24:53 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1cd Oct 30 17:24:54 server2 drbd[4474]: DEBUG: disk0: Calling /usr/sbin/crm_master -Q -l reboot -v 10000 Oct 30 17:24:54 server2 crm_attribute: [4505]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_rw Oct 30 17:24:54 server2 crm_attribute: [4505]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/cib_callback Oct 30 17:24:54 server2 crm_attribute: [4505]: debug: cib_native_signon_raw: Connection to CIB successful Oct 30 17:24:54 server2 crm_attribute: [4505]: debug: query_node_uuidResult section Oct 30 17:24:54 server2 crm_attribute: [4505]: debug: query_node_uuidResult section Oct 30 17:24:54 server2 crm_attribute: [4505]: debug: query_node_uuidResult section Oct 30 17:24:54 server2 crm_attribute: [4505]: debug: query_node_uuidResult section Oct 30 17:24:54 server2 crm_attribute: [4505]: info: determine_host: Mapped server2 to server2 Oct 30 17:24:54 server2 crm_attribute: [4505]: info: attrd_lazy_update: Connecting to cluster... 
5 retries remaining Oct 30 17:24:54 server2 crm_attribute: [4505]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd Oct 30 17:24:54 server2 attrd: [2322]: debug: attrd_local_callback: update message from crm_attribute: master-drbd_disk:0=10000 Oct 30 17:24:54 server2 attrd: [2322]: debug: attrd_local_callback: Supplied: 10000, Current: 10000, Stored: 10000 Oct 30 17:24:54 server2 crm_attribute: [4505]: debug: attrd_update_delegate: Sent update: master-drbd_disk:0=10000 for server2 Oct 30 17:24:54 server2 crm_attribute: [4505]: info: main: Update master-drbd_disk:0=10000 sent via attrd Oct 30 17:24:54 server2 crm_attribute: [4505]: debug: cib_native_signoff: Signing out of the CIB Service Oct 30 17:24:54 server2 crm_attribute: [4505]: info: crm_xml_cleanup: Cleaning up memory from libxml2 Oct 30 17:24:54 server2 drbd[4474]: DEBUG: disk0: Exit code 0 Oct 30 17:24:54 server2 drbd[4474]: DEBUG: disk0: Command output: Oct 30 17:24:54 server2 lrmd: [2321]: debug: RA output: (drbd_disk:0:monitor:stdout) Oct 30 17:24:54 server2 lrmd: [2321]: WARN: Managed drbd_disk:0:monitor process 4474 exited with return code 8. Oct 30 17:24:54 server2 crmd: [2324]: debug: create_operation_update: do_update_resource: Updating resouce drbd_disk:0 after complete monitor op (interval=50000) Oct 30 17:24:54 server2 crmd: [2324]: info: process_lrm_event: LRM operation drbd_disk:0_monitor_50000 (call=42, rc=8, cib-update=105, confirmed=false) master Oct 30 17:24:54 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.162.4 -> 0.162.5 (S_TRANSITION_ENGINE) Oct 30 17:24:54 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:54 server2 crmd: [2324]: info: match_graph_event: Action drbd_disk:0_monitor_50000 (19) confirmed on server2 (rc=0) Oct 30 17:24:54 server2 crmd: [2324]: notice: run_graph: ==================================================== Oct 30 17:24:54 server2 crmd: [2324]: WARN: run_graph: Transition 8 (Complete=5, Pending=0, Fired=0, Skipped=0, Incomplete=6, Source=/var/lib/pengine/pe-input-329.bz2): Terminated Oct 30 17:24:54 server2 crmd: [2324]: ERROR: te_graph_trigger: Transition failed: terminated Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_graph: Graph 8 (11 actions in 11 synapses): batch-limit=30 jobs, network-delay=60000ms Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_graph: Synapse 0 is pending (priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: [Action 12]: Pending (id: apache_group_running_0, type: pseduo, priority: 0) Oct 30 17:24:54 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: * [Input 6]: Pending (id: apache_ip_start_0, loc: server2, priority: 0) Oct 30 17:24:54 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: * [Input 8]: Pending (id: apache2_start_0, loc: server2, priority: 0) Oct 30 17:24:54 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: * [Input 10]: Pending (id: fs_drbd_start_0, loc: server2, priority: 0) Oct 30 17:24:54 server2 corosync[2311]: [TOTEM ] Delivering 1cd to 1d0 Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: * [Input 11]: Completed (id: apache_group_start_0, type: pseduo, priority: 0) Oct 30 17:24:54 server2 corosync[2311]: [TOTEM ] Delivering MCAST 
message with seq 1ce to pending delivery queue Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_graph: Synapse 1 was confirmed (priority: 0) Oct 30 17:24:54 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1cf to pending delivery queue Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_graph: Synapse 2 is pending (priority: 0) Oct 30 17:24:54 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1d0 to pending delivery queue Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: [Action 7]: Pending (id: apache_ip_monitor_30000, loc: server2, priority: 0) Oct 30 17:24:54 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1cf Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: * [Input 6]: Pending (id: apache_ip_start_0, loc: server2, priority: 0) Oct 30 17:24:54 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1ce Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_graph: Synapse 3 is pending (priority: 0) Oct 30 17:24:54 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1d0 Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: [Action 6]: Pending (id: apache_ip_start_0, loc: server2, priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: * [Input 10]: Pending (id: fs_drbd_start_0, loc: server2, priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: * [Input 11]: Completed (id: apache_group_start_0, type: pseduo, priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_graph: Synapse 4 is pending (priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: [Action 9]: Pending (id: apache2_monitor_30000, loc: server2, priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: * [Input 8]: Pending (id: apache2_start_0, loc: server2, priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_graph: Synapse 5 is pending (priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: [Action 8]: Pending (id: apache2_start_0, loc: server2, priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: * [Input 6]: Pending (id: apache_ip_start_0, loc: server2, priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: * [Input 11]: Completed (id: apache_group_start_0, type: pseduo, priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_graph: Synapse 6 is pending (priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: [Action 10]: Pending (id: fs_drbd_start_0, loc: server2, priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: * [Input 8]: Pending (id: apache2_start_0, loc: server2, priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_elem: * [Input 11]: Completed (id: apache_group_start_0, type: pseduo, priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_graph: Synapse 7 was confirmed (priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_graph: Synapse 8 was confirmed (priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_graph: Synapse 9 was confirmed (priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: WARN: print_graph: Synapse 10 was confirmed (priority: 0) Oct 30 17:24:54 server2 crmd: [2324]: info: te_graph_trigger: Transition 8 is now complete Oct 30 17:24:54 server2 crmd: [2324]: debug: notify_crmd: Processing transition completion in state S_TRANSITION_ENGINE Oct 30 17:24:54 server2 crmd: [2324]: info: notify_crmd: Transition 8 status: done - Oct 30 17:24:54 server2 crmd: [2324]: debug: s_crmd_fsa: Processing I_TE_SUCCESS: [ 
state=S_TRANSITION_ENGINE cause=C_FSA_INTERNAL origin=notify_crmd ] Oct 30 17:24:54 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_LOG Oct 30 17:24:54 server2 crmd: [2324]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ] Oct 30 17:24:54 server2 crmd: [2324]: info: do_state_transition: Starting PEngine Recheck Timer Oct 30 17:24:54 server2 crmd: [2324]: debug: crm_timer_start: Started PEngine Recheck Timer (I_PE_CALC:900000ms), src=154 Oct 30 17:24:54 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_DC_TIMER_STOP Oct 30 17:24:54 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_INTEGRATE_TIMER_STOP Oct 30 17:24:54 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_FINALIZE_TIMER_STOP Oct 30 17:24:54 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1d0 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1d1 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 1d0 to 1d1 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1d1 to pending delivery queue Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] debug: pcmk_cluster_id_callback: Node update: server1 (1.1.5) Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] info: update_member: Node server1 now has process list: 00000000000000000000000000000002 (2) Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] info: send_member_notification: Sending membership update 396 to 2 children Oct 30 17:24:55 server2 cib: [2320]: info: ais_dispatch_message: Membership 396: quorum retained Oct 30 17:24:55 server2 cib: [2320]: info: crm_update_peer: Node server1: id=18524352 state=member addr=r(0) ip(192.168.26.1) r(1) ip(192.168.25.1) votes=1 born=396 seen=396 proc=00000000000000000000000000000002 (new) Oct 30 17:24:55 server2 crmd: [2324]: info: ais_dispatch_message: Membership 396: quorum retained Oct 30 17:24:55 server2 crmd: [2324]: notice: crmd_peer_update: Status update: Client server1/crmd now has status [offline] (DC=true) Oct 30 17:24:55 server2 crmd: [2324]: info: erase_node_from_join: Removed node server1 from join calculations: welcomed=0 itegrated=0 finalized=0 confirmed=1 Oct 30 17:24:55 server2 crmd: [2324]: debug: check_join_state: Invoked by crmd_peer_update in state: S_IDLE Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1d1 Oct 30 17:24:55 server2 crmd: [2324]: info: crm_update_peer: Node server1: id=18524352 state=member addr=r(0) ip(192.168.26.1) r(1) ip(192.168.25.1) votes=1 born=396 seen=396 proc=00000000000000000000000000000002 (new) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 crmd: [2324]: debug: post_cache_update: Updated cache after membership event 396. 
Oct 30 17:24:55 server2 crmd: [2324]: debug: ghash_update_cib_node: Updating server1: true (overwrite=false) hash_size=1 Oct 30 17:24:55 server2 cib: [2320]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/107, version=0.162.7): ok (rc=0) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 crmd: [2324]: debug: ghash_update_cib_node: Updating server2: true (overwrite=false) hash_size=1 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 1d1 to 1d3 Oct 30 17:24:55 server2 crmd: [2324]: debug: post_cache_update: post_cache_update added action A_ELECTION_CHECK to the FSA Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1d2 to pending delivery queue Oct 30 17:24:55 server2 cib: [2320]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='expected-quorum-votes'] (/cib/configuration/crm_config/cluster_property_set/nvpair[3]) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1d3 to pending delivery queue Oct 30 17:24:55 server2 cib: [2320]: notice: ais_dispatch_message: Membership 400: quorum lost Oct 30 17:24:55 server2 crmd: [2324]: debug: find_nvpair_attr_delegate: Match Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1d3 Oct 30 17:24:55 server2 cib: [2320]: info: crm_update_peer: Node server1: id=18524352 state=lost (new) addr=r(0) ip(192.168.26.1) r(1) ip(192.168.25.1) votes=1 born=396 seen=396 proc=00000000000000000000000000000002 Oct 30 17:24:55 server2 crmd: [2324]: info: crmd_ais_dispatch: Setting expected votes to 2 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1d2 Oct 30 17:24:55 server2 cib: [2320]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/110, version=0.162.9): ok (rc=0) Oct 30 17:24:55 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_ELECTION_CHECK Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1d3 Oct 30 17:24:55 server2 crmd: [2324]: debug: do_election_check: Ignore election check: we not in an election Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.162.5 -> 0.162.6 (S_IDLE) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 1d3 to 1d5 Oct 30 17:24:55 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.162.6 -> 0.162.7 (S_IDLE) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1d4 to pending delivery queue Oct 30 17:24:55 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1d5 to pending delivery queue Oct 30 17:24:55 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.162.7 -> 0.162.8 (S_IDLE) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1d5 Oct 30 17:24:55 server2
crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1d4 Oct 30 17:24:55 server2 crmd: [2324]: notice: ais_dispatch_message: Membership 400: quorum lost Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1d5 Oct 30 17:24:55 server2 crmd: [2324]: info: ais_status_callback: status: server1 is now lost (was member) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 crmd: [2324]: info: crm_update_peer: Node server1: id=18524352 state=lost (new) addr=r(0) ip(192.168.26.1) r(1) ip(192.168.25.1) votes=1 born=396 seen=396 proc=00000000000000000000000000000002 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 crmd: [2324]: debug: post_cache_update: Updated cache after membership event 400. Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 1d5 to 1d7 Oct 30 17:24:55 server2 crmd: [2324]: debug: ghash_update_cib_node: Updating server1: false (overwrite=false) hash_size=1 Oct 30 17:24:55 server2 cib: [2320]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/111, version=0.162.10): ok (rc=0) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1d6 to pending delivery queue Oct 30 17:24:55 server2 crmd: [2324]: debug: ghash_update_cib_node: Updating server2: true (overwrite=false) hash_size=1 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1d7 to pending delivery queue Oct 30 17:24:55 server2 crmd: [2324]: debug: post_cache_update: post_cache_update added action A_ELECTION_CHECK to the FSA Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1d7 Oct 30 17:24:55 server2 crmd: [2324]: info: crm_update_quorum: Updating quorum status to false (call=113) Oct 30 17:24:55 server2 cib: [2320]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/113, version=0.162.12): ok (rc=0) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Received ringid(192.168.26.1:396) seq 1d6 Oct 30 17:24:55 server2 cib: [2320]: debug: cib_process_xpath: Processing cib_query op for //cib/configuration/crm_config//cluster_property_set//nvpair[@name='expected-quorum-votes'] (/cib/configuration/crm_config/cluster_property_set/nvpair[3]) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] entering GATHER state from 11. Oct 30 17:24:55 server2 crmd: [2324]: debug: find_nvpair_attr_delegate: Match Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Creating commit token because I am the rep. Oct 30 17:24:55 server2 crmd: [2324]: info: crmd_ais_dispatch: Setting expected votes to 2 Oct 30 17:24:55 server2 cib: [2320]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/115, version=0.162.13): ok (rc=0) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Saving state aru 1d7 high seq received 1d7 Oct 30 17:24:55 server2 crmd: [2324]: debug: do_fsa_action: actions:trace: #011// A_ELECTION_CHECK Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Storing new sequence id for ring 190 Oct 30 17:24:55 server2 crmd: [2324]: debug: do_election_check: Ignore election check: we not in an election Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] entering COMMIT state.
Oct 30 17:24:55 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.162.9 -> 0.162.10 (S_IDLE) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] got commit token Oct 30 17:24:55 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] entering RECOVERY state. Oct 30 17:24:55 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.162.10 -> 0.162.11 (S_IDLE) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] TRANS [0] member 192.168.26.2: Oct 30 17:24:55 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] position [0] member 192.168.26.2: Oct 30 17:24:55 server2 crmd: [2324]: debug: match_down_event: Match found for action 116: do_shutdown on server1 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] previous ring seq 396 rep 192.168.26.1 Oct 30 17:24:55 server2 crmd: [2324]: debug: te_update_diff: Confirming do_shutdown op 116 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] aru 1d7 high delivered 1d7 received flag 1 Oct 30 17:24:55 server2 crmd: [2324]: debug: te_update_diff: Processing diff (cib_modify): 0.162.11 -> 0.162.12 (S_IDLE) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Did not need to originate any messages in recovery. Oct 30 17:24:55 server2 crmd: [2324]: debug: get_xpath_object: No match for //cib_update_result//diff-added//crm_config in /notify/cib_update_result/diff Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] got commit token Oct 30 17:24:55 server2 crmd: [2324]: info: te_graph_trigger: Transition 8 is now complete Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Sending initial ORF token Oct 30 17:24:55 server2 crmd: [2324]: debug: notify_crmd: Processing transition completion in state S_IDLE Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0 Oct 30 17:24:55 server2 crmd: [2324]: info: notify_crmd: Transition 8 status: done - Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] install seq 0 aru 0 high seq received 0 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] install seq 0 aru 0 high seq received 0 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] install seq 0 aru 0 high seq received 0 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] install seq 0 aru 0 high seq received 0 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] retrans flag count 4 token aru 0 install seq 0 aru 0 0 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Resetting old ring state Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] recovery to regular 1-0 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering to app 1d8 to 1d7 Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 400: memb=1, new=0, lost=1 Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] info: pcmk_peer_update: memb: server2 35301568 Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] info: pcmk_peer_update: 
lost: server1 18524352 Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 400: memb=1, new=0, lost=0 Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] info: pcmk_peer_update: MEMB: server2 35301568 Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] info: ais_mark_unseen_peer_dead: Node server1 was not seen in the previous transition Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] info: update_member: Node 18524352/server1 is now: lost Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] debug: pcmk_peer_update: 1 nodes changed Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] info: send_member_notification: Sending membership update 400 to 2 children Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] debug: send_cluster_id: Local update: id=35301568, born=388, seq=400 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] This node is within the primary component and will provide service. Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] entering OPERATIONAL state. Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] A processor joined or left the membership and a new membership was formed. Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 0 to 1 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 1 to pending delivery queue Oct 30 17:24:55 server2 corosync[2311]: [pcmk ] debug: pcmk_cluster_id_callback: Node update: server2 (1.1.5) Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] confchg entries 1 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Barrier Start Received From 35301568 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Barrier completion status for nodeid 35301568 = 1. Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Synchronization barrier completed Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Synchronization actions starting for (dummy CLM service) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 1 to 2 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 2 to pending delivery queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 1 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 2 to 3 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 3 to pending delivery queue Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] confchg entries 1 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Barrier Start Received From 35301568 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Barrier completion status for nodeid 35301568 = 1. 
Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Synchronization barrier completed Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Committing synchronization for (dummy CLM service) Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Synchronization actions starting for (dummy AMF service) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 2 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 3 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 3 to 4 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 4 to pending delivery queue Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] confchg entries 1 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Barrier Start Received From 35301568 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Barrier completion status for nodeid 35301568 = 1. Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Synchronization barrier completed Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Committing synchronization for (dummy AMF service) Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Synchronization actions starting for (dummy CKPT service) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 4 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 4 to 5 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 5 to pending delivery queue Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] confchg entries 1 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Barrier Start Received From 35301568 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Barrier completion status for nodeid 35301568 = 1. Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Synchronization barrier completed Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Committing synchronization for (dummy CKPT service) Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Synchronization actions starting for (dummy EVT service) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 5 to 6 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 6 to pending delivery queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 5 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 6 to 7 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 7 to pending delivery queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 6 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 7 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 7 to 8 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 8 to pending delivery queue Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] confchg entries 1 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Barrier Start Received From 35301568 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Barrier completion status for nodeid 35301568 = 1. 
Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Synchronization barrier completed Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Committing synchronization for (dummy EVT service) Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Synchronization actions starting for (corosync cluster closed process group service v1.01) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering 8 to a Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq 9 to pending delivery queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq a to pending delivery queue Oct 30 17:24:55 server2 corosync[2311]: [CPG ] chosen downlist from node r(0) ip(192.168.26.2) r(1) ip(192.168.25.2) Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] mcasted message added to pending queue Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] releasing messages up to and including 8 Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering a to b Oct 30 17:24:55 server2 corosync[2311]: [TOTEM ] Delivering MCAST message with seq b to pending delivery queue Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] confchg entries 1 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Barrier Start Received From 35301568 Oct 30 17:24:55 server2 corosync[2311]: [SYNC ] Barrier completion status for nodeid 35301568 = 1. Oct 30 17:24:55 server2 rsyslogd-2177: imuxsock begins to drop messages from pid 2311 due to rate-limiting Oct 30 17:24:57 server2 lrmd: [2321]: debug: rsc:ping:0:18: monitor Oct 30 17:24:57 server2 lrmd: [4514]: debug: perform_ra_op: resetting scheduler class to SCHED_OTHER Oct 30 17:25:01 server2 attrd_updater: [4525]: info: attrd_lazy_update: Connecting to cluster... 5 retries remaining Oct 30 17:25:01 server2 attrd_updater: [4525]: debug: init_client_ipc_comms_nodispatch: Attempting to talk on: /var/run/crm/attrd Oct 30 17:25:01 server2 attrd: [2322]: debug: attrd_local_callback: update message from attrd_updater: pingd=1000 Oct 30 17:25:01 server2 attrd: [2322]: debug: attrd_local_callback: Supplied: 1000, Current: 1000, Stored: 1000 Oct 30 17:25:01 server2 attrd_updater: [4525]: debug: attrd_update_delegate: Sent update: pingd=1000 for localhost ^C root@server2:~#