<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<META NAME="Generator" CONTENT="MS Exchange Server version 6.5.7638.1">
<TITLE>Re: Problems with migration of kvm on primary/primary cluster</TITLE>
</HEAD>
<BODY>
<!-- Converted from text/plain format -->
<P><FONT SIZE=2>Hello again!<BR>
<BR>
Now I have updated Pacemaker to 1.0.11 (Debian Squeeze backports), but the problem still exists.<BR>
I think the problem is my filesystem.<BR>
<BR>
My config:<BR>
<BR>
node virtualserver01 \<BR>
attributes standby="off"<BR>
node virtualserver02 \<BR>
attributes standby="off"<BR>
primitive dlm ocf:pacemaker:controld \<BR>
operations $id="dlm-operations" \<BR>
op start interval="0" timeout="90" \<BR>
op stop interval="0" timeout="100" \<BR>
op monitor interval="10" timeout="20" start-delay="0" \<BR>
meta target-role="started"<BR>
primitive drbd_r0 ocf:linbit:drbd \<BR>
params drbd_resource="r0" \<BR>
operations $id="drbd_r0-operations" \<BR>
op start interval="0" timeout="240" \<BR>
op promote interval="0" timeout="90" \<BR>
op demote interval="0" timeout="90" \<BR>
op stop interval="0" timeout="100" \<BR>
op monitor interval="10" timeout="20" start-delay="1min" \<BR>
op notify interval="0" timeout="90" \<BR>
meta target-role="started"<BR>
primitive fs ocf:heartbeat:Filesystem \<BR>
params device="/dev/drbd0" directory="/mnt" fstype="ocfs2" \<BR>
operations $id="fs-operations" \<BR>
op start interval="0" timeout="60" \<BR>
op stop interval="0" timeout="60" \<BR>
op monitor interval="20" timeout="40" start-delay="0" \<BR>
op notify interval="0" timeout="60" \<BR>
meta target-role="started"<BR>
primitive o2cb ocf:pacemaker:o2cb \<BR>
op monitor interval="120s" \<BR>
meta target-role="started"<BR>
ms ms_drbd_r0 drbd_r0 \<BR>
meta master-max="2" clone-max="2" notify="true" interleave="true" resource-stickiness="100"<BR>
clone dlm-clone dlm \<BR>
meta clone-max="2" interleave="true"<BR>
clone fs-clone fs \<BR>
meta clone-max="2" ordered="true" interleave="true"<BR>
clone o2cb-clone o2cb<BR>
colocation col_dlm_drbd inf: dlm-clone ms_drbd_r0:Master<BR>
colocation col_fs_o2cb inf: fs-clone o2cb-clone<BR>
colocation col_o2cb_dlm inf: o2cb-clone dlm-clone<BR>
order ord_drbd_dlm 0: ms_drbd_r0:promote dlm-clone<BR>
order ord_o2cb_after_dlm 0: dlm-clone o2cb-clone<BR>
order ord_o2cb_fs 0: o2cb-clone fs-clone<BR>
property $id="cib-bootstrap-options" \<BR>
expected-quorum-votes="2" \<BR>
stonith-enabled="false" \<BR>
dc-version="1.0.11-6e010d6b0d49a6b929d17c0114e9d2d934dc8e04" \<BR>
no-quorum-policy="ignore" \<BR>
cluster-infrastructure="openais" \<BR>
last-lrm-refresh="1312195244"<BR>
<BR>
<BR>
============<BR>
Last updated: Tue Aug 2 06:54:04 2011<BR>
Stack: openais<BR>
Current DC: virtualserver01 - partition with quorum<BR>
Version: 1.0.11-6e010d6b0d49a6b929d17c0114e9d2d934dc8e04<BR>
2 Nodes configured, 2 expected votes<BR>
4 Resources configured.<BR>
============<BR>
<BR>
Node virtualserver01: online<BR>
fs:0 (ocf::heartbeat:Filesystem) Started<BR>
dlm:0 (ocf::pacemaker:controld) Started<BR>
o2cb:0 (ocf::pacemaker:o2cb) Started<BR>
drbd_r0:0 (ocf::linbit:drbd) Master<BR>
Node virtualserver02: online<BR>
drbd_r0:1 (ocf::linbit:drbd) Master<BR>
dlm:1 (ocf::pacemaker:controld) Started<BR>
o2cb:1 (ocf::pacemaker:o2cb) Started<BR>
fs:1 (ocf::heartbeat:Filesystem) Started<BR>
<BR>
<BR>
When I shut down one node or pull its plugs, the surviving node shows in crm_mon that the filesystem is started, but I cannot access the mountpoint.<BR>
I think this is why my KVM VMs crash.<BR>
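A hunch from further reading, for what it is worth: OCFS2 and the DLM depend on working fencing. With stonith-enabled="false", the surviving node cannot confirm that the lost peer is really dead, so the DLM blocks all lock traffic and every access to /mnt hangs, which would match the symptom above. A rough sketch of what enabling STONITH could look like, assuming IPMI-capable management boards (the external/ipmi plugin choice and all addresses and credentials are placeholders, not my real values):<BR>
<BR>

```
primitive stonith_vs01 stonith:external/ipmi \
        params hostname="virtualserver01" ipaddr="192.168.1.201" \
               userid="admin" passwd="secret" interface="lan"
primitive stonith_vs02 stonith:external/ipmi \
        params hostname="virtualserver02" ipaddr="192.168.1.202" \
               userid="admin" passwd="secret" interface="lan"
location loc_stonith_vs01 stonith_vs01 -inf: virtualserver01
location loc_stonith_vs02 stonith_vs02 -inf: virtualserver02
property stonith-enabled="true"
```
<BR>
The location constraints keep each fencing device off the node it is supposed to fence, so a node can never be asked to shoot itself.<BR>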
<BR>
============<BR>
Last updated: Tue Aug 2 07:07:44 2011<BR>
Stack: openais<BR>
Current DC: virtualserver02 - partition WITHOUT quorum<BR>
Version: 1.0.11-6e010d6b0d49a6b929d17c0114e9d2d934dc8e04<BR>
2 Nodes configured, 2 expected votes<BR>
4 Resources configured.<BR>
============<BR>
<BR>
Node virtualserver01: OFFLINE<BR>
Node virtualserver02: online<BR>
fs:1 (ocf::heartbeat:Filesystem) Started<BR>
dlm:1 (ocf::pacemaker:controld) Started<BR>
drbd_r0:1 (ocf::linbit:drbd) Master<BR>
o2cb:1 (ocf::pacemaker:o2cb) Started<BR>
<BR>
Any ideas?<BR>
<BR>
Best regards<BR>
<BR>
<BR>
The log file from the surviving node:<BR>
<BR>
Aug 2 06:53:28 virtualserver02 Filesystem[8659]: INFO: Running start for /dev/drbd0 on /mnt<BR>
Aug 2 06:53:28 virtualserver02 lrmd: [1755]: info: RA output: (fs:1:start:stderr) FATAL: Module scsi_hostadapter not found.<BR>
Aug 2 06:53:28 virtualserver02 kernel: [ 533.775158] dlm: Using SCTP for communications<BR>
Aug 2 06:53:28 virtualserver02 kernel: [ 533.782151] dlm: connecting to 1694607552 sctp association 1<BR>
Aug 2 06:53:32 virtualserver02 kernel: [ 537.812934] ocfs2: Mounting device (147,0) on (node 1711384, slot 1) with ordered data mode.<BR>
Aug 2 06:53:32 virtualserver02 crmd: [1758]: info: process_lrm_event: LRM operation fs:1_start_0 (call=29, rc=0, cib-update=36, confirmed=true) ok<BR>
Aug 2 06:53:32 virtualserver02 crmd: [1758]: info: do_lrm_rsc_op: Performing key=60:12:0:3116dbc3-9da7-47fe-9546-6e1ba7030970 op=fs:1_monitor_20000 )<BR>
Aug 2 06:53:32 virtualserver02 lrmd: [1755]: info: rsc:fs:1:30: monitor<BR>
Aug 2 06:53:32 virtualserver02 crmd: [1758]: info: process_lrm_event: LRM operation fs:1_monitor_20000 (call=30, rc=0, cib-update=37, confirmed=false) ok<BR>
Aug 2 06:55:13 virtualserver02 cib: [1754]: info: cib_stats: Processed 159 operations (251.00us average, 0% utilization) in the last 10min<BR>
Aug 2 07:04:40 virtualserver02 corosync[1725]: [TOTEM ] A processor failed, forming new configuration.<BR>
Aug 2 07:04:42 virtualserver02 kernel: [ 1207.632051] block drbd0: PingAck did not arrive in time.<BR>
Aug 2 07:04:42 virtualserver02 kernel: [ 1207.632117] block drbd0: peer( Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate -> DUnknown )<BR>
Aug 2 07:04:42 virtualserver02 kernel: [ 1207.632205] block drbd0: asender terminated<BR>
Aug 2 07:04:42 virtualserver02 kernel: [ 1207.632212] block drbd0: short read expecting header on sock: r=-512<BR>
Aug 2 07:04:42 virtualserver02 kernel: [ 1207.632232] block drbd0: Creating new current UUID<BR>
Aug 2 07:04:42 virtualserver02 kernel: [ 1207.632391] block drbd0: Terminating drbd0_asender<BR>
Aug 2 07:04:42 virtualserver02 kernel: [ 1207.652246] block drbd0: Connection closed<BR>
Aug 2 07:04:42 virtualserver02 kernel: [ 1207.652315] block drbd0: conn( NetworkFailure -> Unconnected )<BR>
Aug 2 07:04:42 virtualserver02 kernel: [ 1207.652376] block drbd0: receiver terminated<BR>
Aug 2 07:04:42 virtualserver02 kernel: [ 1207.652430] block drbd0: Restarting drbd0_receiver<BR>
Aug 2 07:04:42 virtualserver02 kernel: [ 1207.652485] block drbd0: receiver (re)started<BR>
Aug 2 07:04:42 virtualserver02 kernel: [ 1207.652547] block drbd0: conn( Unconnected -> WFConnection )<BR>
Aug 2 07:04:44 virtualserver02 corosync[1725]: [pcmk ] notice: pcmk_peer_update: Transitional membership event on ring 384: memb=1, new=0, lost=1<BR>
Aug 2 07:04:44 virtualserver02 corosync[1725]: [pcmk ] info: pcmk_peer_update: memb: virtualserver02 1711384768<BR>
Aug 2 07:04:44 virtualserver02 corosync[1725]: [pcmk ] info: pcmk_peer_update: lost: virtualserver01 1694607552<BR>
Aug 2 07:04:44 virtualserver02 corosync[1725]: [pcmk ] notice: pcmk_peer_update: Stable membership event on ring 384: memb=1, new=0, lost=0<BR>
Aug 2 07:04:44 virtualserver02 kernel: [ 1209.281090] dlm: closing connection to node 1694607552<BR>
Aug 2 07:04:44 virtualserver02 corosync[1725]: [pcmk ] info: pcmk_peer_update: MEMB: virtualserver02 1711384768<BR>
Aug 2 07:04:44 virtualserver02 corosync[1725]: [pcmk ] info: ais_mark_unseen_peer_dead: Node virtualserver01 was not seen in the previous transition<BR>
Aug 2 07:04:44 virtualserver02 corosync[1725]: [pcmk ] info: update_member: Node 1694607552/virtualserver01 is now: lost<BR>
Aug 2 07:04:44 virtualserver02 corosync[1725]: [pcmk ] info: send_member_notification: Sending membership update 384 to 4 children<BR>
Aug 2 07:04:44 virtualserver02 cib: [1754]: notice: ais_dispatch: Membership 384: quorum lost<BR>
Aug 2 07:04:44 virtualserver02 cib: [1754]: info: crm_update_peer: Node virtualserver01: id=1694607552 state=lost (new) addr=r(0) ip(192.168.1.101) r(1) ip(10.0.0.101) votes=1 born=380 seen=380 proc=00000000000000000000000000013312<BR>
Aug 2 07:04:44 virtualserver02 corosync[1725]: [TOTEM ] A processor joined or left the membership and a new membership was formed.<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: notice: ais_dispatch: Membership 384: quorum lost<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: crm_update_peer: Node virtualserver01: id=1694607552 state=lost (new) addr=r(0) ip(192.168.1.101) r(1) ip(10.0.0.101) votes=1 born=380 seen=380 proc=00000000000000000000000000013312<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: WARN: check_dead_member: Our DC node (virtualserver01) left the cluster<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: do_state_transition: State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION cause=C_FSA_INTERNAL origin=check_dead_member ]<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: update_dc: Unset DC virtualserver01<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: do_state_transition: State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC cause=C_FSA_INTERNAL origin=do_election_check ]<BR>
Aug 2 07:04:44 virtualserver02 corosync[1725]: [CPG ] chosen downlist from node r(0) ip(192.168.1.102) r(1) ip(10.0.0.102)<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: do_te_control: Registering TE UUID: f679ff7d-6b65-4176-b395-216bf6324c40<BR>
Aug 2 07:04:44 virtualserver02 corosync[1725]: [MAIN ] Completed service synchronization, ready to provide service.<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: set_graph_functions: Setting custom graph functions<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: unpack_graph: Unpacked transition -1: 0 actions in 0 synapses<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: do_dc_takeover: Taking over DC status for this partition<BR>
Aug 2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_readwrite: We are now in R/W mode<BR>
Aug 2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request: Operation complete: op cib_master for section 'all' (origin=local/crmd/38, version=0.237.5): ok (rc=0)<BR>
Aug 2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/39, version=0.237.5): ok (rc=0)<BR>
Aug 2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/41, version=0.237.5): ok (rc=0)<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: join_make_offer: Making join offers based on membership 384<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: do_dc_join_offer_all: join-1: Waiting on 1 outstanding join acks<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: ais_dispatch: Membership 384: quorum still lost<BR>
Aug 2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/43, version=0.237.5): ok (rc=0)<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: crm_ais_dispatch: Setting expected votes to 2<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: config_query_callback: Checking for expired actions every 900000ms<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: config_query_callback: Sending expected-votes=2 to corosync<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: update_dc: Set DC to virtualserver02 (3.0.1)<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: ais_dispatch: Membership 384: quorum still lost<BR>
Aug 2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/46, version=0.237.5): ok (rc=0)<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: crm_ais_dispatch: Setting expected votes to 2<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: do_state_transition: State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED cause=C_FSA_INTERNAL origin=check_join_state ]<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: do_state_transition: All 1 cluster nodes responded to the join offer.<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: do_dc_join_finalize: join-1: Syncing the CIB from virtualserver02 to the rest of the cluster<BR>
Aug 2 07:04:44 virtualserver02 crmd: [1758]: info: te_connect_stonith: Attempting connection to fencing daemon...<BR>
Aug 2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request: Operation complete: op cib_modify for section crm_config (origin=local/crmd/49, version=0.237.5): ok (rc=0)<BR>
Aug 2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request: Operation complete: op cib_sync for section 'all' (origin=local/crmd/50, version=0.237.5): ok (rc=0)<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: te_connect_stonith: Connected<BR>
Aug 2 07:04:45 virtualserver02 cib: [1754]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/51, version=0.237.5): ok (rc=0)<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: do_dc_join_ack: join-1: Updating node state to member for virtualserver02<BR>
Aug 2 07:04:45 virtualserver02 cib: [1754]: info: cib_process_request: Operation complete: op cib_delete for section //node_state[@uname='virtualserver02']/lrm (origin=local/crmd/52, version=0.237.6): ok (rc=0)<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: erase_xpath_callback: Deletion of "//node_state[@uname='virtualserver02']/lrm": ok (rc=0)<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: do_state_transition: State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED cause=C_FSA_INTERNAL origin=check_join_state ]<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: do_state_transition: All 1 cluster nodes are eligible to run resources.<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: do_dc_join_final: Ensuring DC, quorum and node attributes are up-to-date<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: crm_update_quorum: Updating quorum status to false (call=56)<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: abort_transition_graph: do_te_invoke:185 - Triggered transition abort (complete=1) : Peer Cancelled<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: do_pe_invoke: Query 57: Requesting the current CIB: S_POLICY_ENGINE<BR>
Aug 2 07:04:45 virtualserver02 attrd: [1756]: info: attrd_local_callback: Sending full refresh (origin=crmd)<BR>
Aug 2 07:04:45 virtualserver02 attrd: [1756]: info: attrd_trigger_update: Sending flush op to all hosts for: shutdown (<null>)<BR>
Aug 2 07:04:45 virtualserver02 cib: [1754]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crmd/54, version=0.237.7): ok (rc=0)<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: WARN: match_down_event: No match for shutdown action on virtualserver01<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: te_update_diff: Stonith/shutdown of virtualserver01 not matched<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: abort_transition_graph: te_update_diff:198 - Triggered transition abort (complete=1, tag=node_state, id=virtualserver01, magic=NA, cib=0.237.8) : Node failure<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: do_pe_invoke: Query 58: Requesting the current CIB: S_POLICY_ENGINE<BR>
Aug 2 07:04:45 virtualserver02 cib: [1754]: info: log_data_element: cib:diff: - <cib have-quorum="1" dc-uuid="virtualserver01" admin_epoch="0" epoch="237" num_updates="8" /><BR>
Aug 2 07:04:45 virtualserver02 cib: [1754]: info: log_data_element: cib:diff: + <cib have-quorum="0" dc-uuid="virtualserver02" admin_epoch="0" epoch="238" num_updates="1" /><BR>
Aug 2 07:04:45 virtualserver02 cib: [1754]: info: cib_process_request: Operation complete: op cib_modify for section cib (origin=local/crmd/56, version=0.238.1): ok (rc=0)<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: need_abort: Aborting on change to have-quorum<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: do_pe_invoke: Query 59: Requesting the current CIB: S_POLICY_ENGINE<BR>
Aug 2 07:04:45 virtualserver02 attrd: [1756]: info: attrd_trigger_update: Sending flush op to all hosts for: terminate (<null>)<BR>
Aug 2 07:04:45 virtualserver02 attrd: [1756]: info: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_r0:0 (<null>)<BR>
Aug 2 07:04:45 virtualserver02 attrd: [1756]: info: attrd_trigger_update: Sending flush op to all hosts for: master-drbd_r0:1 (10000)<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: do_pe_invoke_callback: Invoking the PE: query=59, ref=pe_calc-dc-1312261485-15, seq=384, quorate=0<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: unpack_config: On loss of CCM Quorum: Ignore<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: determine_online_status: Node virtualserver02 is online<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: unpack_rsc_op: Operation drbd_r0:1_monitor_0 found resource drbd_r0:1 active in master mode on virtualserver02<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: clone_print: Master/Slave Set: ms_drbd_r0<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print: Masters: [ virtualserver02 ]<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print: Stopped: [ drbd_r0:0 ]<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: clone_print: Clone Set: dlm-clone<BR>
Aug 2 07:04:45 virtualserver02 attrd: [1756]: info: attrd_trigger_update: Sending flush op to all hosts for: probe_complete (true)<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print: Started: [ virtualserver02 ]<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print: Stopped: [ dlm:0 ]<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: clone_print: Clone Set: o2cb-clone<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print: Started: [ virtualserver02 ]<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print: Stopped: [ o2cb:0 ]<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: clone_print: Clone Set: fs-clone<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print: Started: [ virtualserver02 ]<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print: Stopped: [ fs:0 ]<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: native_color: Resource drbd_r0:0 cannot run anywhere<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: master_color: Promoting drbd_r0:1 (Master virtualserver02)<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: master_color: ms_drbd_r0: Promoted 1 instances of a possible 2 to master<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: master_color: Promoting drbd_r0:1 (Master virtualserver02)<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: master_color: ms_drbd_r0: Promoted 1 instances of a possible 2 to master<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: master_color: Promoting drbd_r0:1 (Master virtualserver02)<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: master_color: ms_drbd_r0: Promoted 1 instances of a possible 2 to master<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: native_color: Resource dlm:0 cannot run anywhere<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: native_color: Resource o2cb:0 cannot run anywhere<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: clone_rsc_colocation_rh: Cannot pair fs:0 with instance of o2cb-clone<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: native_color: Resource fs:0 cannot run anywhere<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave resource drbd_r0:0#011(Stopped)<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave resource drbd_r0:1#011(Master virtualserver02)<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave resource dlm:0#011(Stopped)<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave resource dlm:1#011(Started virtualserver02)<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave resource o2cb:0#011(Stopped)<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave resource o2cb:1#011(Started virtualserver02)<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave resource fs:0#011(Stopped)<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave resource fs:1#011(Started virtualserver02)<BR>
Aug 2 07:04:45 virtualserver02 cib: [19306]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-47.raw<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: unpack_graph: Unpacked transition 0: 0 actions in 0 synapses<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: do_te_invoke: Processing graph 0 (ref=pe_calc-dc-1312261485-15) derived from /var/lib/pengine/pe-input-101.bz2<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: run_graph: ====================================================<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: notice: run_graph: Transition 0 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-101.bz2): Complete<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: te_graph_trigger: Transition 0 is now complete<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: notify_crmd: Transition 0 status: done - <null><BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<BR>
Aug 2 07:04:45 virtualserver02 crmd: [1758]: info: do_state_transition: Starting PEngine Recheck Timer<BR>
Aug 2 07:04:45 virtualserver02 cib: [19306]: info: write_cib_contents: Wrote version 0.238.0 of the CIB to disk (digest: 3ad7e501b66a385cbb08f9897259f1f2)<BR>
Aug 2 07:04:45 virtualserver02 pengine: [1757]: info: process_pe_message: Transition 0: PEngine Input stored in: /var/lib/pengine/pe-input-101.bz2<BR>
Aug 2 07:04:45 virtualserver02 cib: [19306]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.n4vOgp (digest: /var/lib/heartbeat/crm/cib.9Gtl5C)<BR>
Aug 2 07:04:51 virtualserver02 cibadmin: [19316]: info: Invoked: /usr/sbin/cibadmin -Ql<BR>
Aug 2 07:04:51 virtualserver02 cibadmin: [19333]: info: Invoked: /usr/sbin/cibadmin -Ql<BR>
<BR>
</FONT>
</P>
</BODY>
</HTML>