<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=iso-8859-1">
<META NAME="Generator" CONTENT="MS Exchange Server version 6.5.7638.1">
<TITLE>Problems with migration of kvm on primary/primary cluster</TITLE>
</HEAD>
<BODY>
<P><FONT SIZE=2>Hello,<BR>
<BR>
I have a "little" problem with the live migration of a KVM VM.<BR>
I have built a two-node cluster (primary/primary) with DRBD, OCFS2 and Pacemaker.<BR>
If I trigger the migration manually with virt-manager or with "crm resource", everything works fine, but if I reboot or shut down a node,<BR>
Pacemaker STOPS the VM completely and starts it again on the other node instead of live-migrating it.<BR>
<BR>
I don't know what to try next.<BR>
I can't find the error in the configuration.<BR>
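<BR>
For reference, the manual migration that works is a plain live migration via the crm shell (a sketch; "ubuntu" is the VM resource defined below, the target node is just an example):<BR>
<BR>
# live-migrate the VM to the other node - this works without problems<BR>
crm resource migrate ubuntu virtualserver05<BR>
# clean up the location constraint that "migrate" leaves behind<BR>
crm resource unmigrate ubuntu<BR>
<BR>
Only the automatic migration on standby/shutdown fails.<BR>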
<BR>
<BR>
Node names: virtualserver03 and virtualserver05<BR>
KVM VM resource: ubuntu<BR>
<BR>
<BR>
cib:<BR>
<BR>
node virtualserver03 \<BR>
attributes standby="off"<BR>
node virtualserver05 \<BR>
attributes standby="off"<BR>
primitive resDLM ocf:pacemaker:controld \<BR>
op monitor interval="120s"<BR>
primitive resDRBD ocf:linbit:drbd \<BR>
params drbd_resource="disk0" \<BR>
operations $id="resDRBD-operations" \<BR>
op monitor interval="20" role="Master" timeout="20" \<BR>
op monitor interval="30" role="Slave" timeout="20"<BR>
primitive resFS ocf:heartbeat:Filesystem \<BR>
params device="/dev/drbd/by-res/disk0" directory="/opt" fstype="ocfs2" \<BR>
op monitor interval="120s"<BR>
primitive resO2CB ocf:pacemaker:o2cb \<BR>
op monitor interval="120s"<BR>
primitive ubuntu ocf:heartbeat:VirtualDomain \<BR>
params config="/etc/libvirt/qemu/ubuntu.xml" hypervisor="qemu:///system" migration_transport="ssh" force_stop="0" \<BR>
op start interval="0" timeout="90" \<BR>
op stop interval="0" timeout="300" \<BR>
op monitor interval="10" timeout="30" depth="0" \<BR>
op migrate_from interval="0" timeout="240" \<BR>
op migrate_to interval="0" timeout="240" \<BR>
meta allow-migrate="true" target-role="started" is-managed="true"<BR>
ms msDRBD resDRBD \<BR>
meta resource-stickines="100" notify="true" master-max="2" interleave="true"<BR>
clone cloneDLM resDLM \<BR>
meta globally-unique="false" interleave="true"<BR>
clone cloneFS resFS \<BR>
meta interleave="true" ordered="true"<BR>
clone cloneO2CB resO2CB \<BR>
meta globally-unique="false" interleave="true"<BR>
colocation colDLMDRBD inf: cloneDLM msDRBD:Master<BR>
colocation colFSO2CB inf: cloneFS cloneO2CB<BR>
colocation colO2CBDLM inf: cloneO2CB cloneDLM<BR>
order ordDLMO2CB 0: cloneDLM cloneO2CB<BR>
order ordDRBDDLM 0: msDRBD:promote cloneDLM<BR>
order ordO2CBFS 0: cloneO2CB cloneFS<BR>
property $id="cib-bootstrap-options" \<BR>
dc-version="1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd" \<BR>
cluster-infrastructure="openais" \<BR>
stonith-enabled="false" \<BR>
no-quorum-policy="ignore" \<BR>
expected-quorum-votes="2" \<BR>
last-lrm-refresh="1307433605"<BR>
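<BR>
One thing I am not sure about: there is no colocation or order constraint between the VM and cloneFS, so perhaps Pacemaker stops the filesystem while the VM is still being migrated. If missing constraints are the cause, I would guess something like the following is needed (untested sketch, in the same crm syntax as above; constraint names are made up):<BR>
<BR>
colocation colUbuntuFS inf: ubuntu cloneFS<BR>
order ordFSUbuntu inf: cloneFS ubuntu<BR>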
<BR>
<BR>
The log from virtualserver03 when it goes into standby:<BR>
<BR>
<BR>
Jun 7 09:09:23 virtualserver03 crmd: [903]: notice: run_graph: Transition 26 (Complete=2, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-52.bz2): Complete<BR>
Jun 7 09:09:23 virtualserver03 crmd: [903]: info: te_graph_trigger: Transition 26 is now complete<BR>
Jun 7 09:09:23 virtualserver03 crmd: [903]: info: notify_crmd: Transition 26 status: done - <null><BR>
Jun 7 09:09:23 virtualserver03 crmd: [903]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<BR>
Jun 7 09:09:23 virtualserver03 crmd: [903]: info: do_state_transition: Starting PEngine Recheck Timer<BR>
Jun 7 09:09:25 virtualserver03 cibadmin: [14056]: info: Invoked: /usr/sbin/cibadmin -Ql<BR>
Jun 7 09:09:32 virtualserver03 cib: [898]: WARN: send_ipc_message: IPC Channel to 1403 is not connected<BR>
Jun 7 09:09:34 virtualserver03 kernel: [ 2920.781261] vnet0: no IPv6 routers present<BR>
Jun 7 09:09:38 virtualserver03 kernel: [ 2924.891266] br0: port 2(vnet0) entering forwarding state<BR>
Jun 7 09:09:43 virtualserver03 cib: [898]: WARN: send_ipc_message: IPC Channel to 1403 is not connected<BR>
Jun 7 09:10:15 virtualserver03 cib: last message repeated 2 times<BR>
Jun 7 09:10:15 virtualserver03 cibadmin: [15019]: info: Invoked: cibadmin -Ql -o nodes<BR>
Jun 7 09:10:15 virtualserver03 crm_attribute: [15024]: info: Invoked: crm_attribute -N virtualserver03 -n standby -v on -l forever<BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="138" num_updates="3" ><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: - <configuration ><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: - <nodes ><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: - <node id="virtualserver03" ><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: - <instance_attributes id="nodes-virtualserver03" ><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: - <nvpair value="off" id="nodes-virtualserver03-standby" /><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: - </instance_attributes><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: - </node><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: - </nodes><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: - </configuration><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: - </cib><BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change<BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="139" num_updates="1" ><BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: need_abort: Aborting on change to admin_epoch<BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: + <configuration ><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: + <nodes ><BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]<BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: + <node id="virtualserver03" ><BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.<BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: + <instance_attributes id="nodes-virtualserver03" ><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: + <nvpair value="on" id="nodes-virtualserver03-standby" /><BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: do_pe_invoke: Query 160: Requesting the current CIB: S_POLICY_ENGINE<BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: + </instance_attributes><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: + </node><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: + </nodes><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: + </configuration><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: log_data_element: cib:diff: + </cib><BR>
Jun 7 09:10:15 virtualserver03 cib: [898]: info: cib_process_request: Operation complete: op cib_modify for section nodes (origin=local/crm_attribute/4, version=0.139.1): ok (rc=0)<BR>
Jun 7 09:10:15 virtualserver03 cib: [15025]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-25.raw<BR>
Jun 7 09:10:15 virtualserver03 cib: [15025]: info: write_cib_contents: Wrote version 0.139.0 of the CIB to disk (digest: 8ac9dbbc5de14a957fbfa8d372ff03c9)<BR>
Jun 7 09:10:15 virtualserver03 cib: [15025]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.gNfdj7 (digest: /var/lib/heartbeat/crm/cib.3YnDJX)<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: do_pe_invoke_callback: Invoking the PE: query=160, ref=pe_calc-dc-1307430615-172, seq=132, quorate=1<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: unpack_config: On loss of CCM Quorum: Ignore<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: unpack_status: Node virtualserver03 is in standby-mode<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: determine_online_status: Node virtualserver03 is standby<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: determine_online_status: Node virtualserver05 is online<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_print: Master/Slave Set: msDRBD<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: short_print: Masters: [ virtualserver03 virtualserver05 ]<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_print: Clone Set: cloneDLM<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: short_print: Started: [ virtualserver05 virtualserver03 ]<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_print: Clone Set: cloneO2CB<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: short_print: Started: [ virtualserver05 virtualserver03 ]<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_print: Clone Set: cloneFS<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: short_print: Started: [ virtualserver05 virtualserver03 ]<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: native_print: ubuntu#011(ocf::heartbeat:VirtualDomain):#011Started virtualserver03<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: native_merge_weights: msDRBD: Rolling back scores from cloneDLM<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: native_merge_weights: msDRBD: Rolling back scores from cloneO2CB<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: native_color: Resource resDRBD:0 cannot run anywhere<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: native_merge_weights: msDRBD: Rolling back scores from cloneDLM<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: native_merge_weights: msDRBD: Rolling back scores from cloneO2CB<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: master_color: Promoting resDRBD:1 (Master virtualserver05)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: master_color: msDRBD: Promoted 1 instances of a possible 2 to master<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: native_merge_weights: cloneDLM: Rolling back scores from cloneO2CB<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: master_color: Promoting resDRBD:1 (Master virtualserver05)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: master_color: msDRBD: Promoted 1 instances of a possible 2 to master<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: master_color: Promoting resDRBD:1 (Master virtualserver05)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: master_color: msDRBD: Promoted 1 instances of a possible 2 to master<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: native_color: Resource resDLM:1 cannot run anywhere<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: native_color: Resource resO2CB:1 cannot run anywhere<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: native_color: Resource resFS:1 cannot run anywhere<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: RecurringOp: Start recurring monitor (10s) for ubuntu on virtualserver05<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resO2CB:0 with resDLM:0 on virtualserver05<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resDLM:0 with resO2CB:0<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resDLM:0 with resO2CB:0 on virtualserver05<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resO2CB:0 with resDLM:0<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resDLM:1 with resO2CB:1 on virtualserver03<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resO2CB:1 with resDLM:1<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resDLM:0 with resDRBD:1 on virtualserver05<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resDRBD:1 with resDLM:0<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resDRBD:0 with resDLM:1 on virtualserver03<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resDLM:1 with resDRBD:0<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resDRBD:1 with resDLM:0 on virtualserver05<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resDLM:0 with resDRBD:1<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resFS:0 with resO2CB:0 on virtualserver05<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resO2CB:0 with resFS:0<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resO2CB:0 with resFS:0 on virtualserver05<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resFS:0 with resO2CB:0<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resO2CB:1 with resFS:1 on virtualserver03<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resFS:1 with resO2CB:1<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: LogActions: Demote resDRBD:0#011(Master -> Stopped virtualserver03)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: LogActions: Stop resource resDRBD:0#011(virtualserver03)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resDRBD:1#011(Master virtualserver05)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resDLM:0#011(Started virtualserver05)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: LogActions: Stop resource resDLM:1#011(virtualserver03)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resO2CB:0#011(Started virtualserver05)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: LogActions: Stop resource resO2CB:1#011(virtualserver03)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resFS:0#011(Started virtualserver05)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: LogActions: Stop resource resFS:1#011(virtualserver03)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: complex_migrate_reload: Migrating ubuntu from virtualserver03 to virtualserver05<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: notice: LogActions: Migrate resource ubuntu#011(Started virtualserver03 -> virtualserver05)<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: unpack_graph: Unpacked transition 27: 34 actions in 34 synapses<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: do_te_invoke: Processing graph 27 (ref=pe_calc-dc-1307430615-172) derived from /var/lib/pengine/pe-input-53.bz2<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 40 fired and confirmed<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 49 fired and confirmed<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 63 fired and confirmed<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 65: migrate_to ubuntu_migrate_to_0 on virtualserver03 (local)<BR>
Jun 7 09:10:15 virtualserver03 pengine: [902]: info: process_pe_message: Transition 27: PEngine Input stored in: /var/lib/pengine/pe-input-53.bz2<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: cancel_op: operation monitor[56] on ocf::VirtualDomain::ubuntu for client 903, its parameters: hypervisor=[qemu:///system] CRM_meta_depth=[0] CRM_meta_interval=[10000] force_stop=[0] config=[/etc/libvirt/qemu/ubuntu.xml] depth=[0] crm_feature_set=[3.0.1] CRM_meta_name=[monitor] migration_transport=[ssh] CRM_meta_timeout=[30000] cancelled<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: do_lrm_rsc_op: Performing key=65:27:0:f9c426e5-68d1-49f0-ae7f-0eff420e35a2 op=ubuntu_migrate_to_0 )<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: rsc:ubuntu:57: migrate_to<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation ubuntu_monitor_10000 (call=56, status=1, cib-update=0, confirmed=true) Cancelled<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 84: notify resDRBD:0_pre_notify_demote_0 on virtualserver03 (local)<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: do_lrm_rsc_op: Performing key=84:27:0:f9c426e5-68d1-49f0-ae7f-0eff420e35a2 op=resDRBD:0_notify_0 )<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: rsc:resDRBD:0:58: notify<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 86: notify resDRBD:1_pre_notify_demote_0 on virtualserver05<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 60: stop resFS:1_stop_0 on virtualserver03 (local)<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: cancel_op: operation monitor[45] on ocf::Filesystem::resFS:1 for client 903, its parameters: CRM_meta_clone=[1] fstype=[ocfs2] device=[/dev/drbd/by-res/disk0] CRM_meta_clone_max=[2] crm_feature_set=[3.0.1] CRM_meta_globally_unique=[false] directory=[/opt] CRM_meta_name=[monitor] CRM_meta_timeout=[20000] CRM_meta_interval=[120000] cancelled<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: do_lrm_rsc_op: Performing key=60:27:0:f9c426e5-68d1-49f0-ae7f-0eff420e35a2 op=resFS:1_stop_0 )<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: rsc:resFS:1:59: stop<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation resFS:1_monitor_120000 (call=45, status=1, cib-update=0, confirmed=true) Cancelled<BR>
Jun 7 09:10:15 virtualserver03 Filesystem[15028]: INFO: Running stop for /dev/drbd/by-res/disk0 on /opt<BR>
Jun 7 09:10:15 virtualserver03 VirtualDomain[15026]: INFO: ubuntu: Starting live migration to virtualserver05 (using remote hypervisor URI qemu+ssh://virtualserver05/system).<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: match_graph_event: Action resDRBD:1_pre_notify_demote_0 (86) confirmed on virtualserver05 (rc=0)<BR>
Jun 7 09:10:15 virtualserver03 Filesystem[15028]: INFO: Trying to unmount /opt<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation resDRBD:0_notify_0 (call=58, rc=0, cib-update=161, confirmed=true) ok<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: RA output: (resFS:1:stop:stderr) umount: /opt: device is busy.#012 (In some cases useful info about processes that use#012 the device is found by lsof(8) or fuser(1))<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: match_graph_event: Action resDRBD:0_pre_notify_demote_0 (84) confirmed on virtualserver03 (rc=0)<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 41 fired and confirmed<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 38 fired and confirmed<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 14: demote resDRBD:0_demote_0 on virtualserver03 (local)<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: cancel_op: operation monitor[40] on ocf::drbd::resDRBD:0 for client 903, its parameters: CRM_meta_clone=[0] CRM_meta_timeout=[20000] CRM_meta_notify_slave_resource=[ ] CRM_meta_notify_active_resource=[ ] CRM_meta_notify_demote_uname=[ ] drbd_resource=[disk0] CRM_meta_notify_stop_resource=[ ] CRM_meta_notify_master_resource=[resDRBD:1 ] CRM_meta_clone_max=[2] CRM_meta_notify_demote_resource=[ ] CRM_meta_notify_start_resource=[resDRBD:0 ] CRM_meta_notify_master_uname=[virtualserver05 ] CRM_meta_master_max=[2] CRM_meta_ cancelled<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: do_lrm_rsc_op: Performing key=14:27:0:f9c426e5-68d1-49f0-ae7f-0eff420e35a2 op=resDRBD:0_demote_0 )<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: rsc:resDRBD:0:60: demote<BR>
Jun 7 09:10:15 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation resDRBD:0_monitor_20000 (call=40, status=1, cib-update=0, confirmed=true) Cancelled<BR>
Jun 7 09:10:15 virtualserver03 Filesystem[15028]: ERROR: Couldn't unmount /opt; trying cleanup with SIGTERM<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: RA output: (resFS:1:stop:stderr) /opt:<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: RA output: (resFS:1:stop:stderr) <BR>
Jun 7 09:10:15 virtualserver03 lrmd: last message repeated 14 times<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: RA output: (resFS:1:stop:stdout) 14027<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: RA output: (resFS:1:stop:stderr)<BR>
Jun 7 09:10:15 virtualserver03 Filesystem[15028]: INFO: Some processes on /opt were signalled<BR>
Jun 7 09:10:15 virtualserver03 kernel: [ 2961.696583] block drbd0: State change failed: Device is held open by someone<BR>
Jun 7 09:10:15 virtualserver03 kernel: [ 2961.703532] br0: port 2(vnet0) entering disabled state<BR>
Jun 7 09:10:15 virtualserver03 kernel: [ 2961.706567] block drbd0: state = { cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate r--- }<BR>
Jun 7 09:10:15 virtualserver03 kernel: [ 2961.706567] block drbd0: wanted = { cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate r--- }<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:demote:stderr) 0: State change failed: (-12) Device is held open by someone<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:demote:stderr) Command 'drbdsetup 0 secondary' terminated with exit code 11<BR>
Jun 7 09:10:15 virtualserver03 drbd[15127]: ERROR: disk0: Called drbdadm -c /etc/drbd.conf secondary disk0<BR>
Jun 7 09:10:15 virtualserver03 drbd[15127]: ERROR: disk0: Exit code 11<BR>
Jun 7 09:10:15 virtualserver03 drbd[15127]: ERROR: disk0: Command output:<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:demote:stdout)<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:demote:stderr) 0: State change failed: (-12) Device is held open by someone<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:demote:stderr) Command 'drbdsetup 0 secondary' terminated with exit code 11<BR>
Jun 7 09:10:15 virtualserver03 drbd[15127]: ERROR: disk0: Called drbdadm -c /etc/drbd.conf secondary disk0<BR>
Jun 7 09:10:15 virtualserver03 drbd[15127]: ERROR: disk0: Exit code 11<BR>
Jun 7 09:10:15 virtualserver03 drbd[15127]: ERROR: disk0: Command output:<BR>
Jun 7 09:10:15 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:demote:stdout)<BR>
Jun 7 09:10:15 virtualserver03 kernel: [ 2961.791246] device vnet0 left promiscuous mode<BR>
Jun 7 09:10:15 virtualserver03 kernel: [ 2961.791741] block drbd0: State change failed: Device is held open by someone<BR>
Jun 7 09:10:15 virtualserver03 kernel: [ 2961.791748] block drbd0: state = { cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate r--- }<BR>
Jun 7 09:10:15 virtualserver03 kernel: [ 2961.791753] block drbd0: wanted = { cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate r--- }<BR>
Jun 7 09:10:15 virtualserver03 kernel: [ 2961.866998] br0: port 2(vnet0) entering disabled state<BR>
Jun 7 09:10:16 virtualserver03 kernel: [ 2962.249486] type=1505 audit(1307430616.014:25): operation="profile_remove" pid=15205 name="libvirt-147ae184-c43d-80d3-9ce1-67c3801154ae" namespace="root"<BR>
Jun 7 09:10:16 virtualserver03 cib: [898]: WARN: send_ipc_message: IPC Channel to 1403 is not connected<BR>
Jun 7 09:10:16 virtualserver03 kernel: [ 2962.814480] block drbd0: State change failed: Device is held open by someone<BR>
Jun 7 09:10:16 virtualserver03 kernel: [ 2962.824463] block drbd0: state = { cs:Connected ro:Primary/Primary ds:UpToDate/UpToDate r--- }<BR>
Jun 7 09:10:16 virtualserver03 kernel: [ 2962.824463] block drbd0: wanted = { cs:Connected ro:Secondary/Primary ds:UpToDate/UpToDate r--- }<BR>
Jun 7 09:10:16 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:demote:stderr) 0: State change failed: (-12) Device is held open by someone<BR>
Jun 7 09:10:16 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:demote:stderr) Command 'drbdsetup 0 secondary' terminated with exit code 11<BR>
Jun 7 09:10:16 virtualserver03 drbd[15127]: ERROR: disk0: Called drbdadm -c /etc/drbd.conf secondary disk0<BR>
Jun 7 09:10:16 virtualserver03 drbd[15127]: ERROR: disk0: Exit code 11<BR>
Jun 7 09:10:16 virtualserver03 drbd[15127]: ERROR: disk0: Command output:<BR>
Jun 7 09:10:16 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:demote:stdout)<BR>
Jun 7 09:10:16 virtualserver03 libvirtd: 09:10:16.668: error : qemudDomainMigratePerform:7615 : Requested operation is not valid: domain is not running<BR>
Jun 7 09:10:16 virtualserver03 kernel: [ 2962.913569] ocfs2: Unmounting device (147,0) on (node 1728096)<BR>
Jun 7 09:10:16 virtualserver03 Filesystem[15028]: INFO: unmounted /opt successfully<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation resFS:1_stop_0 (call=59, rc=0, cib-update=162, confirmed=true) ok<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: match_graph_event: Action resFS:1_stop_0 (60) confirmed on virtualserver03 (rc=0)<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 64 fired and confirmed<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 56 fired and confirmed<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 53: stop resO2CB:1_stop_0 on virtualserver03 (local)<BR>
Jun 7 09:10:16 virtualserver03 lrmd: [900]: info: cancel_op: operation monitor[43] on ocf::o2cb::resO2CB:1 for client 903, its parameters: CRM_meta_clone_max=[2] CRM_meta_name=[monitor] crm_feature_set=[3.0.1] CRM_meta_timeout=[20000] CRM_meta_globally_unique=[false] CRM_meta_interval=[120000] CRM_meta_clone=[1] cancelled<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: do_lrm_rsc_op: Performing key=53:27:0:f9c426e5-68d1-49f0-ae7f-0eff420e35a2 op=resO2CB:1_stop_0 )<BR>
Jun 7 09:10:16 virtualserver03 lrmd: [900]: info: rsc:resO2CB:1:61: stop<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation resO2CB:1_monitor_120000 (call=43, status=1, cib-update=0, confirmed=true) Cancelled<BR>
Jun 7 09:10:16 virtualserver03 o2cb[15259]: INFO: Stopping resO2CB:1<BR>
Jun 7 09:10:16 virtualserver03 o2cb[15259]: INFO: Stopping ocfs2_controld.pcmk<BR>
Jun 7 09:10:16 virtualserver03 ocfs2_controld[6156]: Caught signal 15, exiting<BR>
Jun 7 09:10:16 virtualserver03 lrmd: [900]: info: RA output: (ubuntu:migrate_to:stderr) error: Unknown failure<BR>
Jun 7 09:10:16 virtualserver03 VirtualDomain[15026]: ERROR: ubuntu: live migration to qemu+ssh://virtualserver05/system failed: 1<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation ubuntu_migrate_to_0 (call=57, rc=1, cib-update=163, confirmed=true) unknown error<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: WARN: status_from_rc: Action 65 (ubuntu_migrate_to_0) on virtualserver03 failed (target: 0 vs. rc: 1): Error<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: WARN: update_failcount: Updating failcount for ubuntu on virtualserver03 after failed stop: rc=1 (update=INFINITY, time=1307430616)<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: abort_transition_graph: match_graph_event:272 - Triggered transition abort (complete=0, tag=lrm_rsc_op, id=ubuntu_stop_0, magic=0:1;65:27:0:f9c426e5-68d1-49f0-ae7f-0eff420e35a2, cib=0.139.5) : Event failed<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: update_abort_priority: Abort priority upgraded from 0 to 1<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: update_abort_priority: Abort action done superceeded by restart<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: match_graph_event: Action ubuntu_stop_0 (65) confirmed on virtualserver03 (rc=4)<BR>
Jun 7 09:10:16 virtualserver03 attrd: [901]: info: attrd_trigger_update: Sending flush op to all hosts for: fail-count-ubuntu (INFINITY)<BR>
Jun 7 09:10:16 virtualserver03 attrd: [901]: info: attrd_perform_update: Sent update 79: fail-count-ubuntu=INFINITY<BR>
Jun 7 09:10:16 virtualserver03 attrd: [901]: info: attrd_trigger_update: Sending flush op to all hosts for: last-failure-ubuntu (1307430616)<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=virtualserver03, magic=NA, cib=0.139.6) : Transient attribute: update<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: update_abort_priority: Abort priority upgraded from 1 to 1000000<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: update_abort_priority: 'Event failed' abort superceeded<BR>
Jun 7 09:10:16 virtualserver03 attrd: [901]: info: attrd_perform_update: Sent update 81: last-failure-ubuntu=1307430616<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=virtualserver03, magic=NA, cib=0.139.7) : Transient attribute: update<BR>
Jun 7 09:10:16 virtualserver03 kernel: [ 2963.161914] ocfs2: Unregistered cluster interface user<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation resO2CB:1_stop_0 (call=61, rc=0, cib-update=164, confirmed=true) ok<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: match_graph_event: Action resO2CB:1_stop_0 (53) confirmed on virtualserver03 (rc=0)<BR>
Jun 7 09:10:16 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 57 fired and confirmed<BR>
Jun 7 09:10:17 virtualserver03 cibadmin: [15342]: info: Invoked: /usr/sbin/cibadmin -Ql<BR>
Jun 7 09:10:17 virtualserver03 kernel: [ 2963.898376] block drbd0: role( Primary -> Secondary )<BR>
Jun 7 09:10:17 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:demote:stdout)<BR>
Jun 7 09:10:18 virtualserver03 cibadmin: [15385]: info: Invoked: /usr/sbin/cibadmin -Ql<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation resDRBD:0_demote_0 (call=60, rc=0, cib-update=165, confirmed=true) ok<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: match_graph_event: Action resDRBD:0_demote_0 (14) confirmed on virtualserver03 (rc=0)<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 39 fired and confirmed<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 42 fired and confirmed<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 85: notify resDRBD:0_post_notify_demote_0 on virtualserver03 (local)<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: do_lrm_rsc_op: Performing key=85:27:0:f9c426e5-68d1-49f0-ae7f-0eff420e35a2 op=resDRBD:0_notify_0 )<BR>
Jun 7 09:10:18 virtualserver03 lrmd: [900]: info: rsc:resDRBD:0:62: notify<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 87: notify resDRBD:1_post_notify_demote_0 on virtualserver05<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: match_graph_event: Action resDRBD:1_post_notify_demote_0 (87) confirmed on virtualserver05 (rc=0)<BR>
Jun 7 09:10:18 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:notify:stdout)<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation resDRBD:0_notify_0 (call=62, rc=0, cib-update=166, confirmed=true) ok<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: match_graph_event: Action resDRBD:0_post_notify_demote_0 (85) confirmed on virtualserver03 (rc=0)<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 43 fired and confirmed<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: run_graph: ====================================================<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: notice: run_graph: Transition 27 (Complete=19, Pending=0, Fired=0, Skipped=10, Incomplete=5, Source=/var/lib/pengine/pe-input-53.bz2): Stopped<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_graph_trigger: Transition 27 is now complete<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: do_pe_invoke: Query 167: Requesting the current CIB: S_POLICY_ENGINE<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: do_pe_invoke_callback: Invoking the PE: query=167, ref=pe_calc-dc-1307430618-181, seq=132, quorate=1<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: unpack_config: On loss of CCM Quorum: Ignore<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: unpack_status: Node virtualserver03 is in standby-mode<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: determine_online_status: Node virtualserver03 is standby<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: WARN: unpack_rsc_op: Processing failed op ubuntu_stop_0 on virtualserver03: unknown error (1)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: native_add_running: resource ubuntu isnt managed<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: determine_online_status: Node virtualserver05 is online<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: clone_print: Master/Slave Set: msDRBD<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: short_print: Masters: [ virtualserver05 ]<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: short_print: Slaves: [ virtualserver03 ]<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: clone_print: Clone Set: cloneDLM<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: short_print: Started: [ virtualserver05 virtualserver03 ]<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: clone_print: Clone Set: cloneO2CB<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: short_print: Started: [ virtualserver05 ]<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: short_print: Stopped: [ resO2CB:1 ]<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: clone_print: Clone Set: cloneFS<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: short_print: Started: [ virtualserver05 ]<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: short_print: Stopped: [ resFS:1 ]<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: native_print: ubuntu#011(ocf::heartbeat:VirtualDomain):#011Started virtualserver03 (unmanaged) FAILED<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: get_failcount: ubuntu has failed 1000000 times on virtualserver03<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: WARN: common_apply_stickiness: Forcing ubuntu away from virtualserver03 after 1000000 failures (max=1000000)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: native_merge_weights: msDRBD: Rolling back scores from cloneDLM<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: native_merge_weights: msDRBD: Rolling back scores from cloneO2CB<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: native_color: Resource resDRBD:0 cannot run anywhere<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: native_merge_weights: msDRBD: Rolling back scores from cloneDLM<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: native_merge_weights: msDRBD: Rolling back scores from cloneO2CB<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: master_color: Promoting resDRBD:1 (Master virtualserver05)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: master_color: msDRBD: Promoted 1 instances of a possible 2 to master<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: native_merge_weights: cloneDLM: Rolling back scores from cloneO2CB<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: master_color: Promoting resDRBD:1 (Master virtualserver05)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: master_color: msDRBD: Promoted 1 instances of a possible 2 to master<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: master_color: Promoting resDRBD:1 (Master virtualserver05)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: master_color: msDRBD: Promoted 1 instances of a possible 2 to master<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: native_color: Resource resDLM:1 cannot run anywhere<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: native_color: Resource resO2CB:1 cannot run anywhere<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: native_color: Resource resFS:1 cannot run anywhere<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: native_color: Unmanaged resource ubuntu allocated to 'nowhere': failed<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resO2CB:0 with resDLM:0 on virtualserver05<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resDLM:0 with resO2CB:0<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resDLM:0 with resO2CB:0 on virtualserver05<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resO2CB:0 with resDLM:0<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resDLM:0 with resDRBD:1 on virtualserver05<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resDRBD:1 with resDLM:0<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resDRBD:0 with resDLM:1 on virtualserver03<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resDLM:1 with resDRBD:0<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resDRBD:1 with resDLM:0 on virtualserver05<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resDLM:0 with resDRBD:1<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resFS:0 with resO2CB:0 on virtualserver05<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resO2CB:0 with resFS:0<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resO2CB:0 with resFS:0 on virtualserver05<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resFS:0 with resO2CB:0<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: LogActions: Stop resource resDRBD:0#011(virtualserver03)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resDRBD:1#011(Master virtualserver05)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resDLM:0#011(Started virtualserver05)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: LogActions: Stop resource resDLM:1#011(virtualserver03)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resO2CB:0#011(Started virtualserver05)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resO2CB:1#011(Stopped)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resFS:0#011(Started virtualserver05)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resFS:1#011(Stopped)<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: notice: LogActions: Leave resource ubuntu#011(Started unmanaged)<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: unpack_graph: Unpacked transition 28: 14 actions in 14 synapses<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: do_te_invoke: Processing graph 28 (ref=pe_calc-dc-1307430618-181) derived from /var/lib/pengine/pe-input-54.bz2<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 24 fired and confirmed<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 45 fired and confirmed<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 76: notify resDRBD:0_pre_notify_stop_0 on virtualserver03 (local)<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: do_lrm_rsc_op: Performing key=76:28:0:f9c426e5-68d1-49f0-ae7f-0eff420e35a2 op=resDRBD:0_notify_0 )<BR>
Jun 7 09:10:18 virtualserver03 lrmd: [900]: info: rsc:resDRBD:0:63: notify<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 77: notify resDRBD:1_pre_notify_stop_0 on virtualserver05<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 42: stop resDLM:1_stop_0 on virtualserver03 (local)<BR>
Jun 7 09:10:18 virtualserver03 lrmd: [900]: info: cancel_op: operation monitor[41] on ocf::controld::resDLM:1 for client 903, its parameters: CRM_meta_clone_max=[2] CRM_meta_name=[monitor] crm_feature_set=[3.0.1] CRM_meta_timeout=[20000] CRM_meta_globally_unique=[false] CRM_meta_interval=[120000] CRM_meta_clone=[1] cancelled<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: do_lrm_rsc_op: Performing key=42:28:0:f9c426e5-68d1-49f0-ae7f-0eff420e35a2 op=resDLM:1_stop_0 )<BR>
Jun 7 09:10:18 virtualserver03 lrmd: [900]: info: rsc:resDLM:1:64: stop<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation resDLM:1_monitor_120000 (call=41, status=1, cib-update=0, confirmed=true) Cancelled<BR>
Jun 7 09:10:18 virtualserver03 pengine: [902]: info: process_pe_message: Transition 28: PEngine Input stored in: /var/lib/pengine/pe-input-54.bz2<BR>
Jun 7 09:10:18 virtualserver03 kernel: [ 2965.048034] dlm: closing connection to node 1761650880<BR>
Jun 7 09:10:18 virtualserver03 kernel: [ 2965.059725] dlm: closing connection to node 1728096448<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: match_graph_event: Action resDRBD:1_pre_notify_stop_0 (77) confirmed on virtualserver05 (rc=0)<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation resDRBD:0_notify_0 (call=63, rc=0, cib-update=168, confirmed=true) ok<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: match_graph_event: Action resDRBD:0_pre_notify_stop_0 (76) confirmed on virtualserver03 (rc=0)<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 25 fired and confirmed<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 22 fired and confirmed<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 11: stop resDRBD:0_stop_0 on virtualserver03 (local)<BR>
Jun 7 09:10:18 virtualserver03 crmd: [903]: info: do_lrm_rsc_op: Performing key=11:28:0:f9c426e5-68d1-49f0-ae7f-0eff420e35a2 op=resDRBD:0_stop_0 )<BR>
Jun 7 09:10:18 virtualserver03 lrmd: [900]: info: rsc:resDRBD:0:65: stop<BR>
Jun 7 09:10:18 virtualserver03 kernel: [ 2965.141353] block drbd0: peer( Primary -> Unknown ) conn( Connected -> Disconnecting ) pdsk( UpToDate -> DUnknown )<BR>
Jun 7 09:10:18 virtualserver03 kernel: [ 2965.150786] block drbd0: meta connection shut down by peer.<BR>
Jun 7 09:10:18 virtualserver03 kernel: [ 2965.176483] block drbd0: asender terminated<BR>
Jun 7 09:10:18 virtualserver03 kernel: [ 2965.188282] block drbd0: Terminating asender thread<BR>
Jun 7 09:10:18 virtualserver03 kernel: [ 2965.199854] block drbd0: Connection closed<BR>
Jun 7 09:10:18 virtualserver03 kernel: [ 2965.211136] block drbd0: conn( Disconnecting -> StandAlone )<BR>
Jun 7 09:10:19 virtualserver03 kernel: [ 2965.222421] block drbd0: receiver terminated<BR>
Jun 7 09:10:19 virtualserver03 kernel: [ 2965.222468] block drbd0: disk( UpToDate -> Diskless )<BR>
Jun 7 09:10:19 virtualserver03 kernel: [ 2965.244230] block drbd0: Terminating receiver thread<BR>
Jun 7 09:10:19 virtualserver03 kernel: [ 2965.252864] block drbd0: State change failed: Disk state is lower than outdated<BR>
Jun 7 09:10:19 virtualserver03 kernel: [ 2965.252870] block drbd0: state = { cs:StandAlone ro:Secondary/Unknown ds:Diskless/DUnknown r--- }<BR>
Jun 7 09:10:19 virtualserver03 kernel: [ 2965.252874] block drbd0: wanted = { cs:StandAlone ro:Secondary/Unknown ds:Outdated/DUnknown r--- }<BR>
Jun 7 09:10:19 virtualserver03 kernel: [ 2965.319324] block drbd0: drbd_bm_resize called with capacity == 0<BR>
Jun 7 09:10:19 virtualserver03 kernel: [ 2965.330489] block drbd0: worker terminated<BR>
Jun 7 09:10:19 virtualserver03 kernel: [ 2965.341395] block drbd0: Terminating worker thread<BR>
Jun 7 09:10:19 virtualserver03 attrd: [901]: info: attrd_trigger_update: Sending flush op to all hosts for: master-resDRBD:0 (-INFINITY)<BR>
Jun 7 09:10:19 virtualserver03 attrd: [901]: info: attrd_perform_update: Sent update 83: master-resDRBD:0=-INFINITY<BR>
Jun 7 09:10:19 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:stop:stdout) #012<BR>
Jun 7 09:10:19 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:stop:stdout)<BR>
Jun 7 09:10:19 virtualserver03 crmd: [903]: info: abort_transition_graph: te_update_diff:146 - Triggered transition abort (complete=0, tag=transient_attributes, id=virtualserver03, magic=NA, cib=0.139.14) : Transient attribute: update<BR>
Jun 7 09:10:19 virtualserver03 crmd: [903]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000<BR>
Jun 7 09:10:19 virtualserver03 crmd: [903]: info: update_abort_priority: Abort action done superceeded by restart<BR>
Jun 7 09:10:19 virtualserver03 crm_attribute: [15500]: info: Invoked: crm_attribute -N virtualserver03 -n master-resDRBD:0 -l reboot -D<BR>
Jun 7 09:10:19 virtualserver03 attrd: [901]: info: attrd_trigger_update: Sending flush op to all hosts for: master-resDRBD:0 (<null>)<BR>
Jun 7 09:10:19 virtualserver03 attrd: [901]: info: attrd_perform_update: Sent delete 85: node=virtualserver03, attr=master-resDRBD:0, id=<n/a>, set=(null), section=status<BR>
Jun 7 09:10:19 virtualserver03 lrmd: [900]: info: RA output: (resDRBD:0:stop:stdout)<BR>
Jun 7 09:10:19 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation resDRBD:0_stop_0 (call=65, rc=0, cib-update=169, confirmed=true) ok<BR>
Jun 7 09:10:19 virtualserver03 crmd: [903]: info: abort_transition_graph: te_update_diff:157 - Triggered transition abort (complete=0, tag=transient_attributes, id=virtualserver03, magic=NA, cib=0.139.15) : Transient attribute: removal<BR>
Jun 7 09:10:19 virtualserver03 attrd: [901]: info: attrd_perform_update: Sent delete -22: node=virtualserver03, attr=master-resDRBD:0, id=<n/a>, set=(null), section=status<BR>
Jun 7 09:10:19 virtualserver03 crmd: [903]: info: match_graph_event: Action resDRBD:0_stop_0 (11) confirmed on virtualserver03 (rc=0)<BR>
Jun 7 09:10:19 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 23 fired and confirmed<BR>
Jun 7 09:10:19 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 26 fired and confirmed<BR>
Jun 7 09:10:19 virtualserver03 crmd: [903]: info: te_rsc_command: Initiating action 78: notify resDRBD:1_post_notify_stop_0 on virtualserver05<BR>
Jun 7 09:10:19 virtualserver03 crmd: [903]: info: match_graph_event: Action resDRBD:1_post_notify_stop_0 (78) confirmed on virtualserver05 (rc=0)<BR>
Jun 7 09:10:19 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 27 fired and confirmed<BR>
Jun 7 09:10:19 virtualserver03 lrmd: [900]: info: RA output: (resDLM:1:stop:stderr) dlm_controld.pcmk: no process found<BR>
Jun 7 09:10:20 virtualserver03 cibadmin: [15532]: info: Invoked: /usr/sbin/cibadmin -Ql<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: process_lrm_event: LRM operation resDLM:1_stop_0 (call=64, rc=0, cib-update=170, confirmed=true) ok<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: match_graph_event: Action resDLM:1_stop_0 (42) confirmed on virtualserver03 (rc=0)<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: te_pseudo_action: Pseudo action 46 fired and confirmed<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: run_graph: ====================================================<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: notice: run_graph: Transition 28 (Complete=13, Pending=0, Fired=0, Skipped=1, Incomplete=0, Source=/var/lib/pengine/pe-input-54.bz2): Stopped<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: te_graph_trigger: Transition 28 is now complete<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: do_state_transition: All 2 cluster nodes are eligible to run resources.<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: do_pe_invoke: Query 171: Requesting the current CIB: S_POLICY_ENGINE<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: do_pe_invoke_callback: Invoking the PE: query=171, ref=pe_calc-dc-1307430620-187, seq=132, quorate=1<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: unpack_config: On loss of CCM Quorum: Ignore<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: unpack_status: Node virtualserver03 is in standby-mode<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: determine_online_status: Node virtualserver03 is standby<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: WARN: unpack_rsc_op: Processing failed op ubuntu_stop_0 on virtualserver03: unknown error (1)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: native_add_running: resource ubuntu isnt managed<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: determine_online_status: Node virtualserver05 is online<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: clone_print: Master/Slave Set: msDRBD<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: short_print: Masters: [ virtualserver05 ]<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: short_print: Stopped: [ resDRBD:0 ]<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: clone_print: Clone Set: cloneDLM<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: short_print: Started: [ virtualserver05 ]<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: short_print: Stopped: [ resDLM:1 ]<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: clone_print: Clone Set: cloneO2CB<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: short_print: Started: [ virtualserver05 ]<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: short_print: Stopped: [ resO2CB:1 ]<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: clone_print: Clone Set: cloneFS<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: short_print: Started: [ virtualserver05 ]<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: short_print: Stopped: [ resFS:1 ]<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: native_print: ubuntu#011(ocf::heartbeat:VirtualDomain):#011Started virtualserver03 (unmanaged) FAILED<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: get_failcount: ubuntu has failed 1000000 times on virtualserver03<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: WARN: common_apply_stickiness: Forcing ubuntu away from virtualserver03 after 1000000 failures (max=1000000)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: native_merge_weights: msDRBD: Rolling back scores from cloneDLM<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: native_merge_weights: msDRBD: Rolling back scores from cloneO2CB<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: native_color: Resource resDRBD:0 cannot run anywhere<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: native_merge_weights: msDRBD: Rolling back scores from cloneDLM<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: native_merge_weights: msDRBD: Rolling back scores from cloneO2CB<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: master_color: Promoting resDRBD:1 (Master virtualserver05)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: master_color: msDRBD: Promoted 1 instances of a possible 2 to master<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: native_merge_weights: cloneDLM: Rolling back scores from cloneO2CB<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: master_color: Promoting resDRBD:1 (Master virtualserver05)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: master_color: msDRBD: Promoted 1 instances of a possible 2 to master<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: master_color: Promoting resDRBD:1 (Master virtualserver05)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: master_color: msDRBD: Promoted 1 instances of a possible 2 to master<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: native_color: Resource resDLM:1 cannot run anywhere<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: native_color: Resource resO2CB:1 cannot run anywhere<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: native_color: Resource resFS:1 cannot run anywhere<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: native_color: Unmanaged resource ubuntu allocated to 'nowhere': failed<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resO2CB:0 with resDLM:0 on virtualserver05<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resDLM:0 with resO2CB:0<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resDLM:0 with resO2CB:0 on virtualserver05<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resO2CB:0 with resDLM:0<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resDLM:0 with resDRBD:1 on virtualserver05<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resDRBD:1 with resDLM:0<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resDRBD:1 with resDLM:0 on virtualserver05<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resDLM:0 with resDRBD:1<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resFS:0 with resO2CB:0 on virtualserver05<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resO2CB:0 with resFS:0<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: find_compatible_child: Colocating resO2CB:0 with resFS:0 on virtualserver05<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: clone_rsc_order_lh: Interleaving resFS:0 with resO2CB:0<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resDRBD:0#011(Stopped)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resDRBD:1#011(Master virtualserver05)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resDLM:0#011(Started virtualserver05)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resDLM:1#011(Stopped)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resO2CB:0#011(Started virtualserver05)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resO2CB:1#011(Stopped)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resFS:0#011(Started virtualserver05)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: LogActions: Leave resource resFS:1#011(Stopped)<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: notice: LogActions: Leave resource ubuntu#011(Started unmanaged)<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: unpack_graph: Unpacked transition 29: 0 actions in 0 synapses<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: do_te_invoke: Processing graph 29 (ref=pe_calc-dc-1307430620-187) derived from /var/lib/pengine/pe-input-55.bz2<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: run_graph: ====================================================<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: notice: run_graph: Transition 29 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pengine/pe-input-55.bz2): Complete<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: te_graph_trigger: Transition 29 is now complete<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: notify_crmd: Transition 29 status: done - <null><BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]<BR>
Jun 7 09:10:20 virtualserver03 crmd: [903]: info: do_state_transition: Starting PEngine Recheck Timer<BR>
Jun 7 09:10:20 virtualserver03 pengine: [902]: info: process_pe_message: Transition 29: PEngine Input stored in: /var/lib/pengine/pe-input-55.bz2<BR>
Jun 7 09:10:22 virtualserver03 cibadmin: [15535]: info: Invoked: /usr/sbin/cibadmin -Ql<BR>
<BR>
<BR>
crm_mon -n<BR>
<BR>
============<BR>
Last updated: Tue Jun 7 10:00:46 2011<BR>
Stack: openais<BR>
Current DC: virtualserver05 - partition with quorum<BR>
Version: 1.0.8-042548a451fce8400660f6031f4da6f0223dd5dd<BR>
2 Nodes configured, 2 expected votes<BR>
5 Resources configured.<BR>
============<BR>
<BR>
Node virtualserver03: online<BR>
resDRBD:0 (ocf::linbit:drbd) Master<BR>
resDLM:1 (ocf::pacemaker:controld) Started<BR>
resO2CB:1 (ocf::pacemaker:o2cb) Started<BR>
resFS:1 (ocf::heartbeat:Filesystem) Started<BR>
ubuntu (ocf::heartbeat:VirtualDomain) Started<BR>
Node virtualserver05: online<BR>
resDLM:0 (ocf::pacemaker:controld) Started<BR>
resDRBD:1 (ocf::linbit:drbd) Master<BR>
resO2CB:0 (ocf::pacemaker:o2cb) Started<BR>
resFS:0 (ocf::heartbeat:Filesystem) Started<BR>
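<BR>
For reference, the manual migration that works fine looks roughly like this (a sketch using the crm shell, with the resource and node names from my cib above):<BR>
<BR>
```shell
# Live-migrate the vm resource "ubuntu" to the other node -- this works
crm resource migrate ubuntu virtualserver05

# Afterwards, remove the location constraint that "migrate" leaves behind,
# otherwise the vm stays pinned to virtualserver05
crm resource unmigrate ubuntu
```
<BR>
The problem only appears when a node is rebooted or shut down: instead of a live migration, pacemaker does a full stop on one node and a fresh start on the other.<BR>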
<BR>
I hope somebody can help me.<BR>
Sorry for my English. :)<BR>
<BR>
Kind regards,<BR>
Patrik</FONT>
</P>
</BODY>
</HTML>