[Pacemaker] Problems with migration of kvm on primary/primary cluster

Andrew Beekhof andrew at beekhof.net
Thu Aug 11 01:56:30 EDT 2011


On Tue, Aug 2, 2011 at 3:18 PM, Patrik Plank
<p.plank at st-georgen-gusen.ooe.gv.at> wrote:
> Hello again!
>
> Now I have updated Pacemaker to 1.0.11 (Debian Squeeze backports), but the
> problem still exists.
> I think the problem is my filesystem.
>
> My config:
>
> node virtualserver01 \
>         attributes standby="off"
> node virtualserver02 \
>         attributes standby="off"
> primitive dlm ocf:pacemaker:controld \
>         operations $id="dlm-operations" \
>         op start interval="0" timeout="90" \
>         op stop interval="0" timeout="100" \
>         op monitor interval="10" timeout="20" start-delay="0" \
>         meta target-role="started"
> primitive drbd_r0 ocf:linbit:drbd \
>         params drbd_resource="r0" \
>         operations $id="drbd_r0-operations" \
>         op start interval="0" timeout="240" \
>         op promote interval="0" timeout="90" \
>         op demote interval="0" timeout="90" \
>         op stop interval="0" timeout="100" \
>         op monitor interval="10" timeout="20" start-delay="1min" \
>         op notify interval="0" timeout="90" \
>         meta target-role="started"
> primitive fs ocf:heartbeat:Filesystem \
>         params device="/dev/drbd0" directory="/mnt" fstype="ocfs2" \
>         operations $id="fs-operations" \
>         op start interval="0" timeout="60" \
>         op stop interval="0" timeout="60" \
>         op monitor interval="20" timeout="40" start-delay="0" \
>         op notify interval="0" timeout="60" \
>         meta target-role="started"
> primitive o2cb ocf:pacemaker:o2cb \
>         op monitor interval="120s" \
>         meta target-role="started"
> ms ms_drbd_r0 drbd_r0 \
>         meta master-max="2" clone-max="2" notify="true" interleave="true" \
>         resource-stickiness="100"
> clone dlm-clone dlm \
>         meta clone-max="2" interleave="true"
> clone fs-clone fs \
>         meta clone-max="2" ordered="true" interleave="true"
> clone o2cb-clone o2cb
> colocation col_dlm_drbd inf: dlm-clone ms_drbd_r0:Master

The above looks wrong: the dlm and o2cb clones should be running everywhere,
regardless of which node holds the DRBD master role.
s/dlm-clone/fs-clone/
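For reference, a sketch of the corrected colocation, assuming the rest of
the configuration stays as posted (the constraint name is only illustrative):

```
# Tie the filesystem, not the DLM, to the DRBD master role;
# dlm-clone (and o2cb-clone via the existing chain) can then
# run on every online node.
colocation col_fs_drbd inf: fs-clone ms_drbd_r0:Master
```

This would replace the existing col_dlm_drbd line, e.g. via `crm configure edit`.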

> colocation col_fs_o2cb inf: fs-clone o2cb-clone
> colocation col_o2cb_dlm inf: o2cb-clone dlm-clone
> order ord_drbd_dlm 0: ms_drbd_r0:promote dlm-clone

Same here: the order constraint should make the filesystem, not the DLM,
wait for the DRBD promote.
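Likewise, a sketch of the corrected order constraint under the same
assumption (name illustrative):

```
# Start the filesystem only after DRBD has been promoted;
# the DLM no longer blocks on the promote and can start anywhere.
order ord_drbd_fs 0: ms_drbd_r0:promote fs-clone:start
```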

> order ord_o2cb_after_dlm 0: dlm-clone o2cb-clone
> order ord_o2cb_fs 0: o2cb-clone fs-clone
> property $id="cib-bootstrap-options" \
>         expected-quorum-votes="2" \
>         stonith-enabled="false" \
>         dc-version="1.0.11-6e010d6b0d49a6b929d17c0114e9d2d934dc8e04" \
>         no-quorum-policy="ignore" \
>         cluster-infrastructure="openais" \
>         last-lrm-refresh="1312195244"
>
>
> ============
> Last updated: Tue Aug  2 06:54:04 2011
> Stack: openais
> Current DC: virtualserver01 - partition with quorum
> Version: 1.0.11-6e010d6b0d49a6b929d17c0114e9d2d934dc8e04
> 2 Nodes configured, 2 expected votes
> 4 Resources configured.
> ============
>
> Node virtualserver01: online
>         fs:0    (ocf::heartbeat:Filesystem) Started
>         dlm:0   (ocf::pacemaker:controld) Started
>         o2cb:0  (ocf::pacemaker:o2cb) Started
>         drbd_r0:0       (ocf::linbit:drbd) Master
> Node virtualserver02: online
>         drbd_r0:1       (ocf::linbit:drbd) Master
>         dlm:1   (ocf::pacemaker:controld) Started
>         o2cb:1  (ocf::pacemaker:o2cb) Started
>         fs:1    (ocf::heartbeat:Filesystem) Started
>
>
> When I shut down one of the nodes or pull the plugs, the surviving node
> shows in crm_mon that the filesystem is started, but I cannot access the
> mountpoint. I think this is why my KVM VMs crash.
>
> ============
> Last updated: Tue Aug  2 07:07:44 2011
> Stack: openais
> Current DC: virtualserver02 - partition WITHOUT quorum
> Version: 1.0.11-6e010d6b0d49a6b929d17c0114e9d2d934dc8e04
> 2 Nodes configured, 2 expected votes
> 4 Resources configured.
> ============
>
> Node virtualserver01: OFFLINE
> Node virtualserver02: online
>         fs:1    (ocf::heartbeat:Filesystem) Started
>         dlm:1   (ocf::pacemaker:controld) Started
>         drbd_r0:1       (ocf::linbit:drbd) Master
>         o2cb:1  (ocf::pacemaker:o2cb) Started
>
> Any ideas?
>
> best regards
>
>
> The log file from the online node:
>
> Aug  2 06:53:28 virtualserver02 Filesystem[8659]: INFO: Running start for
> /dev/drbd0 on /mnt
> Aug  2 06:53:28 virtualserver02 lrmd: [1755]: info: RA output:
> (fs:1:start:stderr) FATAL: Module scsi_hostadapter not found.
> Aug  2 06:53:28 virtualserver02 kernel: [  533.775158] dlm: Using SCTP for
> communications
> Aug  2 06:53:28 virtualserver02 kernel: [  533.782151] dlm: connecting to
> 1694607552 sctp association 1
> Aug  2 06:53:32 virtualserver02 kernel: [  537.812934] ocfs2: Mounting
> device (147,0) on (node 1711384, slot 1) with ordered data mode.
> Aug  2 06:53:32 virtualserver02 crmd: [1758]: info: process_lrm_event: LRM
> operation fs:1_start_0 (call=29, rc=0, cib-update=36, confirmed=true) ok
> Aug  2 06:53:32 virtualserver02 crmd: [1758]: info: do_lrm_rsc_op:
> Performing key=60:12:0:3116dbc3-9da7-47fe-9546-6e1ba7030970
> op=fs:1_monitor_20000 )
> Aug  2 06:53:32 virtualserver02 lrmd: [1755]: info: rsc:fs:1:30: monitor
> Aug  2 06:53:32 virtualserver02 crmd: [1758]: info: process_lrm_event: LRM
> operation fs:1_monitor_20000 (call=30, rc=0, cib-update=37, confirmed=false)
> ok
> Aug  2 06:55:13 virtualserver02 cib: [1754]: info: cib_stats: Processed 159
> operations (251.00us average, 0% utilization) in the last 10min
> Aug  2 07:04:40 virtualserver02 corosync[1725]:   [TOTEM ] A processor
> failed, forming new configuration.
> Aug  2 07:04:42 virtualserver02 kernel: [ 1207.632051] block drbd0: PingAck
> did not arrive in time.
> Aug  2 07:04:42 virtualserver02 kernel: [ 1207.632117] block drbd0: peer(
> Primary -> Unknown ) conn( Connected -> NetworkFailure ) pdsk( UpToDate ->
> DUnknown )
> Aug  2 07:04:42 virtualserver02 kernel: [ 1207.632205] block drbd0: asender
> terminated
> Aug  2 07:04:42 virtualserver02 kernel: [ 1207.632212] block drbd0: short
> read expecting header on sock: r=-512
> Aug  2 07:04:42 virtualserver02 kernel: [ 1207.632232] block drbd0: Creating
> new current UUID
> Aug  2 07:04:42 virtualserver02 kernel: [ 1207.632391] block drbd0:
> Terminating drbd0_asender
> Aug  2 07:04:42 virtualserver02 kernel: [ 1207.652246] block drbd0:
> Connection closed
> Aug  2 07:04:42 virtualserver02 kernel: [ 1207.652315] block drbd0: conn(
> NetworkFailure -> Unconnected )
> Aug  2 07:04:42 virtualserver02 kernel: [ 1207.652376] block drbd0: receiver
> terminated
> Aug  2 07:04:42 virtualserver02 kernel: [ 1207.652430] block drbd0:
> Restarting drbd0_receiver
> Aug  2 07:04:42 virtualserver02 kernel: [ 1207.652485] block drbd0: receiver
> (re)started
> Aug  2 07:04:42 virtualserver02 kernel: [ 1207.652547] block drbd0: conn(
> Unconnected -> WFConnection )
> Aug  2 07:04:44 virtualserver02 corosync[1725]:   [pcmk  ] notice:
> pcmk_peer_update: Transitional membership event on ring 384: memb=1, new=0,
> lost=1
> Aug  2 07:04:44 virtualserver02 corosync[1725]:   [pcmk  ] info:
> pcmk_peer_update: memb: virtualserver02 1711384768
> Aug  2 07:04:44 virtualserver02 corosync[1725]:   [pcmk  ] info:
> pcmk_peer_update: lost: virtualserver01 1694607552
> Aug  2 07:04:44 virtualserver02 corosync[1725]:   [pcmk  ] notice:
> pcmk_peer_update: Stable membership event on ring 384: memb=1, new=0, lost=0
> Aug  2 07:04:44 virtualserver02 kernel: [ 1209.281090] dlm: closing
> connection to node 1694607552
> Aug  2 07:04:44 virtualserver02 corosync[1725]:   [pcmk  ] info:
> pcmk_peer_update: MEMB: virtualserver02 1711384768
> Aug  2 07:04:44 virtualserver02 corosync[1725]:   [pcmk  ] info:
> ais_mark_unseen_peer_dead: Node virtualserver01 was not seen in the previous
> transition
> Aug  2 07:04:44 virtualserver02 corosync[1725]:   [pcmk  ] info:
> update_member: Node 1694607552/virtualserver01 is now: lost
> Aug  2 07:04:44 virtualserver02 corosync[1725]:   [pcmk  ] info:
> send_member_notification: Sending membership update 384 to 4 children
> Aug  2 07:04:44 virtualserver02 cib: [1754]: notice: ais_dispatch:
> Membership 384: quorum lost
> Aug  2 07:04:44 virtualserver02 cib: [1754]: info: crm_update_peer: Node
> virtualserver01: id=1694607552 state=lost (new) addr=r(0) ip(192.168.1.101)
> r(1) ip(10.0.0.101)  votes=1 born=380 seen=380
> proc=00000000000000000000000000013312
> Aug  2 07:04:44 virtualserver02 corosync[1725]:   [TOTEM ] A processor
> joined or left the membership and a new membership was formed.
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: notice: ais_dispatch:
> Membership 384: quorum lost
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: crm_update_peer: Node
> virtualserver01: id=1694607552 state=lost (new) addr=r(0) ip(192.168.1.101)
> r(1) ip(10.0.0.101)  votes=1 born=380 seen=380
> proc=00000000000000000000000000013312
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: WARN: check_dead_member: Our
> DC node (virtualserver01) left the cluster
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: do_state_transition:
> State transition S_NOT_DC -> S_ELECTION [ input=I_ELECTION
> cause=C_FSA_INTERNAL origin=check_dead_member ]
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: update_dc: Unset DC
> virtualserver01
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: do_state_transition:
> State transition S_ELECTION -> S_INTEGRATION [ input=I_ELECTION_DC
> cause=C_FSA_INTERNAL origin=do_election_check ]
> Aug  2 07:04:44 virtualserver02 corosync[1725]:   [CPG   ] chosen downlist
> from node r(0) ip(192.168.1.102) r(1) ip(10.0.0.102)
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: do_te_control:
> Registering TE UUID: f679ff7d-6b65-4176-b395-216bf6324c40
> Aug  2 07:04:44 virtualserver02 corosync[1725]:   [MAIN  ] Completed service
> synchronization, ready to provide service.
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: set_graph_functions:
> Setting custom graph functions
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: unpack_graph: Unpacked
> transition -1: 0 actions in 0 synapses
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: do_dc_takeover: Taking
> over DC status for this partition
> Aug  2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_readwrite: We
> are now in R/W mode
> Aug  2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request:
> Operation complete: op cib_master for section 'all' (origin=local/crmd/38,
> version=0.237.5): ok (rc=0)
> Aug  2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request:
> Operation complete: op cib_modify for section cib (origin=local/crmd/39,
> version=0.237.5): ok (rc=0)
> Aug  2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request:
> Operation complete: op cib_modify for section crm_config
> (origin=local/crmd/41, version=0.237.5): ok (rc=0)
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: join_make_offer: Making
> join offers based on membership 384
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: do_dc_join_offer_all:
> join-1: Waiting on 1 outstanding join acks
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: ais_dispatch: Membership
> 384: quorum still lost
> Aug  2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request:
> Operation complete: op cib_modify for section crm_config
> (origin=local/crmd/43, version=0.237.5): ok (rc=0)
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: crm_ais_dispatch:
> Setting expected votes to 2
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: config_query_callback:
> Checking for expired actions every 900000ms
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: config_query_callback:
> Sending expected-votes=2 to corosync
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: update_dc: Set DC to
> virtualserver02 (3.0.1)
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: ais_dispatch: Membership
> 384: quorum still lost
> Aug  2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request:
> Operation complete: op cib_modify for section crm_config
> (origin=local/crmd/46, version=0.237.5): ok (rc=0)
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: crm_ais_dispatch:
> Setting expected votes to 2
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: do_state_transition:
> State transition S_INTEGRATION -> S_FINALIZE_JOIN [ input=I_INTEGRATED
> cause=C_FSA_INTERNAL origin=check_join_state ]
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: do_state_transition: All
> 1 cluster nodes responded to the join offer.
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: do_dc_join_finalize:
> join-1: Syncing the CIB from virtualserver02 to the rest of the cluster
> Aug  2 07:04:44 virtualserver02 crmd: [1758]: info: te_connect_stonith:
> Attempting connection to fencing daemon...
> Aug  2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request:
> Operation complete: op cib_modify for section crm_config
> (origin=local/crmd/49, version=0.237.5): ok (rc=0)
> Aug  2 07:04:44 virtualserver02 cib: [1754]: info: cib_process_request:
> Operation complete: op cib_sync for section 'all' (origin=local/crmd/50,
> version=0.237.5): ok (rc=0)
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: te_connect_stonith:
> Connected
> Aug  2 07:04:45 virtualserver02 cib: [1754]: info: cib_process_request:
> Operation complete: op cib_modify for section nodes (origin=local/crmd/51,
> version=0.237.5): ok (rc=0)
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: do_dc_join_ack: join-1:
> Updating node state to member for virtualserver02
> Aug  2 07:04:45 virtualserver02 cib: [1754]: info: cib_process_request:
> Operation complete: op cib_delete for section
> //node_state[@uname='virtualserver02']/lrm (origin=local/crmd/52,
> version=0.237.6): ok (rc=0)
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: erase_xpath_callback:
> Deletion of "//node_state[@uname='virtualserver02']/lrm": ok (rc=0)
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: do_state_transition:
> State transition S_FINALIZE_JOIN -> S_POLICY_ENGINE [ input=I_FINALIZED
> cause=C_FSA_INTERNAL origin=check_join_state ]
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: do_state_transition: All
> 1 cluster nodes are eligible to run resources.
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: do_dc_join_final:
> Ensuring DC, quorum and node attributes are up-to-date
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: crm_update_quorum:
> Updating quorum status to false (call=56)
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: abort_transition_graph:
> do_te_invoke:185 - Triggered transition abort (complete=1) : Peer Cancelled
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: do_pe_invoke: Query 57:
> Requesting the current CIB: S_POLICY_ENGINE
> Aug  2 07:04:45 virtualserver02 attrd: [1756]: info: attrd_local_callback:
> Sending full refresh (origin=crmd)
> Aug  2 07:04:45 virtualserver02 attrd: [1756]: info: attrd_trigger_update:
> Sending flush op to all hosts for: shutdown (<null>)
> Aug  2 07:04:45 virtualserver02 cib: [1754]: info: cib_process_request:
> Operation complete: op cib_modify for section nodes (origin=local/crmd/54,
> version=0.237.7): ok (rc=0)
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: WARN: match_down_event: No
> match for shutdown action on virtualserver01
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: te_update_diff:
> Stonith/shutdown of virtualserver01 not matched
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: abort_transition_graph:
> te_update_diff:198 - Triggered transition abort (complete=1, tag=node_state,
> id=virtualserver01, magic=NA, cib=0.237.8) : Node failure
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: do_pe_invoke: Query 58:
> Requesting the current CIB: S_POLICY_ENGINE
> Aug  2 07:04:45 virtualserver02 cib: [1754]: info: log_data_element:
> cib:diff: - <cib have-quorum="1" dc-uuid="virtualserver01" admin_epoch="0"
> epoch="237" num_updates="8" />
> Aug  2 07:04:45 virtualserver02 cib: [1754]: info: log_data_element:
> cib:diff: + <cib have-quorum="0" dc-uuid="virtualserver02" admin_epoch="0"
> epoch="238" num_updates="1" />
> Aug  2 07:04:45 virtualserver02 cib: [1754]: info: cib_process_request:
> Operation complete: op cib_modify for section cib (origin=local/crmd/56,
> version=0.238.1): ok (rc=0)
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: abort_transition_graph:
> need_abort:59 - Triggered transition abort (complete=1) : Non-status change
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: need_abort: Aborting on
> change to have-quorum
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: do_pe_invoke: Query 59:
> Requesting the current CIB: S_POLICY_ENGINE
> Aug  2 07:04:45 virtualserver02 attrd: [1756]: info: attrd_trigger_update:
> Sending flush op to all hosts for: terminate (<null>)
> Aug  2 07:04:45 virtualserver02 attrd: [1756]: info: attrd_trigger_update:
> Sending flush op to all hosts for: master-drbd_r0:0 (<null>)
> Aug  2 07:04:45 virtualserver02 attrd: [1756]: info: attrd_trigger_update:
> Sending flush op to all hosts for: master-drbd_r0:1 (10000)
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: do_pe_invoke_callback:
> Invoking the PE: query=59, ref=pe_calc-dc-1312261485-15, seq=384, quorate=0
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: unpack_config: On
> loss of CCM Quorum: Ignore
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info: unpack_config: Node
> scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info:
> determine_online_status: Node virtualserver02 is online
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: unpack_rsc_op:
> Operation drbd_r0:1_monitor_0 found resource drbd_r0:1 active in master mode
> on virtualserver02
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: clone_print:
> Master/Slave Set: ms_drbd_r0
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print:
> Masters: [ virtualserver02 ]
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print:
> Stopped: [ drbd_r0:0 ]
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: clone_print:  Clone
> Set: dlm-clone
> Aug  2 07:04:45 virtualserver02 attrd: [1756]: info: attrd_trigger_update:
> Sending flush op to all hosts for: probe_complete (true)
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print:
> Started: [ virtualserver02 ]
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print:
> Stopped: [ dlm:0 ]
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: clone_print:  Clone
> Set: o2cb-clone
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print:
> Started: [ virtualserver02 ]
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print:
> Stopped: [ o2cb:0 ]
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: clone_print:  Clone
> Set: fs-clone
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print:
> Started: [ virtualserver02 ]
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: short_print:
> Stopped: [ fs:0 ]
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info: native_color:
> Resource drbd_r0:0 cannot run anywhere
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info: master_color:
> Promoting drbd_r0:1 (Master virtualserver02)
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info: master_color:
> ms_drbd_r0: Promoted 1 instances of a possible 2 to master
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info: master_color:
> Promoting drbd_r0:1 (Master virtualserver02)
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info: master_color:
> ms_drbd_r0: Promoted 1 instances of a possible 2 to master
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info: master_color:
> Promoting drbd_r0:1 (Master virtualserver02)
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info: master_color:
> ms_drbd_r0: Promoted 1 instances of a possible 2 to master
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info: native_color:
> Resource dlm:0 cannot run anywhere
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info: native_color:
> Resource o2cb:0 cannot run anywhere
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice:
> clone_rsc_colocation_rh: Cannot pair fs:0 with instance of o2cb-clone
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info: native_color:
> Resource fs:0 cannot run anywhere
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave
> resource drbd_r0:0#011(Stopped)
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave
> resource drbd_r0:1#011(Master virtualserver02)
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave
> resource dlm:0#011(Stopped)
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave
> resource dlm:1#011(Started virtualserver02)
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave
> resource o2cb:0#011(Stopped)
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave
> resource o2cb:1#011(Started virtualserver02)
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave
> resource fs:0#011(Stopped)
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: notice: LogActions: Leave
> resource fs:1#011(Started virtualserver02)
> Aug  2 07:04:45 virtualserver02 cib: [19306]: info: write_cib_contents:
> Archived previous version as /var/lib/heartbeat/crm/cib-47.raw
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: do_state_transition:
> State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS
> cause=C_IPC_MESSAGE origin=handle_response ]
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: unpack_graph: Unpacked
> transition 0: 0 actions in 0 synapses
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: do_te_invoke: Processing
> graph 0 (ref=pe_calc-dc-1312261485-15) derived from
> /var/lib/pengine/pe-input-101.bz2
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: run_graph:
> ====================================================
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: notice: run_graph: Transition
> 0 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0,
> Source=/var/lib/pengine/pe-input-101.bz2): Complete
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: te_graph_trigger:
> Transition 0 is now complete
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: notify_crmd: Transition
> 0 status: done - <null>
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: do_state_transition:
> State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS
> cause=C_FSA_INTERNAL origin=notify_crmd ]
> Aug  2 07:04:45 virtualserver02 crmd: [1758]: info: do_state_transition:
> Starting PEngine Recheck Timer
> Aug  2 07:04:45 virtualserver02 cib: [19306]: info: write_cib_contents:
> Wrote version 0.238.0 of the CIB to disk (digest:
> 3ad7e501b66a385cbb08f9897259f1f2)
> Aug  2 07:04:45 virtualserver02 pengine: [1757]: info: process_pe_message:
> Transition 0: PEngine Input stored in: /var/lib/pengine/pe-input-101.bz2
> Aug  2 07:04:45 virtualserver02 cib: [19306]: info: retrieveCib: Reading
> cluster configuration from: /var/lib/heartbeat/crm/cib.n4vOgp (digest:
> /var/lib/heartbeat/crm/cib.9Gtl5C)
> Aug  2 07:04:51 virtualserver02 cibadmin: [19316]: info: Invoked:
> /usr/sbin/cibadmin -Ql
> Aug  2 07:04:51 virtualserver02 cibadmin: [19333]: info: Invoked:
> /usr/sbin/cibadmin -Ql
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs:
> http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
>
>
