[Pacemaker] Dual-primary DRBD problem: Promoted 0 instances of a possible 2 to master
Matt Anderson
tuxfan at hotmail.com
Mon Aug 15 19:16:10 UTC 2011
There was quite a lot of it (all pe-input* files). I made a single archive
containing everything under /var/lib/pengine/ that was created during the
same hour as my previous logs: http://minus.com/dcHdfKuZD.gz
Since the test in the logs I have changed my corosync config a bit by
removing the redundant ring settings: the second ring was always marked
as faulty, and recent posts on the list indicate that RRP isn't quite
ready for production yet. That didn't solve my problem, though.
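
For reference, the totem section now carries just the single ring,
roughly like this (the addresses here are placeholders, not my real ones):

  totem {
          version: 2
          interface {
                  ringnumber: 0
                  bindnetaddr: 192.168.1.0
                  mcastaddr: 226.94.1.1
                  mcastport: 5405
          }
  }
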
However, I noticed that if I stop the virtual domain resources in crm
before starting the ms_drbd_* resources, both DRBD devices are
correctly promoted to master on both nodes, and after that I can
start the virtual domains in crm with no problems. So is this some
kind of timing issue, or do I have something wrong in my pacemaker
config?
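
For reference, the ordering between the DRBD devices and the domains is
meant to follow the usual pattern, i.e. something like this (simplified):

  colocation www-on-drbd inf: www-server ms_drbd_www:Master
  order www-after-drbd inf: ms_drbd_www:promote www-server:start
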
Also, if I have all resources running on s2 and put s2 on standby, I
eventually get:
Node s2: standby
Online: [ s1 s3 ]

 Master/Slave Set: ms_drbd_www
     drbd_www:1  (ocf::linbit:drbd):  Slave s2 (unmanaged) FAILED
     Masters: [ s1 ]
 Master/Slave Set: ms_drbd_www2
     drbd_www2:1 (ocf::linbit:drbd):  Slave s2 (unmanaged) FAILED
     Masters: [ s1 ]
 www-server   (ocf::heartbeat:VirtualDomain):  Started s2 (unmanaged) FAILED
 www2-server  (ocf::heartbeat:VirtualDomain):  Started s2 (unmanaged) FAILED

Failed actions:
    drbd_www:1_demote_0 (node=s2, call=1152, rc=-2, status=Timed Out): unknown exec error
    drbd_www:1_stop_0 (node=s2, call=1157, rc=-2, status=Timed Out): unknown exec error
    drbd_www2:1_demote_0 (node=s2, call=1159, rc=-2, status=Timed Out): unknown exec error
    drbd_www2:1_stop_0 (node=s2, call=1162, rc=-2, status=Timed Out): unknown exec error
    www-server_stop_0 (node=s2, call=1147, rc=1, status=complete): unknown error
    www2-server_stop_0 (node=s2, call=1148, rc=1, status=complete): unknown error
And DRBD is still running as primary on both nodes, and both virtual
servers are also still running on s2. (Presumably the demote and stop
operations time out because the VirtualDomain stop failed first, so the
DRBD devices are still held open by the running domains.) The "unknown"
errors on the DC seem to be:
Aug 15 21:38:13 s3-1 pengine: [20809]: WARN: unpack_rsc_op: Processing failed op drbd_www:1_stop_0 on s2: unknown exec error (-2)
Aug 15 21:38:13 s3-1 pengine: [20809]: WARN: unpack_rsc_op: Processing failed op www-server_stop_0 on s2: unknown error (1)
Just stopping the virtual domain resource also gives "unknown error",
even though the actual virtual server really is stopped.
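(This can be verified directly on s2 with libvirt, e.g. "virsh list --all".)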
If I stop the virtual domain resources and clean up their errors, I can
put s2 on standby and the DRBD devices are stopped on s2. If I then
start the virtual domain resources, they are correctly started on s1.
So again this looks like some timing problem between the DRBD and
VirtualDomain RAs, or just an error in my config?
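
In other words, the sequence that works is roughly:

  crm resource stop www-server
  crm resource stop www2-server
  crm resource cleanup www-server
  crm resource cleanup www2-server
  crm node standby s2
  # ...wait until DRBD has stopped on s2...
  crm resource start www-server
  crm resource start www2-server
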
> We'd need access to the files in /var/lib/pengine/ from the DC too.
>
> On Tue, Aug 2, 2011 at 7:08 PM, Matt Anderson <tuxfan at hotmail.com> wrote:
> >
> > Hi!
> >
> > Sorry for the repost, but the links in my previous message expired.
> > Now these new ones shouldn't do that. I also added the DC's log at the end
> > of this message.
> >
> > I've been trying to make a simple HA cluster with 3 servers (the 3rd server
> > is there only to maintain quorum if one node fails). The idea is to run two
> > virtual domains over dedicated DRBD devices in dual-primary mode (so that
> > live migration would be possible).
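> >
> > For a dual-primary setup like this, the master/slave sets need
> > master-max="2" and DRBD needs allow-two-primaries, so the relevant
> > definitions look roughly like:
> >
> >   ms ms_drbd_www drbd_www \
> >       meta master-max="2" master-node-max="1" clone-max="2" \
> >       clone-node-max="1" notify="true" interleave="true"
> >
> > and in the DRBD resource config:
> >
> >   net {
> >           allow-two-primaries;
> >   }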
> >
> > Things worked well for a while, but somewhere during my tests something
> > went wrong, and now the DRBD devices don't get promoted to primary by
> > pacemaker; pacemaker just keeps starting and stopping the devices in a
> > loop. If I start DRBD from the init script, both devices are started and
> > automatically synced. At first I had this problem with only one device,
> > but now it's the same with both devices under pacemaker.
> >
> > Pacemaker and DRBD write a lot of logs [1] [2] [3] (these were captured
> > when I tried to start ms_drbd_www2), but I don't see a reason why
> > pacemaker doesn't promote any masters.
> >
> > My guess is that this has something to do with my fencing rules in DRBD
> > [4], or just with my pacemaker config [5]. I used to have STONITH
> > enabled, but since my STONITH devices share the power supply with the
> > servers, I've removed those settings from my pacemaker config.
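> >
> > (For context, the standard DRBD fencing setup for Pacemaker is roughly:
> >
> >   disk {
> >           fencing resource-only;
> >   }
> >   handlers {
> >           fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
> >           after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
> >   }
> >
> > As far as I understand, crm-fence-peer.sh adds a temporary location
> > constraint (id drbd-fence-by-handler-*) that blocks promotion until the
> > unfence handler removes it again, so a stale constraint left over from
> > an earlier test could also explain the "cannot run anywhere" messages.)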
> >
> > I'm running Debian squeeze on amd64 with pacemaker (1.0.11-1~bpo60+1) and
> > corosync (1.3.0-3~bpo60+1) from backports.
> >
> > Any ideas what's wrong and how to fix it?
> >
> >
> > [1] http://paste.debian.net/124836/ (DRBD log from one node)
> >
> > [2] http://paste.debian.net/124838/ (pacemaker log from the same node as above)
> >
> > [3] http://paste.debian.net/124839/ (pacemaker log from DC)
> >
> > [4] http://paste.debian.net/124845/ (DRBD common config)
> >
> > [5] http://paste.debian.net/124846/ (pacemaker config)
> >
> > Pacemaker log from DC [3]:
> >
> > Jul 28 22:28:01 s3-1 cibadmin: [10292]: info: Invoked: cibadmin -Ql -o resources
> > Jul 28 22:28:01 s3-1 cibadmin: [10295]: info: Invoked: cibadmin -p -R -o resources
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: - <cib admin_epoch="0" epoch="439" num_updates="10" >
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: - <configuration >
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: - <resources >
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: - <master id="ms_drbd_www2" >
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: - <meta_attributes id="ms_drbd_www2-meta_attributes" >
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: - <nvpair value="Stopped" id="ms_drbd_www2-meta_attributes-target-role" />
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: - </meta_attributes>
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: - </master>
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: - </resources>
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: - </configuration>
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: - </cib>
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: + <cib admin_epoch="0" epoch="440" num_updates="1" >
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: + <configuration >
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: + <resources >
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: + <master id="ms_drbd_www2" >
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: + <meta_attributes id="ms_drbd_www2-meta_attributes" >
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: + <nvpair value="Started" id="ms_drbd_www2-meta_attributes-target-role" />
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: + </meta_attributes>
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: + </master>
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: + </resources>
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: + </configuration>
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: log_data_element: cib:diff: + </cib>
> > Jul 28 22:28:01 s3-1 cib: [21918]: info: cib_process_request: Operation complete: op cib_replace for section resources (origin=local/cibadmin/2, version=0.440.1): ok (rc=0)
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: abort_transition_graph: need_abort:59 - Triggered transition abort (complete=1) : Non-status change
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: need_abort: Aborting on change to admin_epoch
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: do_state_transition: State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=abort_transition_graph ]
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: do_state_transition: All 3 cluster nodes are eligible to run resources.
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: do_pe_invoke: Query 1845: Requesting the current CIB: S_POLICY_ENGINE
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: do_pe_invoke_callback: Invoking the PE: query=1845, ref=pe_calc-dc-1311881281-3699, seq=190040, quorate=1
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: determine_online_status: Node s3 is online
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: determine_online_status: Node s1 is online
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: determine_online_status: Node s2 is online
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: clone_print: Master/Slave Set: ms_drbd_www
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: short_print: Stopped: [ drbd_www:0 drbd_www:1 ]
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: clone_print: Master/Slave Set: ms_drbd_www2
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: short_print: Stopped: [ drbd_www2:0 drbd_www2:1 ]
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: native_print: www-server#011(ocf::heartbeat:VirtualDomain):#011Stopped
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: native_print: www2-server#011(ocf::heartbeat:VirtualDomain):#011Stopped
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: native_print: www2-mailto#011(ocf::heartbeat:MailTo):#011Stopped
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: native_print: www-mailto#011(ocf::heartbeat:MailTo):#011Stopped
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: native_color: Resource drbd_www:0 cannot run anywhere
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: native_color: Resource drbd_www:1 cannot run anywhere
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: master_color: ms_drbd_www: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: master_color: ms_drbd_www2: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: master_color: ms_drbd_www: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: rsc_merge_weights: www-server: Rolling back scores from www-mailto
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: native_color: Resource www-server cannot run anywhere
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: master_color: ms_drbd_www2: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: rsc_merge_weights: www2-server: Rolling back scores from www2-mailto
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: native_color: Resource www2-server cannot run anywhere
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: native_color: Resource www2-mailto cannot run anywhere
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: native_color: Resource www-mailto cannot run anywhere
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:0 on s2
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:1 on s1
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:0 on s2
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:1 on s1
> > Jul 28 22:28:01 s3-1 pengine: [21921]: ERROR: clone_rsc_order_rh_non_clone: Unknown action: www-server_demote_0
> > Jul 28 22:28:01 s3-1 pengine: [21921]: ERROR: clone_rsc_order_rh_non_clone: Unknown action: www2-server_demote_0
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Leave resource drbd_www:0#011(Stopped)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Leave resource drbd_www:1#011(Stopped)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Start drbd_www2:0#011(s2)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Start drbd_www2:1#011(s1)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Leave resource www-server#011(Stopped)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Leave resource www2-server#011(Stopped)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Leave resource www2-mailto#011(Stopped)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Leave resource www-mailto#011(Stopped)
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: unpack_graph: Unpacked transition 1548: 12 actions in 12 synapses
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: do_te_invoke: Processing graph 1548 (ref=pe_calc-dc-1311881281-3699) derived from /var/lib/pengine/pe-input-9218.bz2
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 36 fired and confirmed
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 37 fired and confirmed
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 34 fired and confirmed
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 30: start drbd_www2:0_start_0 on s2
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:0_start_0 (30) confirmed on s2 (rc=0)
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 32: start drbd_www2:1_start_0 on s1
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: abort_transition_graph: te_update_diff:150 - Triggered transition abort (complete=0, tag=nvpair, id=status-s1-master-drbd_www2:1, magic=NA, cib=0.440.3) : Transient attribute: update
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: update_abort_priority: Abort action done superceeded by restart
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:1_start_0 (32) confirmed on s1 (rc=0)
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 38 fired and confirmed
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 80: notify drbd_www2:0_post_notify_start_0 on s2
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:0_post_notify_start_0 (80) confirmed on s2 (rc=0)
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 81: notify drbd_www2:1_post_notify_start_0 on s1
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:1_post_notify_start_0 (81) confirmed on s1 (rc=0)
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 39 fired and confirmed
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: run_graph: ====================================================
> > Jul 28 22:28:01 s3-1 crmd: [21922]: notice: run_graph: Transition 1548 (Complete=10, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pengine/pe-input-9218.bz2): Stopped
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_graph_trigger: Transition 1548 is now complete
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: do_state_transition: All 3 cluster nodes are eligible to run resources.
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: do_pe_invoke: Query 1846: Requesting the current CIB: S_POLICY_ENGINE
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: do_pe_invoke_callback: Invoking the PE: query=1846, ref=pe_calc-dc-1311881281-3704, seq=190040, quorate=1
> > Jul 28 22:28:01 s3-1 cib: [10296]: info: write_cib_contents: Archived previous version as /var/lib/heartbeat/crm/cib-77.raw
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: process_pe_message: Transition 1548: PEngine Input stored in: /var/lib/pengine/pe-input-9218.bz2
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: determine_online_status: Node s3 is online
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: determine_online_status: Node s1 is online
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: determine_online_status: Node s2 is online
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: clone_print: Master/Slave Set: ms_drbd_www
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: short_print: Stopped: [ drbd_www:0 drbd_www:1 ]
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: clone_print: Master/Slave Set: ms_drbd_www2
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: short_print: Slaves: [ s2 s1 ]
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: native_print: www-server#011(ocf::heartbeat:VirtualDomain):#011Stopped
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: native_print: www2-server#011(ocf::heartbeat:VirtualDomain):#011Stopped
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: native_print: www2-mailto#011(ocf::heartbeat:MailTo):#011Stopped
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: native_print: www-mailto#011(ocf::heartbeat:MailTo):#011Stopped
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: native_color: Resource drbd_www:0 cannot run anywhere
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: native_color: Resource drbd_www:1 cannot run anywhere
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: master_color: ms_drbd_www: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: master_color: ms_drbd_www2: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: master_color: ms_drbd_www: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: rsc_merge_weights: www-server: Rolling back scores from www-mailto
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: native_color: Resource www-server cannot run anywhere
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: master_color: ms_drbd_www2: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: rsc_merge_weights: www2-server: Rolling back scores from www2-mailto
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: native_color: Resource www2-server cannot run anywhere
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: native_color: Resource www2-mailto cannot run anywhere
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: native_color: Resource www-mailto cannot run anywhere
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:0 on s1
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:1 on s2
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:0 on s1
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:1 on s2
> > Jul 28 22:28:01 s3-1 pengine: [21921]: ERROR: clone_rsc_order_rh_non_clone: Unknown action: www-server_demote_0
> > Jul 28 22:28:01 s3-1 pengine: [21921]: ERROR: clone_rsc_order_rh_non_clone: Unknown action: www2-server_demote_0
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Leave resource drbd_www:0#011(Stopped)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Leave resource drbd_www:1#011(Stopped)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Move resource drbd_www2:0#011(Slave s2 -> s1)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Move resource drbd_www2:1#011(Slave s1 -> s2)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Leave resource www-server#011(Stopped)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Leave resource www2-server#011(Stopped)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Leave resource www2-mailto#011(Stopped)
> > Jul 28 22:28:01 s3-1 pengine: [21921]: notice: LogActions: Leave resource www-mailto#011(Stopped)
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: unpack_graph: Unpacked transition 1549: 23 actions in 23 synapses
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: do_te_invoke: Processing graph 1549 (ref=pe_calc-dc-1311881281-3704) derived from /var/lib/pengine/pe-input-9219.bz2
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 82: notify drbd_www2:0_pre_notify_stop_0 on s2
> > Jul 28 22:28:01 s3-1 pengine: [21921]: info: process_pe_message: Transition 1549: PEngine Input stored in: /var/lib/pengine/pe-input-9219.bz2
> > Jul 28 22:28:01 s3-1 cib: [10296]: info: write_cib_contents: Wrote version 0.440.0 of the CIB to disk (digest: 3fa86d20299acf9247c14b5760f9b9c3)
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:0_pre_notify_stop_0 (82) confirmed on s2 (rc=0)
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 83: notify drbd_www2:1_pre_notify_stop_0 on s1
> > Jul 28 22:28:01 s3-1 cib: [10296]: info: retrieveCib: Reading cluster configuration from: /var/lib/heartbeat/crm/cib.sLA4uT (digest: /var/lib/heartbeat/crm/cib.tXdeLK)
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:1_pre_notify_stop_0 (83) confirmed on s1 (rc=0)
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
> > Jul 28 22:28:01 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 31: stop drbd_www2:0_stop_0 on s2
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:0_stop_0 (31) confirmed on s2 (rc=0)
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 35: stop drbd_www2:1_stop_0 on s1
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=transient_attributes, id=s1, magic=NA, cib=0.440.10) : Transient attribute: removal
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: update_abort_priority: Abort action done superceeded by restart
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:1_stop_0 (35) confirmed on s1 (rc=0)
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 49 fired and confirmed
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: run_graph: ====================================================
> > Jul 28 22:28:02 s3-1 crmd: [21922]: notice: run_graph: Transition 1549 (Complete=10, Pending=0, Fired=0, Skipped=8, Incomplete=5, Source=/var/lib/pengine/pe-input-9219.bz2): Stopped
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_graph_trigger: Transition 1549 is now complete
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: do_state_transition: All 3 cluster nodes are eligible to run resources.
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: do_pe_invoke: Query 1847: Requesting the current CIB: S_POLICY_ENGINE
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: do_pe_invoke_callback: Invoking the PE: query=1847, ref=pe_calc-dc-1311881282-3709, seq=190040, quorate=1
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: determine_online_status: Node s3 is online
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: determine_online_status: Node s1 is online
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: determine_online_status: Node s2 is online
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: clone_print: Master/Slave Set: ms_drbd_www
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: short_print: Stopped: [ drbd_www:0 drbd_www:1 ]
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: clone_print: Master/Slave Set: ms_drbd_www2
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: short_print: Stopped: [ drbd_www2:0 drbd_www2:1 ]
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: native_print: www-server#011(ocf::heartbeat:VirtualDomain):#011Stopped
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: native_print: www2-server#011(ocf::heartbeat:VirtualDomain):#011Stopped
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: native_print: www2-mailto#011(ocf::heartbeat:MailTo):#011Stopped
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: native_print: www-mailto#011(ocf::heartbeat:MailTo):#011Stopped
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: native_color: Resource drbd_www:0 cannot run anywhere
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: native_color: Resource drbd_www:1 cannot run anywhere
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: master_color: ms_drbd_www: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: master_color: ms_drbd_www2: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: master_color: ms_drbd_www: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: rsc_merge_weights: www-server: Rolling back scores from www-mailto
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: native_color: Resource www-server cannot run anywhere
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: master_color: ms_drbd_www2: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: rsc_merge_weights: www2-server: Rolling back scores from www2-mailto
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: native_color: Resource www2-server cannot run anywhere
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: native_color: Resource www2-mailto cannot run anywhere
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: native_color: Resource www-mailto cannot run anywhere
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:0 on s2
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:1 on s1
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:0 on s2
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:1 on s1
> > Jul 28 22:28:02 s3-1 pengine: [21921]: ERROR: clone_rsc_order_rh_non_clone: Unknown action: www-server_demote_0
> > Jul 28 22:28:02 s3-1 pengine: [21921]: ERROR: clone_rsc_order_rh_non_clone: Unknown action: www2-server_demote_0
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Leave resource drbd_www:0#011(Stopped)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Leave resource drbd_www:1#011(Stopped)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Start drbd_www2:0#011(s2)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Start drbd_www2:1#011(s1)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Leave resource www-server#011(Stopped)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Leave resource www2-server#011(Stopped)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Leave resource www2-mailto#011(Stopped)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Leave resource www-mailto#011(Stopped)
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: unpack_graph: Unpacked transition 1550: 12 actions in 12 synapses
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: do_te_invoke: Processing graph 1550 (ref=pe_calc-dc-1311881282-3709) derived from /var/lib/pengine/pe-input-9220.bz2
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 36 fired and confirmed
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 37 fired and confirmed
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 34 fired and confirmed
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 30: start drbd_www2:0_start_0 on s2
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: process_pe_message: Transition 1550: PEngine Input stored in: /var/lib/pengine/pe-input-9220.bz2
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:0_start_0 (30) confirmed on s2 (rc=0)
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 32: start drbd_www2:1_start_0 on s1
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: abort_transition_graph: te_update_diff:150 - Triggered transition abort (complete=0, tag=nvpair, id=status-s1-master-drbd_www2:1, magic=NA, cib=0.440.13) : Transient attribute: update
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: update_abort_priority: Abort action done superceeded by restart
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:1_start_0 (32) confirmed on s1 (rc=0)
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 35 fired and confirmed
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 38 fired and confirmed
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 80: notify drbd_www2:0_post_notify_start_0 on s2
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:0_post_notify_start_0 (80) confirmed on s2 (rc=0)
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 81: notify drbd_www2:1_post_notify_start_0 on s1
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:1_post_notify_start_0 (81) confirmed on s1 (rc=0)
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 39 fired and confirmed
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: run_graph: ====================================================
> > Jul 28 22:28:02 s3-1 crmd: [21922]: notice: run_graph: Transition 1550 (Complete=10, Pending=0, Fired=0, Skipped=2, Incomplete=0, Source=/var/lib/pengine/pe-input-9220.bz2): Stopped
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_graph_trigger: Transition 1550 is now complete
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: do_state_transition: State transition S_TRANSITION_ENGINE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_FSA_INTERNAL origin=notify_crmd ]
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: do_state_transition: All 3 cluster nodes are eligible to run resources.
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: do_pe_invoke: Query 1848: Requesting the current CIB: S_POLICY_ENGINE
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: do_pe_invoke_callback: Invoking the PE: query=1848, ref=pe_calc-dc-1311881282-3714, seq=190040, quorate=1
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: unpack_config: Node scores: 'red' = -INFINITY, 'yellow' = 0, 'green' = 0
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: determine_online_status: Node s3 is online
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: determine_online_status: Node s1 is online
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: determine_online_status: Node s2 is online
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: clone_print: Master/Slave Set: ms_drbd_www
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: short_print: Stopped: [ drbd_www:0 drbd_www:1 ]
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: clone_print: Master/Slave Set: ms_drbd_www2
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: short_print: Slaves: [ s2 s1 ]
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: native_print: www-server#011(ocf::heartbeat:VirtualDomain):#011Stopped
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: native_print: www2-server#011(ocf::heartbeat:VirtualDomain):#011Stopped
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: native_print: www2-mailto#011(ocf::heartbeat:MailTo):#011Stopped
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: native_print: www-mailto#011(ocf::heartbeat:MailTo):#011Stopped
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: native_color: Resource drbd_www:0 cannot run anywhere
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: native_color: Resource drbd_www:1 cannot run anywhere
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: master_color: ms_drbd_www: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: master_color: ms_drbd_www2: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: master_color: ms_drbd_www: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: rsc_merge_weights: www-server: Rolling back scores from www-mailto
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: native_color: Resource www-server cannot run anywhere
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: master_color: ms_drbd_www2: Promoted 0 instances of a possible 2 to master
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: rsc_merge_weights: www2-server: Rolling back scores from www2-mailto
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: native_color: Resource www2-server cannot run anywhere
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: native_color: Resource www2-mailto cannot run anywhere
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: native_color: Resource www-mailto cannot run anywhere
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:0 on s1
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:1 on s2
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:0 on s1
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: RecurringOp: Start recurring monitor (15s) for drbd_www2:1 on s2
> > Jul 28 22:28:02 s3-1 pengine: [21921]: ERROR: clone_rsc_order_rh_non_clone: Unknown action: www-server_demote_0
> > Jul 28 22:28:02 s3-1 pengine: [21921]: ERROR: clone_rsc_order_rh_non_clone: Unknown action: www2-server_demote_0
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Leave resource drbd_www:0#011(Stopped)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Leave resource drbd_www:1#011(Stopped)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Move resource drbd_www2:0#011(Slave s2 -> s1)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Move resource drbd_www2:1#011(Slave s1 -> s2)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Leave resource www-server#011(Stopped)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Leave resource www2-server#011(Stopped)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Leave resource www2-mailto#011(Stopped)
> > Jul 28 22:28:02 s3-1 pengine: [21921]: notice: LogActions: Leave resource www-mailto#011(Stopped)
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: do_state_transition: State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: unpack_graph: Unpacked transition 1551: 23 actions in 23 synapses
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: do_te_invoke: Processing graph 1551 (ref=pe_calc-dc-1311881282-3714) derived from /var/lib/pengine/pe-input-9221.bz2
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 46 fired and confirmed
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 82: notify drbd_www2:0_pre_notify_stop_0 on s2
> > Jul 28 22:28:02 s3-1 pengine: [21921]: info: process_pe_message: Transition 1551: PEngine Input stored in: /var/lib/pengine/pe-input-9221.bz2
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:0_pre_notify_stop_0 (82) confirmed on s2 (rc=0)
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 83: notify drbd_www2:1_pre_notify_stop_0 on s1
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:1_pre_notify_stop_0 (83) confirmed on s1 (rc=0)
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 47 fired and confirmed
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 44 fired and confirmed
> > Jul 28 22:28:02 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 31: stop drbd_www2:0_stop_0 on s2
> > Jul 28 22:28:03 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:0_stop_0 (31) confirmed on s2 (rc=0)
> > Jul 28 22:28:03 s3-1 crmd: [21922]: info: te_rsc_command: Initiating action 35: stop drbd_www2:1_stop_0 on s1
> > Jul 28 22:28:03 s3-1 crmd: [21922]: info: abort_transition_graph: te_update_diff:164 - Triggered transition abort (complete=0, tag=transient_attributes, id=s1, magic=NA, cib=0.440.20) : Transient attribute: removal
> > Jul 28 22:28:03 s3-1 crmd: [21922]: info: update_abort_priority: Abort priority upgraded from 0 to 1000000
> > Jul 28 22:28:03 s3-1 crmd: [21922]: info: update_abort_priority: Abort action done superceeded by restart
> > Jul 28 22:28:03 s3-1 crmd: [21922]: info: match_graph_event: Action drbd_www2:1_stop_0 (35) confirmed on s1 (rc=0)
> > Jul 28 22:28:03 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 45 fired and confirmed
> > Jul 28 22:28:03 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 48 fired and confirmed
> > Jul 28 22:28:03 s3-1 crmd: [21922]: info: te_pseudo_action: Pseudo action 49 fired and confirmed
> > Jul 28 22:28:03 s3-1 crmd: [21922]: info: run_graph: ====================================================
> >
> >