<div dir="ltr">Ah yes, sorry.<div><br></div><div><table class="" summary="Resource Clone Options" style="border-collapse:collapse;border-spacing:0px;width:770px;border:1px solid rgb(170,170,170);table-layout:fixed;word-wrap:break-word;max-width:100%;font-size:15.4px;color:rgb(51,51,51);font-family:'Open Sans','liberation sans','Myriad ','Bitstream Vera Sans','Lucida Grande','Luxi Sans',helvetica,verdana,arial,sans-serif;line-height:21px"><tbody><tr style="border-radius:0px;background-image:initial;background-repeat:initial"><td style="padding:8px;word-wrap:break-word;vertical-align:top;line-height:20px;border-top-width:1px;border-top-style:solid;border-top-color:rgb(221,221,221);background:none"><div class="" style="margin-bottom:1.8em;margin-top:0px;padding-bottom:0px;padding-top:0px;display:inline"><code class="" style="font-family:'dejavu sans mono','liberation mono','bitstream vera mono','dejavu mono',monospace;font-size:13.86px;padding:0px;color:inherit;border-radius:0px;font-weight:bold;white-space:pre-wrap;word-wrap:break-word;display:inline-block;background-color:transparent">clone-node-max</code></div></td><td style="padding:8px;word-wrap:break-word;vertical-align:top;line-height:20px;border-top-width:1px;border-top-style:solid;border-top-color:rgb(221,221,221);background:none"><div class="" style="margin-bottom:1.8em;margin-top:0px;padding-bottom:0px;padding-top:0px;display:inline">How many copies of the resource can be started on a single node; the default value is <code class="" style="font-family:'dejavu sans mono','liberation mono','bitstream vera mono','dejavu mono',monospace;font-size:13.86px;padding:0px;color:inherit;border-radius:0px;font-weight:bold;white-space:pre-wrap;word-wrap:break-word;display:inline-block;background-color:transparent">1</code>.</div></td></tr></tbody></table><br></div><div>So yes, a value of 1 here is correct.</div></div><div class="gmail_extra"><br clear="all"><div><div class="gmail_signature"><p style="font-family:verdana,sans-serif"><span style="font-weight:bold">Luke Pasc</span><span style="font-weight:bold">oe</span></p><p style="font-family:verdana,sans-serif"><img src="http://osnz.co.nz/logo_blue_80.png" width="96" height="28"><font size="1"><br><b><br>
Luke Pascoe

E luke@osnz.co.nz
P +64 (9) 296 2961
M +64 (27) 426 6649
W www.osnz.co.nz

24 Wellington St
Papakura
Auckland, 2110
New Zealand
<br><div class="gmail_quote">On 18 September 2015 at 11:36, Jason Gress <span dir="ltr"><<a href="mailto:jgress@accertify.com" target="_blank">jgress@accertify.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word;color:rgb(0,0,0);font-size:14px;font-family:Calibri,sans-serif">
<div>I can't say whether or not you are right or wrong (you may be!) but I followed the Cluster From Scratch tutorial closely, and it only had a clone-node-max=1 there. (Page 106 of the pdf, for the curious.)</div>
<div><br>
</div>
<div>Thanks,</div>
<div><br>
</div>
<div>Jason</div>
<div><br>
</div>

From: Luke Pascoe <luke@osnz.co.nz>
Reply-To: Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
Date: Thursday, September 17, 2015 at 6:29 PM
To: Cluster Labs - All topics related to open-source clustering welcomed <users@clusterlabs.org>
Subject: Re: [ClusterLabs] Pacemaker/pcs & DRBD not demoting secondary node to Slave (always Stopped)

I may be wrong, but shouldn't "clone-node-max" be 2 on the ms_drbd_vmfs resource?
<div class="gmail_extra"><br clear="all">
<div>
<div>
<p style="font-family:verdana,sans-serif"><span style="font-weight:bold">Luke Pasc</span><span style="font-weight:bold">oe</span></p>
<p style="font-family:verdana,sans-serif"><img src="http://osnz.co.nz/logo_blue_80.png" width="96" height="28"><font size="1"><br>
<b><br>
</b></font></p>
<p style="font-family:verdana,sans-serif"><font size="1"><b>E</b> <a href="mailto:luke@osnz.co.nz" target="_blank">
luke@osnz.co.nz</a><br>
<b>P</b> <a href="tel:%2B64%20%289%29%20296%202961" value="+6492962961" target="_blank">+64 (9) 296 2961</a><br>
<b>M</b> +64 (27) 426 6649<br>
<b>W</b> <a href="http://www.osnz.co.nz/" target="_blank">www.osnz.co.nz</a><br>
<br>
24 Wellington St<br>
Papakura<br>
Auckland, 2110 <br>
New Zealand</font></p>
</div>
</div>
<br>
<div class="gmail_quote">On 18 September 2015 at 11:02, Jason Gress <span dir="ltr">
<<a href="mailto:jgress@accertify.com" target="_blank">jgress@accertify.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
<div style="word-wrap:break-word;color:rgb(0,0,0);font-size:14px;font-family:Calibri,sans-serif">
<div>I have a simple DRBD + filesystem + NFS configuration that works properly when I manually start/stop DRBD, but will not start the DRBD slave resource properly on failover or recovery. I cannot ever get the Master/Slave set to say anything but 'Stopped'.
I am running CentOS 7.1 with the latest packages as of today:</div>
<div><br>
</div>
[root@fx201-1a log]# rpm -qa | grep -e pcs -e pacemaker -e drbd
pacemaker-cluster-libs-1.1.12-22.el7_1.4.x86_64
pacemaker-1.1.12-22.el7_1.4.x86_64
pcs-0.9.137-13.el7_1.4.x86_64
pacemaker-libs-1.1.12-22.el7_1.4.x86_64
drbd84-utils-8.9.3-1.1.el7.elrepo.x86_64
pacemaker-cli-1.1.12-22.el7_1.4.x86_64
kmod-drbd84-8.4.6-1.el7.elrepo.x86_64

Here is my pcs config output:
[root@fx201-1a log]# pcs config
Cluster Name: fx201-vmcl
Corosync Nodes:
 fx201-1a.ams fx201-1b.ams
Pacemaker Nodes:
 fx201-1a.ams fx201-1b.ams

Resources:
 Resource: ClusterIP (class=ocf provider=heartbeat type=IPaddr2)
  Attributes: ip=10.XX.XX.XX cidr_netmask=24
  Operations: start interval=0s timeout=20s (ClusterIP-start-timeout-20s)
              stop interval=0s timeout=20s (ClusterIP-stop-timeout-20s)
              monitor interval=15s (ClusterIP-monitor-interval-15s)
 Master: ms_drbd_vmfs
  Meta Attrs: master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
  Resource: drbd_vmfs (class=ocf provider=linbit type=drbd)
   Attributes: drbd_resource=vmfs
   Operations: start interval=0s timeout=240 (drbd_vmfs-start-timeout-240)
               promote interval=0s timeout=90 (drbd_vmfs-promote-timeout-90)
               demote interval=0s timeout=90 (drbd_vmfs-demote-timeout-90)
               stop interval=0s timeout=100 (drbd_vmfs-stop-timeout-100)
               monitor interval=30s (drbd_vmfs-monitor-interval-30s)
 Resource: vmfsFS (class=ocf provider=heartbeat type=Filesystem)
  Attributes: device=/dev/drbd0 directory=/exports/vmfs fstype=xfs
  Operations: start interval=0s timeout=60 (vmfsFS-start-timeout-60)
              stop interval=0s timeout=60 (vmfsFS-stop-timeout-60)
              monitor interval=20 timeout=40 (vmfsFS-monitor-interval-20)
 Resource: nfs-server (class=systemd type=nfs-server)
  Operations: monitor interval=60s (nfs-server-monitor-interval-60s)

Stonith Devices:
Fencing Levels:

Location Constraints:
Ordering Constraints:
  promote ms_drbd_vmfs then start vmfsFS (kind:Mandatory) (id:order-ms_drbd_vmfs-vmfsFS-mandatory)
  start vmfsFS then start nfs-server (kind:Mandatory) (id:order-vmfsFS-nfs-server-mandatory)
  start ClusterIP then start nfs-server (kind:Mandatory) (id:order-ClusterIP-nfs-server-mandatory)
Colocation Constraints:
  ms_drbd_vmfs with ClusterIP (score:INFINITY) (id:colocation-ms_drbd_vmfs-ClusterIP-INFINITY)
  vmfsFS with ms_drbd_vmfs (score:INFINITY) (with-rsc-role:Master) (id:colocation-vmfsFS-ms_drbd_vmfs-INFINITY)
  nfs-server with vmfsFS (score:INFINITY) (id:colocation-nfs-server-vmfsFS-INFINITY)

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: fx201-vmcl
 dc-version: 1.1.13-a14efad
 have-watchdog: false
 last-lrm-refresh: 1442528181
 stonith-enabled: false

And status:
[root@fx201-1a log]# pcs status --full
Cluster name: fx201-vmcl
Last updated: Thu Sep 17 17:55:56 2015
Last change: Thu Sep 17 17:18:10 2015 by root via crm_attribute on fx201-1b.ams
Stack: corosync
Current DC: fx201-1b.ams (2) (version 1.1.13-a14efad) - partition with quorum
2 nodes and 5 resources configured

Online: [ fx201-1a.ams (1) fx201-1b.ams (2) ]

Full list of resources:

 ClusterIP      (ocf::heartbeat:IPaddr2):       Started fx201-1a.ams
 Master/Slave Set: ms_drbd_vmfs [drbd_vmfs]
     drbd_vmfs  (ocf::linbit:drbd):     Master fx201-1a.ams
     drbd_vmfs  (ocf::linbit:drbd):     Stopped
     Masters: [ fx201-1a.ams ]
     Stopped: [ fx201-1b.ams ]
 vmfsFS         (ocf::heartbeat:Filesystem):    Started fx201-1a.ams
 nfs-server     (systemd:nfs-server):   Started fx201-1a.ams

PCSD Status:
  fx201-1a.ams: Online
  fx201-1b.ams: Online

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled

If I do a failover, after manually confirming that the DRBD data is fully synchronized, it does work, but the secondary side never reconnects afterwards; to get the resource synchronized again I have to correct it by hand, ad infinitum. I have tried standby/unstandby, pcs resource debug-start (with undesirable results), and so on.
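By "correct it by hand" I mean a sequence roughly like this; it's a sketch from memory rather than an exact transcript (the DRBD resource is named vmfs, per the config above):

    # On the node whose DRBD instance is down, bring it back manually:
    drbdadm up vmfs          # attach the backing device and start the connection
    drbdadm secondary vmfs   # make sure this side stays Secondary

    # Then clear the failure history so Pacemaker will manage it again:
    pcs resource cleanup drbd_vmfs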

Here are some relevant log messages from pacemaker.log:
Sep 17 17:48:10 [13954] fx201-1b.ams.accertify.net    crmd:   info: crm_timer_popped:   PEngine Recheck Timer (I_PE_CALC) just popped (900000ms)
Sep 17 17:48:10 [13954] fx201-1b.ams.accertify.net    crmd: notice: do_state_transition:   State transition S_IDLE -> S_POLICY_ENGINE [ input=I_PE_CALC cause=C_TIMER_POPPED origin=crm_timer_popped ]
Sep 17 17:48:10 [13954] fx201-1b.ams.accertify.net    crmd:   info: do_state_transition:   Progressed to state S_POLICY_ENGINE after C_TIMER_POPPED
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: process_pe_message:   Input has not changed since last time, not saving to disk
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: determine_online_status:   Node fx201-1b.ams is online
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: determine_online_status:   Node fx201-1a.ams is online
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: determine_op_status:   Operation monitor found resource drbd_vmfs:0 active in master mode on fx201-1b.ams
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: determine_op_status:   Operation monitor found resource drbd_vmfs:0 active on fx201-1a.ams
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: native_print:   ClusterIP   (ocf::heartbeat:IPaddr2):   Started fx201-1a.ams
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: clone_print:   Master/Slave Set: ms_drbd_vmfs [drbd_vmfs]
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: short_print:       Masters: [ fx201-1a.ams ]
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: short_print:       Stopped: [ fx201-1b.ams ]
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: native_print:   vmfsFS   (ocf::heartbeat:Filesystem):   Started fx201-1a.ams
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: native_print:   nfs-server   (systemd:nfs-server):   Started fx201-1a.ams
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: native_color:   Resource drbd_vmfs:1 cannot run anywhere
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: master_color:   Promoting drbd_vmfs:0 (Master fx201-1a.ams)
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: master_color:   ms_drbd_vmfs: Promoted 1 instances of a possible 1 to master
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: LogActions:   Leave ClusterIP   (Started fx201-1a.ams)
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: LogActions:   Leave drbd_vmfs:0   (Master fx201-1a.ams)
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: LogActions:   Leave drbd_vmfs:1   (Stopped)
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: LogActions:   Leave vmfsFS   (Started fx201-1a.ams)
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine:   info: LogActions:   Leave nfs-server   (Started fx201-1a.ams)
Sep 17 17:48:10 [5662] fx201-1b.ams.accertify.net pengine: notice: process_pe_message:   Calculated Transition 16: /var/lib/pacemaker/pengine/pe-input-61.bz2
Sep 17 17:48:10 [13954] fx201-1b.ams.accertify.net    crmd:   info: do_state_transition:   State transition S_POLICY_ENGINE -> S_TRANSITION_ENGINE [ input=I_PE_SUCCESS cause=C_IPC_MESSAGE origin=handle_response ]
Sep 17 17:48:10 [13954] fx201-1b.ams.accertify.net    crmd:   info: do_te_invoke:   Processing graph 16 (ref=pe_calc-dc-1442530090-97) derived from /var/lib/pacemaker/pengine/pe-input-61.bz2
Sep 17 17:48:10 [13954] fx201-1b.ams.accertify.net    crmd: notice: run_graph:   Transition 16 (Complete=0, Pending=0, Fired=0, Skipped=0, Incomplete=0, Source=/var/lib/pacemaker/pengine/pe-input-61.bz2): Complete
Sep 17 17:48:10 [13954] fx201-1b.ams.accertify.net    crmd:   info: do_log:   FSA: Input I_TE_SUCCESS from notify_crmd() received in state S_TRANSITION_ENGINE
Sep 17 17:48:10 [13954] fx201-1b.ams.accertify.net    crmd: notice: do_state_transition:   State transition S_TRANSITION_ENGINE -> S_IDLE [ input=I_TE_SUCCESS cause=C_FSA_INTERNAL origin=notify_crmd ]
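The "Resource drbd_vmfs:1 cannot run anywhere" line above seems to be the key one. If the allocation scores behind that decision would help, I can post the output of crm_simulate as well; as I understand it (flags from memory, so double-check):

    # Show the live cluster state plus the policy engine's allocation
    # scores for every resource/node combination
    crm_simulate -L -s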

Thank you all for your help,

Jason
<pre>"This message and any attachments may contain confidential information. If you
have received this message in error, any use or distribution is prohibited.
Please notify us by reply e-mail if you have mistakenly received this message,
and immediately and permanently delete it and any attachments. Thank you."</pre>
</div><div class="HOEnZb"><div class="h5">
<pre>
"This message and any attachments may contain confidential information. If you
have received this message in error, any use or distribution is prohibited.
Please notify us by reply e-mail if you have mistakenly received this message,
and immediately and permanently delete it and any attachments. Thank you."</pre>
_______________________________________________
Users mailing list: Users@clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org