<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=utf-8">
</head>
<body text="#000000" bgcolor="#FFFFFF">
<div class="moz-cite-prefix">On 07/24/2017 08:27 PM, Prasad,
Shashank wrote:<br>
</div>
<blockquote type="cite"
cite="mid:8A8F98BF0CDFD5438985D4397358E6C7ECC85F@idc.vanu.com">
<div class="WordSection1">
<p>My understanding is that SBD will need shared storage between clustered nodes.</p>
<p>And that SBD will need at least 3 nodes in a cluster if used without shared storage.</p>
</div>
</blockquote>
<br>
Haven't tried it, to be honest, but the reason for 3 nodes is that without a<br>
shared disk you need a real quorum source and not something 'faked'<br>
as with the two-node feature in corosync.<br>
But I don't see anything speaking against getting proper quorum via<br>
qdevice instead of a third full cluster-node.<br>
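<p>For reference, a minimal sketch of adding a qdevice arbiter to an existing
two-node cluster with pcs (the qnetd host name is a placeholder and the exact
pcs syntax may vary by version):</p>
<pre># on a third machine that only arbitrates quorum (runs no resources)
yum install pcs corosync-qnetd
pcs qdevice setup model net --enable --start

# on the cluster nodes (the qnetd host may need to be authenticated
# to pcs first, e.g. with 'pcs cluster auth')
yum install corosync-qdevice
pcs quorum device add model net host=qnetd-host.example.com algorithm=ffsplit

# verify
pcs quorum status</pre>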
<br>
<blockquote type="cite"
cite="mid:8A8F98BF0CDFD5438985D4397358E6C7ECC85F@idc.vanu.com">
<div class="WordSection1">
<p>Therefore, for systems which do NOT use shared storage between 1+1 HA clustered nodes, SBD may NOT be an option.</p>
<p>Correct me if I am wrong.</p>
<p>For cluster systems using the likes of iDRAC/IMM2 fencing agents, which have redundant but shared power supply units with the nodes, the normal fencing mechanisms should work for all resiliency scenarios, except when IMM2/iDRAC is NOT reachable for whatever reason. To bail out of those situations in the absence of SBD, I believe using user-defined failover hooks (via scripts) into Pacemaker Alerts, with sudo permissions for 'hacluster', should help.</p>
</div>
</blockquote>
<br>
Not seeing your fencing device and just assuming after some time that<br>
the corresponding node will probably be down is quite risky<br>
in my opinion.<br>
But why not assure that it is down using a watchdog?<br>
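<p>As a rough sketch of what watchdog-based SBD on top of such a quorum source
looks like (paths and timeouts are examples only; the sysconfig location and
pcs syntax depend on the distribution and pcs version):</p>
<pre># /etc/sysconfig/sbd on every node -- no SBD_DEVICE line means
# diskless, watchdog-only operation
SBD_WATCHDOG_DEV=/dev/watchdog
SBD_WATCHDOG_TIMEOUT=5

# enable sbd and tell pacemaker how long until an unseen node's
# watchdog can be assumed to have fired
pcs stonith sbd enable
pcs property set stonith-watchdog-timeout=10s</pre>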
<br>
<blockquote type="cite"
cite="mid:8A8F98BF0CDFD5438985D4397358E6C7ECC85F@idc.vanu.com">
<div class="WordSection1">
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p></o:p></span></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Thanx.<o:p></o:p></span></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"><o:p> </o:p></span></p>
<div style="border:none;border-left:solid blue 1.5pt;padding:0in
0in 0in 4.0pt">
<div>
<div style="border:none;border-top:solid #B5C4DF
1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal"><b><span
style="font-size:10.0pt;font-family:"Tahoma","sans-serif";color:windowtext">From:</span></b><span
style="font-size:10.0pt;font-family:"Tahoma","sans-serif";color:windowtext">
Klaus Wenninger [<a class="moz-txt-link-freetext" href="mailto:kwenning@redhat.com">mailto:kwenning@redhat.com</a>] <br>
<b>Sent:</b> Monday, July 24, 2017 11:31 PM<br>
<b>To:</b> Cluster Labs - All topics related to
open-source clustering welcomed; Prasad, Shashank<br>
<b>Subject:</b> Re: [ClusterLabs] Two nodes cluster
issue<o:p></o:p></span></p>
</div>
</div>
<p class="MsoNormal"><o:p> </o:p></p>
<div>
<p class="MsoNormal">On 07/24/2017 07:32 PM, Prasad,
Shashank wrote:<o:p></o:p></p>
</div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Sometimes
IPMI fence devices use shared power of the node, and it
cannot be avoided.</span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">In
such scenarios the HA cluster is NOT able to handle the
power failure of a node, since the power is shared with
its own fence device.</span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">The
failure of IPMI based fencing can also exist due to
other reasons also.</span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"> </span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">A
failure to fence the failed node will cause cluster to
be marked UNCLEAN.</span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">To
get over it, the following command needs to be invoked
on the surviving node.</span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"> </span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">pcs
stonith confirm <failed_node_name> --force</span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"> </span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">This
can be automated by hooking a recovery script, when the
the Stonith resource ‘Timed Out’ event.</span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">To
be more specific, the Pacemaker Alerts can be used for
watch for Stonith timeouts and failures.</span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">In
that script, all that’s essentially to be executed is
the aforementioned command.</span><o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
If I get you right here you can disable fencing then in the
first place.<br>
Actually quorum-based-watchdog-fencing is the way to do this
in a<br>
safe manner. This of course assumes you have a proper source
for<br>
quorum in your 2-node-setup with e.g. qdevice or using a
shared<br>
disk with sbd (not directly pacemaker quorum here but
similar thing<br>
handled inside sbd).<br>
<br>
<br>
<o:p></o:p></p>
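<p>Purely for illustration, the alert hook described above could look roughly like
this (script path, sudoers rule and variable handling are assumptions to adapt;
as noted above, auto-confirming a fence this way is only as safe as your
certainty that the node really is down):</p>
<pre>#!/bin/sh
# hypothetical /usr/local/bin/stonith-failure-hook.sh
# register with:  pcs alert create path=/usr/local/bin/stonith-failure-hook.sh
# Pacemaker passes CRM_alert_* variables to alert agents; kind "fencing"
# with a non-zero return code indicates a failed fence action.
if [ "$CRM_alert_kind" = "fencing" ] && [ "$CRM_alert_rc" != "0" ]; then
    # alerts run as 'hacluster', hence a sudoers rule such as:
    #   hacluster ALL=(root) NOPASSWD: /usr/sbin/pcs stonith confirm *
    sudo pcs stonith confirm "$CRM_alert_node" --force
fi</pre>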
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Since
the alerts are issued from ‘hacluster’ login, sudo
permissions for ‘hacluster’ needs to be configured.</span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"> </span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D">Thanx.</span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"> </span><o:p></o:p></p>
<p class="MsoNormal"><span
style="font-size:11.0pt;font-family:"Calibri","sans-serif";color:#1F497D"> </span><o:p></o:p></p>
<div style="border:none;border-left:solid blue
1.5pt;padding:0in 0in 0in 4.0pt">
<div>
<div style="border:none;border-top:solid #B5C4DF
1.0pt;padding:3.0pt 0in 0in 0in">
<p class="MsoNormal"><b><span
style="font-size:10.0pt;font-family:"Tahoma","sans-serif";color:windowtext">From:</span></b><span
style="font-size:10.0pt;font-family:"Tahoma","sans-serif";color:windowtext">
Klaus Wenninger [<a
href="mailto:kwenning@redhat.com"
moz-do-not-send="true">mailto:kwenning@redhat.com</a>]
<br>
<b>Sent:</b> Monday, July 24, 2017 9:24 PM<br>
<b>To:</b> Kristián Feldsam; Cluster Labs - All
topics related to open-source clustering welcomed<br>
<b>Subject:</b> Re: [ClusterLabs] Two nodes cluster
issue</span><o:p></o:p></p>
</div>
</div>
<p class="MsoNormal"> <o:p></o:p></p>
<div>
<p class="MsoNormal">On 07/24/2017 05:37 PM, Kristián
Feldsam wrote:<o:p></o:p></p>
</div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<p class="MsoNormal">I personally think that power off
node by switched pdu is more safe, or not?<o:p></o:p></p>
</blockquote>
<p class="MsoNormal"><br>
True if that is working in you environment. If you can't
do a physical setup<br>
where you aren't simultaneously loosing connection to both
your node and<br>
the switch-device (or you just want to cover cases where
that happens)<br>
you have to come up with something else.<br>
<br>
<br>
<br>
<o:p></o:p></p>
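<p>For reference, a switched-PDU fence device is typically set up along these
lines (agent name, credentials and outlet mapping below are placeholders, and
parameter names differ between fence-agent versions):</p>
<pre># one PDU outlet per node, mapped as "node:outlet"
pcs stonith create pdu_fence fence_apc_snmp \
    ipaddr=pdu.example.com login=apc passwd=secret \
    pcmk_host_map="node1:1;node2:2"
pcs stonith show pdu_fence</pre>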
<div>
<p class="MsoNormal"><br>
<span
style="font-family:"Baskerville","serif"">S
pozdravem Kristián Feldsam<br>
Tel.: +420 773 303 353, +421 944 137 535<br>
E-mail.: <a href="mailto:support@feldhost.cz"
moz-do-not-send="true">support@feldhost.cz</a><br>
<br>
<a href="http://www.feldhost.cz"
moz-do-not-send="true">www.feldhost.cz</a> -<span
class="apple-converted-space"> </span><b>Feld</b>Host</span>™<span
class="apple-converted-space"><span
style="font-family:"Baskerville","serif""> </span></span><span
style="font-family:"Baskerville","serif"">–
profesionální hostingové a serverové služby za
adekvátní ceny.<br>
<br>
FELDSAM s.r.o.<br>
V rohu 434/3<br>
Praha 4 – Libuš, PSČ 142 00<br>
IČ: 290 60 958, DIČ: CZ290 60 958<br>
C 200350 vedená u Městského soudu v Praze<br>
<br>
Banka: Fio banka a.s.<br>
Číslo účtu: 2400330446/2010<br>
BIC: FIOBCZPPXX<br>
IBAN: CZ82 2010 0000 0024 0033 0446</span> <o:p></o:p></p>
</div>
<p class="MsoNormal"> <o:p></o:p></p>
<div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<div>
<p class="MsoNormal">On 24 Jul 2017, at 17:27, Klaus
Wenninger <<a href="mailto:kwenning@redhat.com"
moz-do-not-send="true">kwenning@redhat.com</a>>
wrote:<o:p></o:p></p>
</div>
<p class="MsoNormal"> <o:p></o:p></p>
<div>
<div>
<p class="MsoNormal" style="background:white"><span
style="font-family:"Baskerville","serif"">On
07/24/2017 05:15 PM, Tomer Azran wrote:</span><o:p></o:p></p>
</div>
<blockquote style="margin-top:5.0pt;margin-bottom:5.0pt">
<div>
<p class="MsoNormal" style="background:white"><span
style="font-size:11.0pt;font-family:"Arial","sans-serif"">I
still don't understand why the qdevice concept
doesn't help on this situation. Since the
master node is down, I would expect the quorum
to declare it as dead.</span><o:p></o:p></p>
</div>
<div>
<p class="MsoNormal" style="background:white"><span
style="font-size:11.0pt;font-family:"Arial","sans-serif"">Why
doesn't it happens?</span><o:p></o:p></p>
</div>
</blockquote>
<p class="MsoNormal"><span
style="font-family:"Baskerville","serif""><br>
That is not how quorum works. It just limits the
decision-making to the quorate subset of the
cluster.<br>
Still the unknown nodes are not sure to be down.<br>
That is why I suggested to have quorum-based
watchdog-fencing with sbd.<br>
That would assure that within a certain time all
nodes of the non-quorate part<br>
of the cluster are down.<br>
<br>
<br>
<br>
</span><o:p></o:p></p>
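<p>To make that distinction concrete, the quorum view and the node-state view
can be inspected separately (output omitted; availability depends on the
installed tools):</p>
<pre># quorum view: how many votes this partition currently has
corosync-quorumtool -s

# cluster view: per-node state; a node that could not be fenced
# remains UNCLEAN rather than being assumed down
crm_mon -1</pre>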
<p class="MsoNormal" style="margin-bottom:12.0pt"><span
style="font-family:"Baskerville","serif""><br>
<br>
<br>
</span><o:p></o:p></p>
<div>
<p class="MsoNormal" style="margin-bottom:12.0pt"><span
style="font-family:"Baskerville","serif"">On Mon,
Jul 24, 2017 at 4:15 PM +0300, "Dmitri Maziuk"<span
class="apple-converted-space"> </span><<a
href="mailto:dmitri.maziuk@gmail.com"
target="_blank" moz-do-not-send="true">dmitri.maziuk@gmail.com</a>><span
class="apple-converted-space"> </span>wrote:</span><o:p></o:p></p>
<div>
<pre>On 2017-07-24 07:51, Tomer Azran wrote:
> We don't have the ability to use it.
> Is that the only solution?

No, but I'd recommend thinking about it first. Are you sure you will
care about your cluster working when your server room is on fire? 'Cause
unless you have halon suppression, your server room is a complete
write-off anyway. (Think water from sprinklers hitting rich chunky volts
in the servers.)

Dima</pre>
</div>
</div>
<p class="MsoNormal"><span
style="font-family:"Baskerville","serif""><br>
<br>
<br>
<br>
</span><o:p></o:p></p>
<p class="MsoNormal"
style="mso-margin-top-alt:auto;mso-margin-bottom-alt:auto;font-variant-caps:
normal;text-align:start;-webkit-text-stroke-width:
0px;background-color:rgb(255,
255,
255);word-spacing:0px"><span
style="font-family:"Baskerville","serif""> </span><o:p></o:p></p>
<pre style="background:white;font-variant-caps: normal;text-align:start;-webkit-text-stroke-width: 0px;word-spacing:0px"><span style="font-size:12.0pt">-- </span><o:p></o:p></pre>
<pre style="background:white"><span style="font-size:12.0pt">Klaus Wenninger</span><o:p></o:p></pre>
<pre style="background:white"><span style="font-size:12.0pt"> </span><o:p></o:p></pre>
<pre style="background:white"><span style="font-size:12.0pt">Senior Software Engineer, EMEA ENG Openstack Infrastructure</span><o:p></o:p></pre>
<pre style="background:white"><span style="font-size:12.0pt"> </span><o:p></o:p></pre>
<pre style="background:white"><span style="font-size:12.0pt">Red Hat</span><o:p></o:p></pre>
<pre style="background:white"><span style="font-size:12.0pt"> </span><o:p></o:p></pre>
<pre style="background:white"><span style="font-size:12.0pt"><a href="mailto:kwenning@redhat.com" moz-do-not-send="true">kwenning@redhat.com</a> </span><o:p></o:p></pre>
<p class="MsoNormal"><span
style="font-family:"Baskerville","serif"">_______________________________________________<br>
Users mailing list:<span
class="apple-converted-space"> </span></span><a
href="mailto:Users@clusterlabs.org"
moz-do-not-send="true"><span
style="font-family:"Baskerville","serif";background:white">Users@clusterlabs.org</span></a><span
style="font-family:"Baskerville","serif""><br>
</span><a
href="http://lists.clusterlabs.org/mailman/listinfo/users"
moz-do-not-send="true"><span
style="font-family:"Baskerville","serif"">http://lists.clusterlabs.org/mailman/listinfo/users</span></a><span
style="font-family:"Baskerville","serif""><br>
<br>
Project Home:<span class="apple-converted-space"> </span></span><a
href="http://www.clusterlabs.org/"
moz-do-not-send="true"><span
style="font-family:"Baskerville","serif";background:white">http://www.clusterlabs.org</span></a><span
style="font-family:"Baskerville","serif""><br>
Getting started:<span
class="apple-converted-space"> </span></span><a
href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf"
moz-do-not-send="true"><span
style="font-family:"Baskerville","serif"">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</span></a><span
style="font-family:"Baskerville","serif""><br>
Bugs:<span class="apple-converted-space"> </span></span><a
href="http://bugs.clusterlabs.org/"
moz-do-not-send="true"><span
style="font-family:"Baskerville","serif";background:white">http://bugs.clusterlabs.org</span></a><o:p></o:p></p>
</div>
</blockquote>
</div>
<p class="MsoNormal"> <o:p></o:p></p>
</div>
<p class="MsoNormal"><br>
<br>
<br>
<o:p></o:p></p>
<pre>_______________________________________________
Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a>
<a href="http://lists.clusterlabs.org/mailman/listinfo/users">http://lists.clusterlabs.org/mailman/listinfo/users</a>

Project Home: <a href="http://www.clusterlabs.org">http://www.clusterlabs.org</a>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a>
Bugs: <a href="http://bugs.clusterlabs.org">http://bugs.clusterlabs.org</a></pre>
</div>
</div>
</blockquote>
<br>
</body>
</html>