<div dir="ltr">Thanks for your response, Dejan.<div><br></div><div>I do not know yet whether this has anything to do with endianness. </div><div>FWIW, there could be something quirky with the system, so I'm keeping all options open. :)</div><div><br></div><div>I added some debug prints to understand what's happening under the hood.</div><div><br></div><div><b>Success case (on x86 machine):</b></div><div><div>[TOTEM ] entering OPERATIONAL state.</div><div>[TOTEM ] A new membership (<a href="http://10.206.1.7:137220">10.206.1.7:137220</a>) was formed. Members joined: 181272839</div><div>[TOTEM ] Nikhil: Inside messages_deliver_to_app. end_point=0, my_high_delivered=0</div><div>[TOTEM ] Nikhil: Inside messages_deliver_to_app. end_point=1, my_high_delivered=0</div><div>[TOTEM ] Delivering 0 to 1</div><div>[TOTEM ] Delivering MCAST message with seq 1 to pending delivery queue</div><div>[SYNC ] Nikhil: Inside sync_deliver_fn. header->id=1<br></div><div>[TOTEM ] Nikhil: Inside messages_deliver_to_app. end_point=2, my_high_delivered=1</div><div>[TOTEM ] Delivering 1 to 2</div><div>[TOTEM ] Delivering MCAST message with seq 2 to pending delivery queue</div><div>[SYNC ] Nikhil: Inside sync_deliver_fn. 
header->id=0<br></div><div>[SYNC ] Nikhil: Entering sync_barrier_handler</div><div>[SYNC ] Committing synchronization for corosync configuration map access</div></div><div>.<br></div><div><div>[TOTEM ] Delivering 2 to 4</div><div>[TOTEM ] Delivering MCAST message with seq 3 to pending delivery queue</div><div>[TOTEM ] Delivering MCAST message with seq 4 to pending delivery queue</div><div>[CPG ] comparing: sender r(0) ip(10.206.1.7) ; members(old:0 left:0)</div><div>[CPG ] chosen downlist: sender r(0) ip(10.206.1.7) ; members(old:0 left:0)</div><div>[SYNC ] Committing synchronization for corosync cluster closed process group service v1.01</div><div><b>[MAIN ] Completed service synchronization, ready to provide service.</b></div></div><div><br></div><div><br></div><div><b>Failure case (on ppc):</b></div><div><div>[TOTEM ] entering OPERATIONAL state.</div><div>[TOTEM ] A new membership (<a href="http://10.207.24.101:16">10.207.24.101:16</a>) was formed. Members joined: 181344357</div><div>[TOTEM ] Nikhil: Inside messages_deliver_to_app. end_point=0, my_high_delivered=0</div><div>[TOTEM ] Nikhil: Inside messages_deliver_to_app. end_point=1, my_high_delivered=0</div><div>[TOTEM ] Delivering 0 to 1</div><div>[TOTEM ] Delivering MCAST message with seq 1 to pending delivery queue</div><div>[SYNC ] Nikhil: Inside sync_deliver_fn header->id=1<br></div><div>[TOTEM ] Nikhil: Inside messages_deliver_to_app. end_point=1, my_high_delivered=1</div><div>[TOTEM ] Nikhil: Inside messages_deliver_to_app. 
end_point=1, my_high_delivered=1</div></div><div>The message above repeats continuously.</div><div><br></div><div>So it appears that in the failure case I do not receive the messages with sequence numbers 2-4.</div><div>If somebody can throw out some ideas, that'll help a lot.</div><div><br></div><div>-Thanks</div><div>Nikhil</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, May 3, 2016 at 5:26 PM, Dejan Muhamedagic <span dir="ltr"><<a href="mailto:dejanmm@fastmail.fm" target="_blank">dejanmm@fastmail.fm</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Hi,<br>
<span class=""><br>
On Mon, May 02, 2016 at 08:54:09AM +0200, Jan Friesse wrote:<br>
> >As your hardware is probably capable of running ppcle and if you have an<br>
> >environment<br>
> >at hand without too much effort it might pay off to try that.<br>
> >There are of course distributions out there that support corosync on<br>
> >big-endian architectures,<br>
> >but I don't know if there is an automated regression for corosync on<br>
> >big-endian that<br>
> >would catch big-endian issues right away with something as current as<br>
> >your 2.3.5.<br>
><br>
> No we are not testing big-endian.<br>
><br>
> So I totally agree with Klaus. Give ppcle a try. Also make sure all<br>
> nodes are little-endian. Corosync should work in a mixed BE/LE<br>
> environment, but because it's not tested, it may not work (and that's a<br>
> bug, so if ppcle works I will try to fix BE).<br>
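For illustration, the failure mode being suspected here is easy to reproduce: a 32-bit sequence number packed in host order on a little-endian x86 node and unpacked without byte-swapping on a big-endian ppc node comes out wildly wrong. (This is a hypothetical framing to show the mechanism; the real totem wire format is more involved.)

```python
import struct

seq = 2  # a 32-bit sequence number, e.g. a totem message seq

# As emitted by a little-endian (x86) sender in host order:
le_bytes = struct.pack("<I", seq)

# A big-endian (ppc) receiver that skips byte-swapping misreads it:
print(struct.unpack(">I", le_bytes)[0])   # 33554432, not 2

# Agreeing on one wire order (network order) on both ends fixes it:
wire = struct.pack("!I", seq)
print(struct.unpack("!I", wire)[0])       # 2
```

A misread sequence number like this would explain a node waiting forever for messages it believes it has not yet received.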
<br>
</span>I tested a cluster consisting of big-endian/little-endian nodes<br>
(s390 and x86-64), but that was a while ago. IIRC, all relevant<br>
bugs in corosync got fixed at that time. I don't know what the<br>
situation is with the latest version.<br>
<br>
Thanks,<br>
<br>
Dejan<br>
<div class="HOEnZb"><div class="h5"><br>
> Regards,<br>
> Honza<br>
><br>
> ><br>
> >Regards,<br>
> >Klaus<br>
> ><br>
> >On 05/02/2016 06:44 AM, Nikhil Utane wrote:<br>
> >>Re-sending as I don't see my post on the thread.<br>
> >><br>
> >>On Sun, May 1, 2016 at 4:21 PM, Nikhil Utane<br>
> >><<a href="mailto:nikhil.subscribed@gmail.com">nikhil.subscribed@gmail.com</a> <mailto:<a href="mailto:nikhil.subscribed@gmail.com">nikhil.subscribed@gmail.com</a>>> wrote:<br>
> >><br>
> >> Hi,<br>
> >><br>
> >> Looking for some guidance here as we are completely blocked<br>
> >> otherwise :(.<br>
> >><br>
> >> -Regards<br>
> >> Nikhil<br>
> >><br>
> >> On Fri, Apr 29, 2016 at 6:11 PM, Sriram <<a href="mailto:sriram.ec@gmail.com">sriram.ec@gmail.com</a><br>
> >> <mailto:<a href="mailto:sriram.ec@gmail.com">sriram.ec@gmail.com</a>>> wrote:<br>
> >><br>
> >> Corrected the subject.<br>
> >><br>
> >> We went ahead and captured corosync debug logs for our ppc board.<br>
> >> After analyzing the logs and comparing them with the successful logs<br>
> >> (from the x86 machine),<br>
> >> we didn't find *"[ MAIN ] Completed service synchronization,<br>
> >> ready to provide service.*" in the ppc logs.<br>
> >> So it looks like corosync is not in a position to accept<br>
> >> connections from Pacemaker.<br>
> >> I even tried with the new corosync.conf, with no success.<br>
> >><br>
> >> Any hints on this issue would be really helpful.<br>
> >><br>
> >> Attaching ppc_notworking.log, x86_working.log, corosync.conf.<br>
> >><br>
> >> Regards,<br>
> >> Sriram<br>
> >><br>
> >><br>
> >><br>
> >> On Fri, Apr 29, 2016 at 2:44 PM, Sriram <<a href="mailto:sriram.ec@gmail.com">sriram.ec@gmail.com</a><br>
> >> <mailto:<a href="mailto:sriram.ec@gmail.com">sriram.ec@gmail.com</a>>> wrote:<br>
> >><br>
> >> Hi,<br>
> >><br>
> >> I went ahead and made some changes in the file system (I<br>
> >> brought in /etc/init.d/corosync, /etc/init.d/pacemaker and<br>
> >> /etc/sysconfig). After that I was able to run "pcs<br>
> >> cluster start",<br>
> >> but it failed with the following error:<br>
> >> # pcs cluster start<br>
> >> Starting Cluster...<br>
> >> Starting Pacemaker Cluster Manager[FAILED]<br>
> >> Error: unable to start pacemaker<br>
> >><br>
> >> And in the /var/log/pacemaker.log, I saw these errors<br>
> >> pacemakerd: info: mcp_read_config: cmap connection<br>
> >> setup failed: CS_ERR_TRY_AGAIN. Retrying in 4s<br>
> >> Apr 29 08:53:47 [15863] node_cu pacemakerd: info:<br>
> >> mcp_read_config: cmap connection setup failed:<br>
> >> CS_ERR_TRY_AGAIN. Retrying in 5s<br>
> >> Apr 29 08:53:52 [15863] node_cu pacemakerd: warning:<br>
> >> mcp_read_config: Could not connect to Cluster<br>
> >> Configuration Database API, error 6<br>
> >> Apr 29 08:53:52 [15863] node_cu pacemakerd: notice:<br>
> >> main: Could not obtain corosync config data, exiting<br>
> >> Apr 29 08:53:52 [15863] node_cu pacemakerd: info:<br>
> >> crm_xml_cleanup: Cleaning up memory from libxml2<br>
> >><br>
> >><br>
> >> And in the /var/log/Debuglog, I saw these errors coming<br>
> >> from corosync<br>
> >> 20160429 085347.487050 airv_cu<br>
> >> daemon.warn corosync[12857]: [QB ] Denied connection,<br>
> >> is not ready (12857-15863-14)<br>
> >> 20160429 085347.487067 airv_cu<br>
> >> <a href="http://daemon.info" rel="noreferrer" target="_blank">daemon.info</a> corosync[12857]: [QB<br>
> >> ] Denied connection, is not ready (12857-15863-14)<br>
> >><br>
> >><br>
> >> I browsed the code of libqb to find that it is failing in<br>
> >><br>
> >> <a href="https://github.com/ClusterLabs/libqb/blob/master/lib/ipc_setup.c" rel="noreferrer" target="_blank">https://github.com/ClusterLabs/libqb/blob/master/lib/ipc_setup.c</a><br>
> >><br>
> >> Line 600 :<br>
> >> handle_new_connection function<br>
> >><br>
> >> Line 637:<br>
> >> if (auth_result == 0 &&<br>
> >> c->service->serv_fns.connection_accept) {<br>
> >> res = c->service->serv_fns.connection_accept(c,<br>
> >> c->euid, c->egid);<br>
> >> }<br>
> >> if (res != 0) {<br>
> >> goto send_response;<br>
> >> }<br>
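The snippet quoted above gates every new IPC client on the service's connection_accept callback. A minimal sketch of that pattern (hypothetical names and return values, not the actual libqb/corosync code) shows why pacemakerd keeps seeing "Denied connection, is not ready" until the daemon reports "Completed service synchronization":

```python
# Hypothetical sketch of the accept-gate pattern from ipc_setup.c:
# the daemon's connection_accept callback vetoes clients until it is ready.
EAGAIN = 11

class Service:
    def __init__(self):
        self.ready = False  # flipped once service synchronization completes

    def connection_accept(self, euid, egid):
        # non-zero result means "refuse this client for now"
        return 0 if self.ready else -EAGAIN

def handle_new_connection(service):
    # mirrors the quoted logic: a non-zero res jumps to send_response,
    # which reports the denial back to the connecting client
    res = service.connection_accept(euid=0, egid=0)
    return "accepted" if res == 0 else "denied: is not ready"

svc = Service()
print(handle_new_connection(svc))   # denied: is not ready
svc.ready = True
print(handle_new_connection(svc))   # accepted
```

If that reading is right, the denial is a symptom, not the cause: corosync never finishes synchronization on the ppc board, so it never starts accepting clients.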
> >><br>
> >> Any hints on this issue would be really helpful for me to<br>
> >> go ahead.<br>
> >> Please let me know if any logs are required,<br>
> >><br>
> >> Regards,<br>
> >> Sriram<br>
> >><br>
> >> On Thu, Apr 28, 2016 at 2:42 PM, Sriram<br>
> >> <<a href="mailto:sriram.ec@gmail.com">sriram.ec@gmail.com</a> <mailto:<a href="mailto:sriram.ec@gmail.com">sriram.ec@gmail.com</a>>> wrote:<br>
> >><br>
> >> Thanks Ken and Emmanuel.<br>
> >> It's a big-endian machine. I will try running "pcs<br>
> >> cluster setup" and "pcs cluster start".<br>
> >> Inside cluster.py, "service pacemaker start" and<br>
> >> "service corosync start" are executed to bring up<br>
> >> pacemaker and corosync.<br>
> >> Those service scripts and the infrastructure needed to<br>
> >> bring up the processes in that manner<br>
> >> don't exist on my board.<br>
> >> As it is an embedded board with limited memory,<br>
> >> a full-fledged Linux is not installed.<br>
> >> Just curious to know what could be the reason<br>
> >> pacemaker throws that error:<br>
> >><br>
> >> /"cmap connection setup failed: CS_ERR_TRY_AGAIN.<br>
> >> Retrying in 1s"<br>
> >><br>
> >> /<br>
> >> Thanks for response.<br>
> >><br>
> >> Regards,<br>
> >> Sriram.<br>
> >><br>
> >> On Thu, Apr 28, 2016 at 8:55 AM, Ken Gaillot<br>
> >> <<a href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a> <mailto:<a href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>>> wrote:<br>
> >><br>
> >> On 04/27/2016 11:25 AM, emmanuel segura wrote:<br>
> >> > you need to use pcs to do everything: pcs<br>
> >> cluster setup and pcs<br>
> >> > cluster start; try to use the Red Hat docs for<br>
> >> more information.<br>
> >><br>
> >> Agreed -- pcs cluster setup will create a proper<br>
> >> corosync.conf for you.<br>
> >> Your corosync.conf below uses corosync 1 syntax,<br>
> >> and there were<br>
> >> significant changes in corosync 2. In particular,<br>
> >> you don't need the<br>
> >> file created in step 4, because pacemaker is no<br>
> >> longer launched via a<br>
> >> corosync plugin.<br>
> >><br>
> >> > 2016-04-27 17:28 GMT+02:00 Sriram<br>
> >> <<a href="mailto:sriram.ec@gmail.com">sriram.ec@gmail.com</a> <mailto:<a href="mailto:sriram.ec@gmail.com">sriram.ec@gmail.com</a>>>:<br>
> >> >> Dear All,<br>
> >> >><br>
> >> >> I'm trying to use pacemaker and corosync for<br>
> >> the clustering requirement that<br>
> >> >> came up recently.<br>
> >> >> We have cross-compiled corosync, pacemaker and<br>
> >> pcs (python) for the ppc<br>
> >> >> environment (the target board where pacemaker and<br>
> >> corosync are supposed to run).<br>
> >> >> I'm having trouble bringing up pacemaker in<br>
> >> that environment, though I could<br>
> >> >> successfully bring up corosync.<br>
> >> >> Any help is welcome.<br>
> >> >><br>
> >> >> I'm using these versions of pacemaker and corosync:<br>
> >> >> [root@node_cu pacemaker]# corosync -v<br>
> >> >> Corosync Cluster Engine, version '2.3.5'<br>
> >> >> Copyright (c) 2006-2009 Red Hat, Inc.<br>
> >> >> [root@node_cu pacemaker]# pacemakerd -$<br>
> >> >> Pacemaker 1.1.14<br>
> >> >> Written by Andrew Beekhof<br>
> >> >><br>
> >> >> For running corosync, I did the following.<br>
> >> >> 1. Created the following directories,<br>
> >> >> /var/lib/pacemaker<br>
> >> >> /var/lib/corosync<br>
> >> >> /var/lib/pacemaker<br>
> >> >> /var/lib/pacemaker/cores<br>
> >> >> /var/lib/pacemaker/pengine<br>
> >> >> /var/lib/pacemaker/blackbox<br>
> >> >> /var/lib/pacemaker/cib<br>
> >> >><br>
> >> >><br>
> >> >> 2. Created a file called corosync.conf under<br>
> >> /etc/corosync folder with the<br>
> >> >> following contents<br>
> >> >><br>
> >> >> totem {<br>
> >> >><br>
> >> >> version: 2<br>
> >> >> token: 5000<br>
> >> >> token_retransmits_before_loss_const: 20<br>
> >> >> join: 1000<br>
> >> >> consensus: 7500<br>
> >> >> vsftype: none<br>
> >> >> max_messages: 20<br>
> >> >> secauth: off<br>
> >> >> cluster_name: mycluster<br>
> >> >> transport: udpu<br>
> >> >> threads: 0<br>
> >> >> clear_node_high_bit: yes<br>
> >> >><br>
> >> >> interface {<br>
> >> >> ringnumber: 0<br>
> >> >> # The following three values<br>
> >> need to be set based on your<br>
> >> >> environment<br>
> >> >> bindnetaddr: 10.x.x.x<br>
> >> >> mcastaddr: 226.94.1.1<br>
> >> >> mcastport: 5405<br>
> >> >> }<br>
> >> >> }<br>
> >> >><br>
> >> >> logging {<br>
> >> >> fileline: off<br>
> >> >> to_syslog: yes<br>
> >> >> to_stderr: no<br>
> >> >> to_syslog: yes<br>
> >> >> logfile: /var/log/corosync.log<br>
> >> >> syslog_facility: daemon<br>
> >> >> debug: on<br>
> >> >> timestamp: on<br>
> >> >> }<br>
> >> >><br>
> >> >> amf {<br>
> >> >> mode: disabled<br>
> >> >> }<br>
> >> >><br>
> >> >> quorum {<br>
> >> >> provider: corosync_votequorum<br>
> >> >> }<br>
> >> >><br>
> >> >> nodelist {<br>
> >> >> node {<br>
> >> >> ring0_addr: node_cu<br>
> >> >> nodeid: 1<br>
> >> >> }<br>
> >> >> }<br>
> >> >><br>
> >> >> 3. Created authkey under /etc/corosync<br>
> >> >><br>
> >> >> 4. Created a file called pcmk under<br>
> >> /etc/corosync/service.d and contents as<br>
> >> >> below,<br>
> >> >> cat pcmk<br>
> >> >> service {<br>
> >> >> # Load the Pacemaker Cluster Resource<br>
> >> Manager<br>
> >> >> name: pacemaker<br>
> >> >> ver: 1<br>
> >> >> }<br>
> >> >><br>
> >> >> 5. Added the node name "node_cu" in /etc/hosts<br>
> >> with 10.X.X.X ip<br>
> >> >><br>
> >> >> 6. ./corosync -f -p & --> this step started<br>
> >> corosync<br>
> >> >><br>
> >> >> [root@node_cu pacemaker]# netstat -alpn | grep<br>
> >> -i coros<br>
> >> >> udp 0 0 10.X.X.X:61841 0.0.0.0:*<br>
> >> >> 9133/corosync<br>
> >> >> udp 0 0 10.X.X.X:5405 0.0.0.0:*<br>
> >> >> 9133/corosync<br>
> >> >> unix 2 [ ACC ] STREAM LISTENING<br>
> >> 148888 9133/corosync<br>
> >> >> @quorum<br>
> >> >> unix 2 [ ACC ] STREAM LISTENING<br>
> >> 148884 9133/corosync<br>
> >> >> @cmap<br>
> >> >> unix 2 [ ACC ] STREAM LISTENING<br>
> >> 148887 9133/corosync<br>
> >> >> @votequorum<br>
> >> >> unix 2 [ ACC ] STREAM LISTENING<br>
> >> 148885 9133/corosync<br>
> >> >> @cfg<br>
> >> >> unix 2 [ ACC ] STREAM LISTENING<br>
> >> 148886 9133/corosync<br>
> >> >> @cpg<br>
> >> >> unix 2 [ ] DGRAM<br>
> >> 148840 9133/corosync<br>
> >> >><br>
> >> >> 7. ./pacemakerd -f & gives the following error<br>
> >> and exits.<br>
> >> >> [root@node_cu pacemaker]# pacemakerd -f<br>
> >> >> cmap connection setup failed:<br>
> >> CS_ERR_TRY_AGAIN. Retrying in 1s<br>
> >> >> cmap connection setup failed:<br>
> >> CS_ERR_TRY_AGAIN. Retrying in 2s<br>
> >> >> cmap connection setup failed:<br>
> >> CS_ERR_TRY_AGAIN. Retrying in 3s<br>
> >> >> cmap connection setup failed:<br>
> >> CS_ERR_TRY_AGAIN. Retrying in 4s<br>
> >> >> cmap connection setup failed:<br>
> >> CS_ERR_TRY_AGAIN. Retrying in 5s<br>
> >> >> Could not connect to Cluster Configuration<br>
> >> Database API, error 6<br>
> >> >><br>
> >> >> Can you please point me, what is missing in<br>
> >> these steps ?<br>
> >> >><br>
> >> >> Before trying these steps, I tried running "pcs<br>
> >> cluster start", but that<br>
> >> >> command fails because the "service" script is not found,<br>
> >> as the root filesystem<br>
> >> >> contains neither /etc/init.d/ nor<br>
> >> /sbin/service.<br>
> >> >><br>
> >> >> So, the plan is to bring up corosync and<br>
> >> pacemaker manually, later do the<br>
> >> >> cluster configuration using "pcs" commands.<br>
> >> >><br>
> >> >> Regards,<br>
> >> >> Sriram<br>
> >> >><br>
> >> >> _______________________________________________<br>
> >> >> Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
> >> <mailto:<a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a>><br>
> >> >> <a href="http://clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://clusterlabs.org/mailman/listinfo/users</a><br>
> >> >><br>
> >> >> Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
> >> >> Getting started:<br>
> >> <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
> >> >> Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
> >> >><br>
> >> ><br>
> >> ><br>
> >> ><br>
> >><br>
> >><br>
> >><br>
> >><br>
> >><br>
> >><br>
> >><br>
> >><br>
> >><br>
> >><br>
> >><br>
> >><br>
> ><br>
> ><br>
> ><br>
><br>
><br>
<br>
</div></div></blockquote></div><br></div>