<div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Jun 28, 2016 at 5:04 AM, Andrew Beekhof <span dir="ltr"><<a href="mailto:abeekhof@redhat.com" target="_blank">abeekhof@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><span class="">On Sun, Jun 26, 2016 at 6:05 AM, Marcin Dulak <<a href="mailto:marcin.dulak@gmail.com">marcin.dulak@gmail.com</a>> wrote:<br>
> Hi,<br>
><br>
> I'm trying to get familiar with STONITH Block Devices (SBD) on a 3-node<br>
> CentOS7 cluster built in VirtualBox.<br>
> The complete setup is available at<br>
> <a href="https://github.com/marcindulak/vagrant-sbd-tutorial-centos7.git" rel="noreferrer" target="_blank">https://github.com/marcindulak/vagrant-sbd-tutorial-centos7.git</a><br>
> so hopefully with some help I'll be able to make it work.<br>
><br>
> Question 1:<br>
> The shared device /dev/sdb1 is VirtualBox's "shareable hard disk"<br>
> <a href="https://www.virtualbox.org/manual/ch05.html#hdimagewrites" rel="noreferrer" target="_blank">https://www.virtualbox.org/manual/ch05.html#hdimagewrites</a><br>
> will SBD fencing work with that type of storage?<br>
<br>
</span>unknown<br>
<span class=""><br>
><br>
> I start the cluster using vagrant_1.8.1 and virtualbox-4.3 with:<br>
> $ vagrant up # takes ~15 minutes<br>
><br>
> The setup brings up the nodes, installs the necessary packages, and prepares<br>
> for the configuration of the pcs cluster.<br>
> You can see which scripts the nodes execute at the bottom of the<br>
> Vagrantfile.<br>
> While there is 'yum -y install sbd' on CentOS7, the fence_sbd agent has not<br>
> been packaged yet.<br>
<br>
</span>you're not supposed to use it<br>
<span class=""><br>
> Therefore I rebuilt the Fedora 24 package using the latest<br>
> <a href="https://github.com/ClusterLabs/fence-agents/archive/v4.0.22.tar.gz" rel="noreferrer" target="_blank">https://github.com/ClusterLabs/fence-agents/archive/v4.0.22.tar.gz</a><br>
> plus the update to the fence_sbd from<br>
> <a href="https://github.com/ClusterLabs/fence-agents/pull/73" rel="noreferrer" target="_blank">https://github.com/ClusterLabs/fence-agents/pull/73</a><br>
><br>
> The configuration is inspired by<br>
> <a href="https://www.novell.com/support/kb/doc.php?id=7009485" rel="noreferrer" target="_blank">https://www.novell.com/support/kb/doc.php?id=7009485</a> and<br>
> <a href="https://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_storage_protect_fencing.html" rel="noreferrer" target="_blank">https://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_storage_protect_fencing.html</a><br>
><br>
> Question 2:<br>
> After reading <a href="http://blog.clusterlabs.org/blog/2015/sbd-fun-and-profit" rel="noreferrer" target="_blank">http://blog.clusterlabs.org/blog/2015/sbd-fun-and-profit</a> I<br>
> expect that with just one stonith resource configured<br>
<br>
</span>there shouldn't be any stonith resources configured<br></blockquote><div><br><div>It's a test setup. I found <a href="https://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_storage_protect_fencing.html" rel="noreferrer" target="_blank">https://www.suse.com/documentation/sle-ha-12/book_sleha/data/sec_ha_storage_protect_fencing.html</a>, which configures SBD like this:<br><br>crm configure<br>property stonith-enabled="true"<br>property stonith-timeout="40s"<br>primitive stonith_sbd stonith:external/sbd op start interval="0" timeout="15" start-delay="10"<br>commit<br>quit<br><br></div><div>and I am trying to configure CentOS7 similarly.<br></div></div>
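<div><br></div><div>For reference, my rough pcs translation of that crm snippet (a sketch only: stonith:external/sbd does not exist on CentOS7, so the fence_sbd agent I rebuilt stands in for it, and the device path and resource name are just the ones from my setup):<br><br>pcs property set stonith-enabled=true<br>pcs property set stonith-timeout=40s<br>pcs stonith create stonith_sbd fence_sbd devices=/dev/sdb1 op monitor interval=60s<br><br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">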
<span class=""><br>
> a node will be fenced when I stop pacemaker and corosync with `pcs cluster<br>
> stop node-1` or just run `stonith_admin -F node-1`, but this is not the case.<br>
><br>
> As can be seen from the uptime output below, node-1 is not shut down by<br>
> `pcs cluster stop node-1` executed on itself.<br>
> I found some discussions on <a href="mailto:users@clusterlabs.org">users@clusterlabs.org</a> about whether a node<br>
> running an SBD resource can fence itself,<br>
> but the conclusion was not clear to me.<br>
<br>
</span>on RHEL and derivatives it can ONLY fence itself. the disk based<br>
posion pill isn't supported yet<br></blockquote><div><br></div><div>once it's supported on RHEL I'll be ready :)<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
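<div><br></div><div>In the meantime, my understanding of the watchdog-only ("a node can only fence itself") mode is the sketch below; the sysconfig key names come from the sbd package on my nodes, the watchdog device and timeout values are just the ones from my test cluster, and I have not verified this end to end:<br><br># /etc/sysconfig/sbd on every node (no shared disk needed in this mode):<br>#   SBD_WATCHDOG_DEV=/dev/watchdog<br>#   SBD_WATCHDOG_TIMEOUT=5<br>systemctl enable sbd    # sbd then starts and stops together with the cluster<br><br># let pacemaker rely on watchdog self-fencing (I already have these set, see `pcs property` below)<br>pcs property set stonith-enabled=true<br>pcs property set stonith-watchdog-timeout=10s<br><br># restart the cluster so sbd is picked up<br>pcs cluster stop --all && pcs cluster start --all<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">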
<span class=""><br>
><br>
> Question 3:<br>
> Nor is node-1 fenced by `stonith_admin -F node-1` executed on node-2,<br>
> even though<br>
> /var/log/messages on node-2 (the node currently running MyStonith) reports:<br>
> ...<br>
> notice: Operation 'off' [3309] (call 2 from stonith_admin.3288) for host<br>
> 'node-1' with device 'MyStonith' returned: 0 (OK)<br>
> ...<br>
> What is happening here?<br>
<br>
</span>have you tried looking at the sbd logs?<br>
is the watchdog device functioning correctly?<br>
<div><div class="h5"><br></div></div></blockquote><div><br></div><div>it turned out (suggested here <a href="http://clusterlabs.org/pipermail/users/2016-June/003355.html">http://clusterlabs.org/pipermail/users/2016-June/003355.html</a>) that the reason for node-1 not being fenced by <span class="">`stonith_admin -F node-1` executed on node-2<br>was the previously executed `pcs cluster stop node-1`. In my setup SBD seems integrated with corosync/pacemaker and the latter command stopped the sbd service on node-1.<br></span></div><div><span class="">Killing corosync on node-1 instead of </span><span class=""><span class=""> `pcs cluster stop node-1`</span> fences node-1 as expected:<br><br>[root at node-1 ~]# killall -15 corosync<br>Broadcast message from systemd-journald at node-1 (Sat 2016-06-25 21:55:07 EDT):<br>sbd[4761]: /dev/sdb1: emerg: do_exit: Rebooting system: off<br><br></span></div><div>I'm left with further questions: how to setup fence_sbd for the fenced node to shutdown instead of reboot?<br>Both action=off or mode=onoff action=off options passed to fence_sbd when creating the MyStonith resource result in a reboot.<br><br>[root at node-2 ~]# pcs stonith show MyStonith<br> Resource: MyStonith (class=stonith type=fence_sbd)<br> Attributes: devices=/dev/sdb1 power_timeout=21 action=off<br> Operations: monitor interval=60s (MyStonith-monitor-interval-60s)<br><br>[root@node-2 ~]# pcs status<br>Cluster name: mycluster<br>Last updated: Tue Jun 28 04:55:43 2016 Last change: Tue Jun 28 04:48:03 2016 by root via cibadmin on node-1<br>Stack: corosync<br>Current DC: node-3 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with quorum<br>3 nodes and 1 resource configured<br><br>Online: [ node-1 node-2 node-3 ]<br><br>Full list of resources:<br><br> MyStonith (stonith:fence_sbd): Started node-2<br><br>PCSD Status:<br> node-1: Online<br> node-2: Online<br> node-3: Online<br><br>Daemon Status:<br> corosync: active/disabled<br> pacemaker: active/disabled<br> pcsd: active/enabled<br><br></div><div>Starting from the above cluster state:<br></div><div>[root@node-2 ~]# stonith_admin -F node-1<br>results also in a reboot of node-1 instead of shutdown.<br><br>/var/log/messages after the last command show "reboot" on node-2<br>...<br>Jun 28 04:49:39 localhost stonith-ng[3081]: notice: Client
stonith_admin.3179.fbc038ee wants to fence (off) 'node-1' with device
'(any)'<br>Jun 28 04:49:39 localhost stonith-ng[3081]: notice:
Initiating remote operation off for node-1:
8aea4f12-538d-41ab-bf20-0c8b0f72e2a3 (0)<br>Jun 28 04:49:39 localhost stonith-ng[3081]: notice: watchdog can not fence (off) node-1: static-list<br>Jun 28 04:49:40 localhost stonith-ng[3081]: notice: MyStonith can fence (off) node-1: dynamic-list<br>Jun 28 04:49:40 localhost stonith-ng[3081]: notice: watchdog can not fence (off) node-1: static-list<br>Jun 28 04:49:44 localhost stonith-ng[3081]: notice: crm_update_peer_proc: Node node-1[1] - state is now lost (was member)<br>Jun 28 04:49:44 localhost stonith-ng[3081]: notice: Removing node-1/1 from the membership list<br>Jun 28 04:49:44 localhost stonith-ng[3081]: notice: Purged 1 peers with id=1 and/or uname=node-1 from the membership cache<br>Jun 28 04:49:45 localhost stonith-ng[3081]: notice: MyStonith can fence (reboot) node-1: dynamic-list<br>Jun 28 04:49:45 localhost stonith-ng[3081]: notice: watchdog can not fence (reboot) node-1: static-list<br>Jun 28 04:49:46 localhost stonith-ng[3081]: notice: Operation reboot of node-1 by node-3 for crmd.3063@node-3.36859c4e: OK<br>Jun
28 04:50:00 localhost stonith-ng[3081]: notice: Operation 'off' [3200]
(call 2 from stonith_admin.3179) for host 'node-1' with device
'MyStonith' returned: 0 (OK)<br>Jun 28 04:50:00 localhost
stonith-ng[3081]: notice: Operation off of node-1 by node-2 for
stonith_admin.3179@node-2.8aea4f12: OK<br>...<br><br><br></div><div>Another question (I think the question is valid also for a potential SUSE setup): What is the proper way of operating a cluster with SBD after node-1 was fenced?</div><div><br>[root at node-2 ~]# sbd -d /dev/sdb1 list<br>0 node-3 clear<br>1 node-2 clear<br>2 node-1 off node-2<br><br>I found that executing sbd watch on node-1 clears the SBD status:<br>[root at node-1 ~]# sbd -d /dev/sdb1 watch<br>[root at node-1 ~]# sbd -d /dev/sdb1 list<br>0 node-3 clear<br>1 node-2 clear<br>2 node-1 clear<br>Making sure that sbd is not running on node-1 (I can do that because node-1 is currently not a part of the cluster)<br>[root at node-1 ~]# killall -15 sbd<br></div><div>I have to kill sbd because it's integrated with corosync and corosync fails to start on node-1 with sbd already running.<br></div><div><br>I can now join node-1 to the cluster from node-2:<br>[root at node-2 ~]# pcs cluster start node-1<br><br><br></div><div>Marcin<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div class="h5">
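<div><br></div><div>Two follow-ups I still want to try, noted here for the record (both are untested guesses on my side, not something confirmed in this thread): requesting "off" through the generic pcmk fencing attributes rather than the agent's action= option, and clearing the slot on the shared disk with sbd's message command instead of starting `sbd watch` by hand:<br><br># untested: map reboot requests to "off" for this device only...<br>pcs stonith update MyStonith pcmk_reboot_action=off<br># ...or cluster-wide, for every fencing device<br>pcs property set stonith-action=off<br><br># clear node-1's slot on the shared disk after a fence, then check it<br>sbd -d /dev/sdb1 message node-1 clear<br>sbd -d /dev/sdb1 list<br><br></div><div>Marcin<br></div><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div class="h5">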
><br>
> Question 4 (for the future):<br>
> Assuming node-1 was fenced, what is the way of operating SBD?<br>
> `sbd list` now shows:<br>
> 0 node-3 clear<br>
> 1 node-1 off node-2<br>
> 2 node-2 clear<br>
> How to clear the status of node-1?<br>
><br>
> Question 5 (also for the future):<br>
> While the relation 'stonith-timeout = Timeout (msgwait) + 20%' presented<br>
> at<br>
> <a href="https://www.suse.com/documentation/sle_ha/book_sleha/data/sec_ha_storage_protect_fencing.html" rel="noreferrer" target="_blank">https://www.suse.com/documentation/sle_ha/book_sleha/data/sec_ha_storage_protect_fencing.html</a><br>
> is clearly described, I wonder about the relation of 'stonith-timeout'<br>
> to other timeouts like the 'monitor interval=60s' reported by `pcs stonith<br>
> show MyStonith`.<br>
><br>
> Here is how I configure the cluster and test it. The run.sh script is<br>
> attached.<br>
><br>
> $ sh -x run01.sh 2>&1 | tee run01.txt<br>
><br>
> with the result:<br>
><br>
> $ cat run01.txt<br>
><br>
> Each block below shows the executed ssh command and the result.<br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'pcs cluster auth -u hacluster -p password node-1<br>
> node-2 node-3'<br>
> node-1: Authorized<br>
> node-3: Authorized<br>
> node-2: Authorized<br>
><br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'pcs cluster setup --name mycluster node-1 node-2<br>
> node-3'<br>
> Shutting down pacemaker/corosync services...<br>
> Redirecting to /bin/systemctl stop pacemaker.service<br>
> Redirecting to /bin/systemctl stop corosync.service<br>
> Killing any remaining services...<br>
> Removing all cluster configuration files...<br>
> node-1: Succeeded<br>
> node-2: Succeeded<br>
> node-3: Succeeded<br>
> Synchronizing pcsd certificates on nodes node-1, node-2, node-3...<br>
> node-1: Success<br>
> node-3: Success<br>
> node-2: Success<br>
> Restaring pcsd on the nodes in order to reload the certificates...<br>
> node-1: Success<br>
> node-3: Success<br>
> node-2: Success<br>
><br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'pcs cluster start --all'<br>
> node-3: Starting Cluster...<br>
> node-2: Starting Cluster...<br>
> node-1: Starting Cluster...<br>
><br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'corosync-cfgtool -s'<br>
> Printing ring status.<br>
> Local node ID 1<br>
> RING ID 0<br>
> id = 192.168.10.11<br>
> status = ring 0 active with no faults<br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'pcs status corosync'<br>
> Membership information<br>
> ----------------------<br>
> Nodeid Votes Name<br>
> 1 1 node-1 (local)<br>
> 2 1 node-2<br>
> 3 1 node-3<br>
><br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'pcs status'<br>
> Cluster name: mycluster<br>
> WARNING: no stonith devices and stonith-enabled is not false<br>
> Last updated: Sat Jun 25 15:40:51 2016 Last change: Sat Jun 25<br>
> 15:40:33 2016 by hacluster via crmd on node-2<br>
> Stack: corosync<br>
> Current DC: node-2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with<br>
> quorum<br>
> 3 nodes and 0 resources configured<br>
> Online: [ node-1 node-2 node-3 ]<br>
> Full list of resources:<br>
> PCSD Status:<br>
> node-1: Online<br>
> node-2: Online<br>
> node-3: Online<br>
> Daemon Status:<br>
> corosync: active/disabled<br>
> pacemaker: active/disabled<br>
> pcsd: active/enabled<br>
><br>
><br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'sbd -d /dev/sdb1 list'<br>
> 0 node-3 clear<br>
> 1 node-2 clear<br>
> 2 node-1 clear<br>
><br>
><br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'sbd -d /dev/sdb1 dump'<br>
> ==Dumping header on disk /dev/sdb1<br>
> Header version : 2.1<br>
> UUID : 79f28167-a207-4f2a-a723-aa1c00bf1dee<br>
> Number of slots : 255<br>
> Sector size : 512<br>
> Timeout (watchdog) : 10<br>
> Timeout (allocate) : 2<br>
> Timeout (loop) : 1<br>
> Timeout (msgwait) : 20<br>
> ==Header on disk /dev/sdb1 is dumped<br>
><br>
><br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'pcs stonith list'<br>
> fence_sbd - Fence agent for sbd<br>
><br>
><br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'pcs stonith create MyStonith fence_sbd<br>
> devices=/dev/sdb1 power_timeout=21 action=off'<br>
> ssh node-1 -c sudo su - -c 'pcs property set stonith-enabled=true'<br>
> ssh node-1 -c sudo su - -c 'pcs property set stonith-timeout=24s'<br>
> ssh node-1 -c sudo su - -c 'pcs property'<br>
> Cluster Properties:<br>
> cluster-infrastructure: corosync<br>
> cluster-name: mycluster<br>
> dc-version: 1.1.13-10.el7_2.2-44eb2dd<br>
> have-watchdog: true<br>
> stonith-enabled: true<br>
> stonith-timeout: 24s<br>
> stonith-watchdog-timeout: 10s<br>
><br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'pcs stonith show MyStonith'<br>
> Resource: MyStonith (class=stonith type=fence_sbd)<br>
> Attributes: devices=/dev/sdb1 power_timeout=21 action=off<br>
> Operations: monitor interval=60s (MyStonith-monitor-interval-60s)<br>
><br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'pcs cluster stop node-1 '<br>
> node-1: Stopping Cluster (pacemaker)...<br>
> node-1: Stopping Cluster (corosync)...<br>
><br>
><br>
><br>
> ############################<br>
> ssh node-2 -c sudo su - -c 'pcs status'<br>
> Cluster name: mycluster<br>
> Last updated: Sat Jun 25 15:42:29 2016 Last change: Sat Jun 25<br>
> 15:41:09 2016 by root via cibadmin on node-1<br>
> Stack: corosync<br>
> Current DC: node-2 (version 1.1.13-10.el7_2.2-44eb2dd) - partition with<br>
> quorum<br>
> 3 nodes and 1 resource configured<br>
> Online: [ node-2 node-3 ]<br>
> OFFLINE: [ node-1 ]<br>
> Full list of resources:<br>
> MyStonith (stonith:fence_sbd): Started node-2<br>
> PCSD Status:<br>
> node-1: Online<br>
> node-2: Online<br>
> node-3: Online<br>
> Daemon Status:<br>
> corosync: active/disabled<br>
> pacemaker: active/disabled<br>
> pcsd: active/enabled<br>
><br>
><br>
><br>
> ############################<br>
> ssh node-2 -c sudo su - -c 'stonith_admin -F node-1 '<br>
><br>
><br>
><br>
> ############################<br>
> ssh node-2 -c sudo su - -c 'grep stonith-ng /var/log/messages'<br>
> Jun 25 15:40:11 localhost stonith-ng[3102]: notice: Additional logging<br>
> available in /var/log/cluster/corosync.log<br>
> Jun 25 15:40:11 localhost stonith-ng[3102]: notice: Connecting to cluster<br>
> infrastructure: corosync<br>
> Jun 25 15:40:11 localhost stonith-ng[3102]: notice: crm_update_peer_proc:<br>
> Node node-2[2] - state is now member (was (null))<br>
> Jun 25 15:40:12 localhost stonith-ng[3102]: notice: Watching for stonith<br>
> topology changes<br>
> Jun 25 15:40:12 localhost stonith-ng[3102]: notice: Added 'watchdog' to the<br>
> device list (1 active devices)<br>
> Jun 25 15:40:12 localhost stonith-ng[3102]: notice: crm_update_peer_proc:<br>
> Node node-3[3] - state is now member (was (null))<br>
> Jun 25 15:40:12 localhost stonith-ng[3102]: notice: crm_update_peer_proc:<br>
> Node node-1[1] - state is now member (was (null))<br>
> Jun 25 15:40:12 localhost stonith-ng[3102]: notice: New watchdog timeout<br>
> 10s (was 0s)<br>
> Jun 25 15:41:03 localhost stonith-ng[3102]: notice: Relying on watchdog<br>
> integration for fencing<br>
> Jun 25 15:41:04 localhost stonith-ng[3102]: notice: Added 'MyStonith' to<br>
> the device list (2 active devices)<br>
> Jun 25 15:41:54 localhost stonith-ng[3102]: notice: crm_update_peer_proc:<br>
> Node node-1[1] - state is now lost (was member)<br>
> Jun 25 15:41:54 localhost stonith-ng[3102]: notice: Removing node-1/1 from<br>
> the membership list<br>
> Jun 25 15:41:54 localhost stonith-ng[3102]: notice: Purged 1 peers with<br>
> id=1 and/or uname=node-1 from the membership cache<br>
> Jun 25 15:42:33 localhost stonith-ng[3102]: notice: Client<br>
> stonith_admin.3288.eb400ac9 wants to fence (off) 'node-1' with device<br>
> '(any)'<br>
> Jun 25 15:42:33 localhost stonith-ng[3102]: notice: Initiating remote<br>
> operation off for node-1: 848cd1e9-55e4-4abc-8d7a-3762eaaf9ab4 (0)<br>
> Jun 25 15:42:33 localhost stonith-ng[3102]: notice: watchdog can not fence<br>
> (off) node-1: static-list<br>
> Jun 25 15:42:33 localhost stonith-ng[3102]: notice: MyStonith can fence<br>
> (off) node-1: dynamic-list<br>
> Jun 25 15:42:33 localhost stonith-ng[3102]: notice: watchdog can not fence<br>
> (off) node-1: static-list<br>
> Jun 25 15:42:54 localhost stonith-ng[3102]: notice: Operation 'off' [3309]<br>
> (call 2 from stonith_admin.3288) for host 'node-1' with device 'MyStonith'<br>
> returned: 0 (OK)<br>
> Jun 25 15:42:54 localhost stonith-ng[3102]: notice: Operation off of node-1<br>
> by node-2 for stonith_admin.3288@node-2.848cd1e9: OK<br>
> Jun 25 15:42:54 localhost stonith-ng[3102]: warning: new_event_notification<br>
> <a href="tel:%283102-3288-12" value="+13102328812">(3102-3288-12</a>): Broken pipe (32)<br>
> Jun 25 15:42:54 localhost stonith-ng[3102]: warning: st_notify_fence<br>
> notification of client stonith_admin.3288.eb400a failed: Broken pipe (-32)<br>
><br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'sbd -d /dev/sdb1 list'<br>
> 0 node-3 clear<br>
> 1 node-2 clear<br>
> 2 node-1 off node-2<br>
><br>
><br>
><br>
> ############################<br>
> ssh node-1 -c sudo su - -c 'uptime'<br>
> 15:43:31 up 21 min, 2 users, load average: 0.25, 0.18, 0.11<br>
><br>
><br>
><br>
> Cheers,<br>
><br>
> Marcin<br>
><br>
><br>
</div></div>
<br>
_______________________________________________<br>
Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
<a href="http://clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://clusterlabs.org/mailman/listinfo/users</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
</blockquote>
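<div><br></div><div>PS. Regarding Question 5, my own reading of the 'stonith-timeout = Timeout (msgwait) + 20%' rule, using the values from the `sbd dump` output above (just my arithmetic, corrections welcome):<br><br>Timeout (msgwait) = 20s<br>stonith-timeout   = 20s * 1.2 = 24s<br><br>which is why I set stonith-timeout=24s. I assume the 'monitor interval=60s' on MyStonith only controls how often the fence device itself is checked and does not enter this calculation, but I'd appreciate confirmation.<br></div></div><br></div></div>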