Hi,

After applying the last changes, the huge amount of errors related to the
vdicone01 VM no longer appears.

I apologize if my question is perhaps a simple one, but can you explain the
difference between the following two commands?

pcs resource op remove vm-vdicone01 monitor role=Stopped
pcs resource op remove vm-vdicone01 stop interval=0s timeout=90

After executing both commands I have noticed that sometimes (not always)
virt-manager shows vdicone01 started on hypervisor1 and stopped on
hypervisor2. I can delete it from hypervisor2 (without deleting its
storage), but it reappears. Could this behaviour be caused by those
commands?

Thanks in advance!

2017-02-17 8:33 GMT+01:00 Ulrich Windl <Ulrich.Windl@rz.uni-regensburg.de>:
>>> Oscar Segarra <oscar.segarra@gmail.com> wrote on 16.02.2017 at 13:55 in
message <CAJq8taHh1iDd62b-ApKVVZrzerh5cUoNNHGSJ4Z_C-C+waUM_w@mail.gmail.com>:
> Hi Klaus,
>
> Thanks a lot, I will try to delete the stop monitor.
>
> Nevertheless, I have 6 domains configured exactly the same... Is there any
> reason why just this domain has this behaviour?

Some years ago I was playing with NPIV, and it worked perfectly for one and
for several VMs. However, when multiple VMs were started or stopped at the
same time (thus NPIV being added/removed concurrently), I had "interesting"
failures due to concurrency, even a kernel lockup (which has been fixed in
the meantime). So most likely "something is not correct".
I know it doesn't help you the way you would like, but that's how life is.

Regards,
Ulrich

>
> Thanks a lot.
>
> 2017-02-16 11:12 GMT+01:00 Klaus Wenninger <kwenning@redhat.com>:
>
>> On 02/16/2017 11:02 AM, Oscar Segarra wrote:
>> > Hi Klaus,
>> >
>> > What is your proposal to fix this behaviour?
>>
>> First, you can try to remove the monitor op for role=Stopped.
>> The startup probing will then probably still fail, but in that case
>> the behaviour is different.
>> Startup probing can be disabled globally via the cluster property
>> enable-startup-probes, which defaults to true.
>> But be aware that the cluster then wouldn't be able to react
>> properly if services are already up when Pacemaker is starting.
>> It should be possible to disable probing on a per-resource or
>> per-node basis as well, IIRC, but I can't tell you offhand how
>> that worked - there was a discussion on the list a few weeks ago.
>>
>> Regards,
>> Klaus
>>
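
(A minimal sketch of the two approaches described above, using the resource
name from this thread; exact pcs syntax may vary between versions:)

    # Remove the monitor operation defined for role=Stopped:
    pcs resource op remove vm-vdicone01 monitor role=Stopped

    # Or disable startup probing cluster-wide (it defaults to true; the
    # cluster then cannot detect services already running when it starts):
    pcs property set enable-startup-probes=false
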
>> >
>> > Thanks a lot!
>> >
>> >
>> > On 16 Feb 2017 at 10:57, "Klaus Wenninger" <kwenning@redhat.com>
>> > wrote:
>> >
>> > On 02/16/2017 09:05 AM, Oscar Segarra wrote:
>> > > Hi,
>> > >
>> > > In my environment I have deployed 5 VirtualDomains, as one can
>> > > see below:
>> > > [root@vdicnode01 ~]# pcs status
>> > > Cluster name: vdic-cluster
>> > > Stack: corosync
>> > > Current DC: vdicnode01-priv (version 1.1.15-11.el7_3.2-e174ec8) -
>> > > partition with quorum
>> > > Last updated: Thu Feb 16 09:02:53 2017
>> > > Last change: Thu Feb 16 08:20:53 2017 by root via crm_attribute on
>> > > vdicnode02-priv
>> > >
>> > > 2 nodes and 14 resources configured: 5 resources DISABLED and 0
>> > > BLOCKED from being started due to failures
>> > >
>> > > Online: [ vdicnode01-priv vdicnode02-priv ]
>> > >
>> > > Full list of resources:
>> > >
>> > >  nfs-vdic-mgmt-vm-vip (ocf::heartbeat:IPaddr): Started vdicnode01-priv
>> > >  Clone Set: nfs_setup-clone [nfs_setup]
>> > >      Started: [ vdicnode01-priv vdicnode02-priv ]
>> > >  Clone Set: nfs-mon-clone [nfs-mon]
>> > >      Started: [ vdicnode01-priv vdicnode02-priv ]
>> > >  Clone Set: nfs-grace-clone [nfs-grace]
>> > >      Started: [ vdicnode01-priv vdicnode02-priv ]
>> > >  vm-vdicone01 (ocf::heartbeat:VirtualDomain): FAILED (disabled)
>> > >      [ vdicnode02-priv vdicnode01-priv ]
>> > >  vm-vdicsunstone01 (ocf::heartbeat:VirtualDomain): FAILED
>> > >      vdicnode01-priv (disabled)
>> > >  vm-vdicdb01 (ocf::heartbeat:VirtualDomain): FAILED (disabled)
>> > >      [ vdicnode02-priv vdicnode01-priv ]
>> > >  vm-vdicudsserver (ocf::heartbeat:VirtualDomain): FAILED (disabled)
>> > >      [ vdicnode02-priv vdicnode01-priv ]
>> > >  vm-vdicudstuneler (ocf::heartbeat:VirtualDomain): FAILED
>> > >      vdicnode01-priv (disabled)
>> > >  Clone Set: nfs-vdic-images-vip-clone [nfs-vdic-images-vip]
>> > >      Stopped: [ vdicnode01-priv vdicnode02-priv ]
>> > >
>> > > Failed Actions:
>> > > * vm-vdicone01_monitor_20000 on vdicnode02-priv 'not installed' (5):
>> > >     call=2322, status=complete, exitreason='Configuration file
>> > >     /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not
>> > >     readable.',
>> > >     last-rc-change='Thu Feb 16 09:02:07 2017', queued=0ms, exec=21ms
>> > > * vm-vdicsunstone01_monitor_20000 on vdicnode02-priv 'not installed'
>> > >     (5): call=2310, status=complete, exitreason='Configuration file
>> > >     /mnt/nfs-vdic-mgmt-vm/vdicsunstone01.xml does not exist or is not
>> > >     readable.',
>> > >     last-rc-change='Thu Feb 16 09:02:07 2017', queued=0ms, exec=37ms
>> > > * vm-vdicdb01_monitor_20000 on vdicnode02-priv 'not installed' (5):
>> > >     call=2320, status=complete, exitreason='Configuration file
>> > >     /mnt/nfs-vdic-mgmt-vm/vdicdb01.xml does not exist or is not
>> > >     readable.',
>> > >     last-rc-change='Thu Feb 16 09:02:07 2017', queued=0ms, exec=35ms
>> > > * vm-vdicudsserver_monitor_20000 on vdicnode02-priv 'not installed'
>> > >     (5): call=2321, status=complete, exitreason='Configuration file
>> > >     /mnt/nfs-vdic-mgmt-vm/vdicudsserver.xml does not exist or is not
>> > >     readable.',
>> > >     last-rc-change='Thu Feb 16 09:02:07 2017', queued=0ms, exec=42ms
>> > > * vm-vdicudstuneler_monitor_20000 on vdicnode01-priv 'not installed'
>> > >     (5): call=1987183, status=complete, exitreason='Configuration file
>> > >     /mnt/nfs-vdic-mgmt-vm/vdicudstuneler.xml does not exist or is not
>> > >     readable.',
>> > >     last-rc-change='Thu Feb 16 04:00:25 2017', queued=0ms, exec=30ms
>> > > * vm-vdicdb01_monitor_20000 on vdicnode01-priv 'not installed' (5):
>> > >     call=2550049, status=complete, exitreason='Configuration file
>> > >     /mnt/nfs-vdic-mgmt-vm/vdicdb01.xml does not exist or is not
>> > >     readable.',
>> > >     last-rc-change='Thu Feb 16 08:13:37 2017', queued=0ms, exec=44ms
>> > > * nfs-mon_monitor_10000 on vdicnode01-priv 'unknown error' (1):
>> > >     call=1984009, status=Timed Out, exitreason='none',
>> > >     last-rc-change='Thu Feb 16 04:24:30 2017', queued=0ms, exec=0ms
>> > > * vm-vdicsunstone01_monitor_20000 on vdicnode01-priv 'not installed'
>> > >     (5): call=2552050, status=complete, exitreason='Configuration file
>> > >     /mnt/nfs-vdic-mgmt-vm/vdicsunstone01.xml does not exist or is not
>> > >     readable.',
>> > >     last-rc-change='Thu Feb 16 08:14:07 2017', queued=0ms, exec=22ms
>> > > * vm-vdicone01_monitor_20000 on vdicnode01-priv 'not installed' (5):
>> > >     call=2620052, status=complete, exitreason='Configuration file
>> > >     /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist or is not
>> > >     readable.',
>> > >     last-rc-change='Thu Feb 16 09:02:53 2017', queued=0ms, exec=45ms
>> > > * vm-vdicudsserver_monitor_20000 on vdicnode01-priv 'not installed'
>> > >     (5): call=2550052, status=complete, exitreason='Configuration file
>> > >     /mnt/nfs-vdic-mgmt-vm/vdicudsserver.xml does not exist or is not
>> > >     readable.',
>> > >     last-rc-change='Thu Feb 16 08:13:37 2017', queued=0ms, exec=48ms
>> > >
>> > >
>> > > All the VirtualDomain resources are configured identically:
>> > >
>> > > [root@vdicnode01 cluster]# pcs resource show vm-vdicone01
>> > >  Resource: vm-vdicone01 (class=ocf provider=heartbeat
>> > >      type=VirtualDomain)
>> > >   Attributes: hypervisor=qemu:///system
>> > >       config=/mnt/nfs-vdic-mgmt-vm/vdicone01.xml
>> > >       migration_network_suffix=tcp:// migration_transport=ssh
>> > >   Meta Attrs: allow-migrate=true target-role=Stopped
>> > >   Utilization: cpu=1 hv_memory=512
>> > >   Operations: start interval=0s timeout=90
>> > >                   (vm-vdicone01-start-interval-0s)
>> > >               stop interval=0s timeout=90
>> > >                   (vm-vdicone01-stop-interval-0s)
>> > >               monitor interval=20s role=Stopped
>> > >                   (vm-vdicone01-monitor-interval-20s)
>> > >               monitor interval=30s
>> > >                   (vm-vdicone01-monitor-interval-30s)
>> > > [root@vdicnode01 cluster]# pcs resource show vm-vdicdb01
>> > >  Resource: vm-vdicdb01 (class=ocf provider=heartbeat
>> > >      type=VirtualDomain)
>> > >   Attributes: hypervisor=qemu:///system
>> > >       config=/mnt/nfs-vdic-mgmt-vm/vdicdb01.xml
>> > >       migration_network_suffix=tcp:// migration_transport=ssh
>> > >   Meta Attrs: allow-migrate=true target-role=Stopped
>> > >   Utilization: cpu=1 hv_memory=512
>> > >   Operations: start interval=0s timeout=90
>> > >                   (vm-vdicdb01-start-interval-0s)
>> > >               stop interval=0s timeout=90
>> > >                   (vm-vdicdb01-stop-interval-0s)
>> > >               monitor interval=20s role=Stopped
>> > >                   (vm-vdicdb01-monitor-interval-20s)
>> > >               monitor interval=30s
>> > >                   (vm-vdicdb01-monitor-interval-30s)
>> > >
>> > >
>> > >
>> > > Nevertheless, one of the virtual domains is logging heavily and
>> > > filling up my hard disk:
>> > >
>> > > VirtualDomain(vm-vdicone01)[116359]: 2017/02/16_08:52:27 INFO:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable,
>> > > resource considered stopped.
>> > > VirtualDomain(vm-vdicone01)[116401]: 2017/02/16_08:52:27 ERROR:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist
>> > > or is not readable.
>> > > VirtualDomain(vm-vdicone01)[116423]: 2017/02/16_08:52:27 INFO:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable,
>> > > resource considered stopped.
>> > > VirtualDomain(vm-vdicone01)[116444]: 2017/02/16_08:52:27 ERROR:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist
>> > > or is not readable.
>> > > VirtualDomain(vm-vdicone01)[116466]: 2017/02/16_08:52:27 INFO:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable,
>> > > resource considered stopped.
>> > > VirtualDomain(vm-vdicone01)[116487]: 2017/02/16_08:52:27 ERROR:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist
>> > > or is not readable.
>> > > VirtualDomain(vm-vdicone01)[116509]: 2017/02/16_08:52:27 INFO:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable,
>> > > resource considered stopped.
>> > > VirtualDomain(vm-vdicone01)[116530]: 2017/02/16_08:52:27 ERROR:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist
>> > > or is not readable.
>> > > VirtualDomain(vm-vdicone01)[116552]: 2017/02/16_08:52:27 INFO:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable,
>> > > resource considered stopped.
>> > > VirtualDomain(vm-vdicone01)[116573]: 2017/02/16_08:52:27 ERROR:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist
>> > > or is not readable.
>> > > VirtualDomain(vm-vdicone01)[116595]: 2017/02/16_08:52:27 INFO:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable,
>> > > resource considered stopped.
>> > > VirtualDomain(vm-vdicone01)[116616]: 2017/02/16_08:52:27 ERROR:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist
>> > > or is not readable.
>> > > VirtualDomain(vm-vdicone01)[116638]: 2017/02/16_08:52:27 INFO:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable,
>> > > resource considered stopped.
>> > > VirtualDomain(vm-vdicone01)[116659]: 2017/02/16_08:52:27 ERROR:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist
>> > > or is not readable.
>> > > VirtualDomain(vm-vdicone01)[116681]: 2017/02/16_08:52:27 INFO:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml not readable,
>> > > resource considered stopped.
>> > > VirtualDomain(vm-vdicone01)[116702]: 2017/02/16_08:52:27 ERROR:
>> > > Configuration file /mnt/nfs-vdic-mgmt-vm/vdicone01.xml does not exist
>> > > or is not readable.
>> > > [root@vdicnode01 cluster]# pcs status
>> > >
>> > >
>> > > Note: is the error normal, given that I have not mounted the NFS
>> > > resource /mnt/nfs-vdic-mgmt-vm/vdicone01.xml yet?
>> >
>> > Well, that is probably the explanation already:
>> > The resource should be stopped, but the config file is not available,
>> > and the resource agent needs the config file to verify that the
>> > resource really is stopped.
>> > So the probe is failing, and because you have a monitor op for
>> > role="Stopped", it is retried over and over again.
>> >
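
(A hypothetical illustration only - the Filesystem resource name and device
below are placeholders, not taken from this thread. One way to avoid probing
against a missing config file is to manage the NFS mount as a cluster
resource and order the VMs after it:)

    # Manage the mount that holds the VM config files on both nodes:
    pcs resource create nfs-mgmt-vm-mount ocf:heartbeat:Filesystem \
        device=<nfs-server>:/vdic-mgmt-vm directory=/mnt/nfs-vdic-mgmt-vm \
        fstype=nfs --clone

    # Start the VM only after its config files are reachable:
    pcs constraint order start nfs-mgmt-vm-mount-clone then start vm-vdicone01
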
>> > >
>> > > Is there any explanation for this heavy logging?
>> > >
>> > > Thanks a lot!
>> > >

_______________________________________________
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org