<div dir="ltr">Hi,<div><br></div><div>I attach cluster configuration:</div><div><br></div>Note that "migration_network_suffix=tcp://" is correct in my environment as I have edited the VirtualDomain resource agent in order to build the correct url "tcp://vdicnode01-priv"<div><br></div><div><div><font face="monospace, monospace">mk_migrateuri() {</font></div><div><font face="monospace, monospace"> local target_node</font></div><div><font face="monospace, monospace"> local migrate_target</font></div><div><font face="monospace, monospace"> local hypervisor</font></div><div><font face="monospace, monospace"><br></font></div><div><font face="monospace, monospace"> target_node="$OCF_RESKEY_CRM_meta_migrate_target"</font></div><div><font face="monospace, monospace"><br></font></div><div><font face="monospace, monospace"> # A typical migration URI via a special migration network looks</font></div><div><font face="monospace, monospace"> # like "tcp://bar-mig:49152". The port would be randomly chosen</font></div><div><font face="monospace, monospace"> # by libvirt from the range 49152-49215 if omitted, at least since</font></div><div><font face="monospace, monospace"> # version 0.7.4 ...</font></div><div><font face="monospace, monospace"> if [ -n "${OCF_RESKEY_migration_network_suffix}" ]; then</font></div><div><font face="monospace, monospace"> hypervisor="${OCF_RESKEY_hypervisor%%[+:]*}"</font></div><div><font face="monospace, monospace"> # Hostname might be a FQDN</font></div><div><font face="monospace, monospace"> <font color="#ff0000"> #migrate_target=$(echo ${target_node} | sed -e "s,^\([^.]\+\),\1${OCF_RESKEY_migration_network_suffix},")</font></font></div><div><font face="monospace, monospace"> <font color="#38761d">migrate_target=${target_node}</font></font></div><div><br></div><div>My cluster configuration:<br><br><div style="font-family:monospace,monospace">[root@vdicnode01 ~]# pcs config</div><div style="font-family:monospace,monospace">Cluster Name: vdic-cluster</div><div style="font-family:monospace,monospace">Corosync Nodes:</div><div style="font-family:monospace,monospace"> vdicnode01-priv vdicnode02-priv</div><div style="font-family:monospace,monospace">Pacemaker Nodes:</div><div style="font-family:monospace,monospace"> vdicnode01-priv vdicnode02-priv</div><div style="font-family:monospace,monospace"><br></div><div style="font-family:monospace,monospace">Resources:</div><div style="font-family:monospace,monospace"> Resource: nfs-vdic-mgmt-vm-vip (class=ocf provider=heartbeat type=IPaddr)</div><div style="font-family:monospace,monospace"> Attributes: ip=192.168.100.200 cidr_netmask=24</div><div style="font-family:monospace,monospace"> Operations: start interval=0s timeout=20s (nfs-vdic-mgmt-vm-vip-start-interval-0s)</div><div style="font-family:monospace,monospace"> stop interval=0s timeout=20s (nfs-vdic-mgmt-vm-vip-stop-interval-0s)</div><div style="font-family:monospace,monospace"> monitor interval=10s (nfs-vdic-mgmt-vm-vip-monitor-interval-10s)</div><div style="font-family:monospace,monospace"> Resource: nfs-vdic-images-vip (class=ocf provider=heartbeat type=IPaddr)</div><div style="font-family:monospace,monospace"> Attributes: ip=192.168.100.201 cidr_netmask=24</div><div style="font-family:monospace,monospace"> Operations: start interval=0s timeout=20s (nfs-vdic-images-vip-start-interval-0s)</div><div style="font-family:monospace,monospace"> stop interval=0s timeout=20s (nfs-vdic-images-vip-stop-interval-0s)</div><div style="font-family:monospace,monospace"> monitor interval=10s 
My cluster configuration:

[root@vdicnode01 ~]# pcs config
Cluster Name: vdic-cluster
Corosync Nodes:
 vdicnode01-priv vdicnode02-priv
Pacemaker Nodes:
 vdicnode01-priv vdicnode02-priv

Resources:
 Resource: nfs-vdic-mgmt-vm-vip (class=ocf provider=heartbeat type=IPaddr)
  Attributes: ip=192.168.100.200 cidr_netmask=24
  Operations: start interval=0s timeout=20s (nfs-vdic-mgmt-vm-vip-start-interval-0s)
              stop interval=0s timeout=20s (nfs-vdic-mgmt-vm-vip-stop-interval-0s)
              monitor interval=10s (nfs-vdic-mgmt-vm-vip-monitor-interval-10s)
 Resource: nfs-vdic-images-vip (class=ocf provider=heartbeat type=IPaddr)
  Attributes: ip=192.168.100.201 cidr_netmask=24
  Operations: start interval=0s timeout=20s (nfs-vdic-images-vip-start-interval-0s)
              stop interval=0s timeout=20s (nfs-vdic-images-vip-stop-interval-0s)
              monitor interval=10s (nfs-vdic-images-vip-monitor-interval-10s)
 Clone: nfs_setup-clone
  Resource: nfs_setup (class=ocf provider=heartbeat type=ganesha_nfsd)
   Attributes: ha_vol_mnt=/var/run/gluster/shared_storage
   Operations: start interval=0s timeout=5s (nfs_setup-start-interval-0s)
               stop interval=0s timeout=5s (nfs_setup-stop-interval-0s)
               monitor interval=0 timeout=5s (nfs_setup-monitor-interval-0)
 Clone: nfs-mon-clone
  Resource: nfs-mon (class=ocf provider=heartbeat type=ganesha_mon)
   Operations: start interval=0s timeout=40s (nfs-mon-start-interval-0s)
               stop interval=0s timeout=40s (nfs-mon-stop-interval-0s)
               monitor interval=10s timeout=10s (nfs-mon-monitor-interval-10s)
 Clone: nfs-grace-clone
  Meta Attrs: notify=true
  Resource: nfs-grace (class=ocf provider=heartbeat type=ganesha_grace)
   Meta Attrs: notify=true
   Operations: start interval=0s timeout=40s (nfs-grace-start-interval-0s)
               stop interval=0s timeout=40s (nfs-grace-stop-interval-0s)
               monitor interval=5s timeout=10s (nfs-grace-monitor-interval-5s)
 Resource: vm-vdicone01 (class=ocf provider=heartbeat type=VirtualDomain)
  Attributes: hypervisor=qemu:///system config=/mnt/nfs-vdic-mgmt-vm/vdicone01.xml migration_transport=ssh migration_network_suffix=tcp://
  Meta Attrs: allow-migrate=true
  Utilization: cpu=1 hv_memory=512
  Operations: start interval=0s timeout=90 (vm-vdicone01-start-interval-0s)
              stop interval=0s timeout=90 (vm-vdicone01-stop-interval-0s)
              monitor interval=10 timeout=30 (vm-vdicone01-monitor-interval-10)
 Resource: vm-vdicdb01 (class=ocf provider=heartbeat type=VirtualDomain)
  Attributes: hypervisor=qemu:///system config=/mnt/nfs-vdic-mgmt-vm/vdicdb01.xml migration_transport=ssh migration_network_suffix=tcp://
  Meta Attrs: allow-migrate=true
  Utilization: cpu=1 hv_memory=512
  Operations: start interval=0s timeout=90 (vm-vdicdb01-start-interval-0s)
              stop interval=0s timeout=90 (vm-vdicdb01-stop-interval-0s)
              monitor interval=10 timeout=30 (vm-vdicdb01-monitor-interval-10)
 Resource: vm-vdicdb02 (class=ocf provider=heartbeat type=VirtualDomain)
  Attributes: hypervisor=qemu:///system config=/mnt/nfs-vdic-mgmt-vm/vdicdb02.xml migration_network_suffix=tcp:// migration_transport=ssh
  Meta Attrs: allow-migrate=true
  Utilization: cpu=1 hv_memory=512
  Operations: start interval=0s timeout=90 (vm-vdicdb02-start-interval-0s)
              stop interval=0s timeout=90 (vm-vdicdb02-stop-interval-0s)
              monitor interval=10 timeout=30 (vm-vdicdb02-monitor-interval-10)
 Resource: vm-vdicdb03 (class=ocf provider=heartbeat type=VirtualDomain)
  Attributes: hypervisor=qemu:///system config=/mnt/nfs-vdic-mgmt-vm/vdicdb03.xml migration_network_suffix=tcp:// migration_transport=ssh
  Meta Attrs: allow-migrate=true
  Utilization: cpu=1 hv_memory=512
  Operations: start interval=0s timeout=90 (vm-vdicdb03-start-interval-0s)
              stop interval=0s timeout=90 (vm-vdicdb03-stop-interval-0s)
              monitor interval=10 timeout=30 (vm-vdicdb03-monitor-interval-10)

Stonith Devices:
Fencing Levels:

Location Constraints:
  Resource: nfs-grace-clone
    Constraint: location-nfs-grace-clone
      Rule: score=-INFINITY (id:location-nfs-grace-clone-rule)
        Expression: grace-active ne 1 (id:location-nfs-grace-clone-rule-expr)
  Resource: nfs-vdic-images-vip
    Constraint: location-nfs-vdic-images-vip
      Rule: score=-INFINITY (id:location-nfs-vdic-images-vip-rule)
        Expression: ganesha-active ne 1 (id:location-nfs-vdic-images-vip-rule-expr)
  Resource: nfs-vdic-mgmt-vm-vip
    Constraint: location-nfs-vdic-mgmt-vm-vip
      Rule: score=-INFINITY (id:location-nfs-vdic-mgmt-vm-vip-rule)
        Expression: ganesha-active ne 1 (id:location-nfs-vdic-mgmt-vm-vip-rule-expr)
Ordering Constraints:
  start vm-vdicdb01 then start vm-vdicone01 (kind:Mandatory) (id:order-vm-vdicdb01-vm-vdicone01-mandatory)
  start vm-vdicdb02 then start vm-vdicone01 (kind:Mandatory) (id:order-vm-vdicdb02-vm-vdicone01-mandatory)
  start vm-vdicdb03 then start vm-vdicone01 (kind:Mandatory) (id:order-vm-vdicdb03-vm-vdicone01-mandatory)
Colocation Constraints:
  nfs-vdic-mgmt-vm-vip with nfs-vdic-images-vip (score:-1) (id:colocation-nfs-vdic-mgmt-vm-vip-nfs-vdic-images-vip-INFINITY)
  vm-vdicdb03 with vm-vdicdb02 (score:-100) (id:colocation-vm-vdicdb03-vm-vdicdb02--100)
  vm-vdicdb01 with vm-vdicdb03 (score:-100) (id:colocation-vm-vdicdb01-vm-vdicdb03--100)
  vm-vdicdb01 with vm-vdicdb02 (score:-100) (id:colocation-vm-vdicdb01-vm-vdicdb02--100)
  vm-vdicone01 with vm-vdicdb01 (score:-10) (id:colocation-vm-vdicone01-vm-vdicdb01--10)
  vm-vdicone01 with vm-vdicdb02 (score:-10) (id:colocation-vm-vdicone01-vm-vdicdb02--10)
  vm-vdicone01 with vm-vdicdb03 (score:-10) (id:colocation-vm-vdicone01-vm-vdicdb03--10)
Ticket Constraints:

Alerts:
 No alerts defined

Resources Defaults:
 No defaults set
Operations Defaults:
 No defaults set

Cluster Properties:
 cluster-infrastructure: corosync
 cluster-name: vdic-cluster
 dc-version: 1.1.15-11.el7_3.2-e174ec8
 have-watchdog: false
 last-lrm-refresh: 1484610370
 start-failure-is-fatal: false
 stonith-enabled: false
Node Attributes:
 vdicnode01-priv: grace-active=1
 vdicnode02-priv: grace-active=1

Quorum:
  Options:
[root@vdicnode01 ~]#
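For reference, the VirtualDomain resources and the negative-score colocations
above were created with commands roughly like the following (just a sketch of
the equivalent pcs calls for one guest; exact syntax depends on the pcs version):

# Sketch of how vm-vdicdb01 and one of its colocation constraints could be
# defined (pcs 0.9.x on CentOS 7); the other guests follow the same pattern.
pcs resource create vm-vdicdb01 ocf:heartbeat:VirtualDomain \
    hypervisor="qemu:///system" \
    config="/mnt/nfs-vdic-mgmt-vm/vdicdb01.xml" \
    migration_transport=ssh migration_network_suffix="tcp://" \
    op start timeout=90 op stop timeout=90 op monitor interval=10 timeout=30 \
    meta allow-migrate=true
pcs resource utilization vm-vdicdb01 cpu=1 hv_memory=512

# Negative score so the database guests prefer to run on different nodes.
pcs constraint colocation add vm-vdicdb01 with vm-vdicdb02 -100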
[root@vdicnode01 ~]# pcs status
Cluster name: vdic-cluster
Stack: corosync
Current DC: vdicnode02-priv (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Tue Jan 17 12:25:27 2017          Last change: Tue Jan 17 12:25:09 2017 by root via crm_resource on vdicnode01-priv

2 nodes and 12 resources configured

Online: [ vdicnode01-priv vdicnode02-priv ]

Full list of resources:

 nfs-vdic-mgmt-vm-vip   (ocf::heartbeat:IPaddr):        Started vdicnode02-priv
 nfs-vdic-images-vip    (ocf::heartbeat:IPaddr):        Started vdicnode01-priv
 Clone Set: nfs_setup-clone [nfs_setup]
     Started: [ vdicnode01-priv vdicnode02-priv ]
 Clone Set: nfs-mon-clone [nfs-mon]
     Started: [ vdicnode01-priv vdicnode02-priv ]
 Clone Set: nfs-grace-clone [nfs-grace]
     Started: [ vdicnode01-priv vdicnode02-priv ]
 vm-vdicone01   (ocf::heartbeat:VirtualDomain): Started vdicnode01-priv
 vm-vdicdb01    (ocf::heartbeat:VirtualDomain): Started vdicnode02-priv
 vm-vdicdb02    (ocf::heartbeat:VirtualDomain): Started vdicnode01-priv
 vm-vdicdb03    (ocf::heartbeat:VirtualDomain): Started vdicnode02-priv

Daemon Status:
  corosync: active/enabled
  pacemaker: active/enabled
  pcsd: active/enabled
[root@vdicnode01 ~]#


-------------

[root@vdicnode01 ~]# pcs property list --all
Cluster Properties:
 batch-limit: 0
 cluster-delay: 60s
 cluster-infrastructure: corosync
 cluster-name: vdic-cluster
 cluster-recheck-interval: 15min
 concurrent-fencing: false
 crmd-finalization-timeout: 30min
 crmd-integration-timeout: 3min
 crmd-transition-delay: 0s
 dc-deadtime: 20s
 dc-version: 1.1.15-11.el7_3.2-e174ec8
 default-action-timeout: 20s
 default-resource-stickiness: 0
 election-timeout: 2min
 enable-acl: false
 enable-startup-probes: true
 have-watchdog: false
 is-managed-default: true
 last-lrm-refresh: 1484610370
 load-threshold: 80%
 maintenance-mode: false
 migration-limit: -1
 no-quorum-policy: stop
 node-action-limit: 0
 node-health-green: 0
 node-health-red: -INFINITY
 node-health-strategy: none
 node-health-yellow: 0
 notification-agent: /dev/null
 notification-recipient:
 pe-error-series-max: -1
 pe-input-series-max: 4000
 pe-warn-series-max: 5000
 placement-strategy: default
 remove-after-stop: false
 shutdown-escalation: 20min
 start-failure-is-fatal: false
 startup-fencing: true
 stonith-action: reboot
 stonith-enabled: false
 stonith-timeout: 60s
 stonith-watchdog-timeout: (null)
 stop-all-resources: false
 stop-orphan-actions: true
 stop-orphan-resources: true
 symmetric-cluster: true
Node Attributes:
 vdicnode01-priv: grace-active=1
 vdicnode02-priv: grace-active=1
[root@vdicnode01 ~]#


2017-01-17 11:00 GMT+01:00 emmanuel segura <emi2fast@gmail.com>:

show your cluster configuration.
<div class="HOEnZb"><div class="h5"><br>
2017-01-17 10:15 GMT+01:00 Oscar Segarra <<a href="mailto:oscar.segarra@gmail.com">oscar.segarra@gmail.com</a>>:<br>
> Hi,
>
> Yes, I will try to explain myself better.
>
> Initially
> On node1 (vdicnode01-priv)
>>virsh list
> ==============
> vdicdb01 started
>
> On node2 (vdicnode02-priv)
>>virsh list
> ==============
> vdicdb02 started
>
> --> Now, I execute the migrate command (outside the cluster <-- not using
> pcs resource move)
> virsh migrate --live vdicdb01 qemu:/// qemu+ssh://vdicnode02-priv
> tcp://vdicnode02-priv
>
> Finally
> On node1 (vdicnode01-priv)
>>virsh list
> ==============
> vdicdb01 started
>
> On node2 (vdicnode02-priv)
>>virsh list
> ==============
> vdicdb02 started
> vdicdb01 started
>
> If I query the cluster with pcs status, it thinks resource vm-vdicdb01 is
> only started on node vdicnode01-priv.
>
> Thanks a lot.
>
>
>
> 2017-01-17 10:03 GMT+01:00 emmanuel segura <emi2fast@gmail.com>:
>>
>> Sorry,
>>
>> But what do you mean when you say you migrated the VM outside of the
>> cluster? To a server outside of your cluster?
>>
>> 2017-01-17 9:27 GMT+01:00 Oscar Segarra <oscar.segarra@gmail.com>:
>> > Hi,
>> >
>> > I have configured a two-node cluster where we run 4 KVM guests.
>> >
>> > The hosts are:
>> > vdicnode01
>> > vdicnode02
>> >
>> > And I have created a dedicated network card for cluster management. I have
>> > created the required entries in /etc/hosts:
>> > vdicnode01-priv
>> > vdicnode02-priv
>> >
>> > The four guests have colocation rules in order to make them distribute
>> > proportionally between my two nodes.
>> >
>> > The problem I have is that if I migrate a guest outside the cluster, I mean
>> > using virsh migrate --live ..., the cluster, instead of moving the guest
>> > back to its original node (following the colocation sets), starts the guest
>> > again, and suddenly I have the same guest running on both nodes, causing
>> > XFS corruption in the guest.
>> >
>> > Is there any configuration applicable to avoid this unwanted behavior?
>> >
>> > Thanks a lot
>> >
>>
>>
>>
>> --
>> .~.
>> /V\
>> // \\
>> /( )\
>> ^`~'^
>>
>
>
>
>



--
.~.
/V\
// \\
/( )\
^`~'^

_______________________________________________
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org