<div dir="ltr">Yes... you are right... but If I migrate vm manually by "virsh migrate" is expected the cluster to monitorize where guests are running on...<div><br></div><div>What happens If I stop pacemaker and corosync services in all nodes and I start them again... ¿will I have all guests running twice?</div><div><br></div><div>Thanks a lot</div></div><div class="gmail_extra"><br><div class="gmail_quote">2017-01-17 15:52 GMT+01:00 Ulrich Windl <span dir="ltr"><<a href="mailto:Ulrich.Windl@rz.uni-regensburg.de" target="_blank">Ulrich.Windl@rz.uni-regensburg.de</a>></span>:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">>>> Oscar Segarra <<a href="mailto:oscar.segarra@gmail.com">oscar.segarra@gmail.com</a>> schrieb am 17.01.2017 um 10:15 in<br>
Nachricht<br>
<<a href="mailto:CAJq8taG8VhX5J1xQpqMRQ-9omFNXKHQs54mBzz491_6df9akzA@mail.gmail.com">CAJq8taG8VhX5J1xQpqMRQ-<wbr>9omFNXKHQs54mBzz491_6df9akzA@<wbr>mail.gmail.com</a>>:<br>
<span class="">> Hi,<br>
><br>
> Yes, I will try to explain myself better.<br>
><br>
</span>> *Initially*<br>
<span class="">> On node1 (vdicnode01-priv)<br>
>>virsh list<br>
> ==============<br>
> vdicdb01 started<br>
><br>
> On node2 (vdicnode02-priv)<br>
>>virsh list<br>
> ==============<br>
> vdicdb02 started<br>
><br>
> --> Now, I execute the migrate command (outside the cluster <-- not using<br>
> pcs resource move)<br>
> virsh migrate --live vdicdb01 qemu:/// qemu+ssh://vdicnode02-priv<br>
> tcp://vdicnode02-priv<br>
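
(For reference, a virsh live migration over SSH is normally written roughly as
in the sketch below; the guest and host names are the ones from this thread,
while the "/system" connection URI is an assumption about the setup:)

    # live-migrate the running guest vdicdb01 to vdicnode02-priv over SSH,
    # pushing the migration traffic over the private address
    virsh migrate --live vdicdb01 qemu+ssh://vdicnode02-priv/system \
        tcp://vdicnode02-priv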

One of the rules of successful clustering is: if resources are managed by the cluster, they are managed by the cluster only! ;-)
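
(The cluster-managed way to do the same live migration would be roughly the
sketch below, assuming pcs and the vm-vdicdb01 resource mentioned later in
this thread; adjust names to your configuration:)

    # let Pacemaker drive the live migration; the VirtualDomain resource
    # needs the allow-migrate meta attribute for a true live migration
    pcs resource update vm-vdicdb01 meta allow-migrate=true
    pcs resource move vm-vdicdb01 vdicnode02-priv
    # afterwards, remove the location constraint that "move" created
    pcs resource clear vm-vdicdb01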

I guess one node is trying to restart the VM once it vanished, and the other node might try to shut down the VM while it's being migrated.
Or any other undesired combination...

>
> *Finally*
> On node1 (vdicnode01-priv)
>>virsh list
> ==============
> *vdicdb01 started*
>
> On node2 (vdicnode02-priv)
>>virsh list
> ==============
> vdicdb02 started
> vdicdb01 started
>
> If I query the cluster with "pcs status", it thinks resource vm-vdicdb01 is
> only started on node vdicnode01-priv.
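
(A quick way to compare the cluster's view with libvirt's view on each node is
sketched below; both commands are generic, nothing here is specific to this
setup:)

    # what Pacemaker believes about the VM resources
    crm_mon -1 | grep -i VirtualDomain
    # what libvirt actually has defined/running on this host
    virsh list --all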
>
> Thanks a lot.
>
> 2017-01-17 10:03 GMT+01:00 emmanuel segura <emi2fast@gmail.com>:
>
>> Sorry,
>>
>> but what do you mean when you say you migrated the VM outside of the
>> cluster? To a server outside of your cluster?
>>
>> 2017-01-17 9:27 GMT+01:00 Oscar Segarra <oscar.segarra@gmail.com>:
>> > Hi,
>> >
>> > I have configured a two-node cluster where 4 KVM guests run.
>> >
>> > The hosts are:
>> > vdicnode01
>> > vdicnode02
>> >
>> > And I have created a dedicated network card for cluster management. I
>> > have created the required entries in /etc/hosts:
>> > vdicnode01-priv
>> > vdicnode02-priv
>> >
>> > The four guests have colocation rules in order to distribute them
>> > proportionally between my two nodes.
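
(For context, such guests are typically defined as VirtualDomain resources
with anti-colocation constraints to keep them apart; a rough sketch follows,
in which the config path and the score are purely illustrative:)

    # one VirtualDomain resource per guest; the XML path is an example only
    pcs resource create vm-vdicdb01 ocf:heartbeat:VirtualDomain \
        config=/etc/libvirt/qemu/vdicdb01.xml \
        hypervisor=qemu:///system meta allow-migrate=true
    # a negative colocation score spreads two guests across the nodes
    pcs constraint colocation add vm-vdicdb02 with vm-vdicdb01 score=-1000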
>> >
>> > The problem I have is that if I migrate a guest outside the cluster, I
>> > mean using "virsh migrate --live ...", the cluster, instead of moving the
>> > guest back to its original node (following the colocation sets), starts
>> > the guest again, and suddenly I have the same guest running on both
>> > nodes, causing XFS corruption in the guest.
>> >
>> > Is there any configuration applicable to avoid this unwanted behavior?
>> >
>> > Thanks a lot
>> >
>>
>> --
>> .~.
>> /V\
>> // \\
>> /( )\
>> ^`~'^
>>
_______________________________________________
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org