[ClusterLabs] Antw: Re: VirtualDomain started in two hosts
Ulrich Windl
Ulrich.Windl at rz.uni-regensburg.de
Tue Jan 17 09:52:47 EST 2017
>>> Oscar Segarra <oscar.segarra at gmail.com> wrote on 17.01.2017 at 10:15 in
message
<CAJq8taG8VhX5J1xQpqMRQ-9omFNXKHQs54mBzz491_6df9akzA at mail.gmail.com>:
> Hi,
>
> Yes, I will try to explain myself better.
>
> *Initially*
> On node1 (vdicnode01-priv)
>>virsh list
> ==============
> vdicdb01 started
>
> On node2 (vdicnode02-priv)
>>virsh list
> ==============
> vdicdb02 started
>
> --> Now, I execute the migrate command (outside the cluster <-- not using
> pcs resource move)
> virsh migrate --live vdicdb01 qemu:/// qemu+ssh://vdicnode02-priv
> tcp://vdicnode02-priv
One of the rules of successful clustering is: if resources are managed by the cluster, they are managed by the cluster only! ;-)
I guess one node is trying to restart the VM once it vanished, and the other node might try to shut down the VM while it's being migrated.
Or any other undesired combination...
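For reference, the cluster-managed equivalent of the manual virsh migration would look roughly like the following sketch. It assumes the guest is the VirtualDomain resource vm-vdicdb01 (the name that appears later in the thread) and that vdicnode02-priv is the cluster node name; adjust to your setup.

```shell
# Allow the resource agent to live-migrate instead of stop/start
# (allow-migrate is a meta attribute honored by ocf:heartbeat:VirtualDomain).
pcs resource update vm-vdicdb01 meta allow-migrate=true

# Let Pacemaker perform the migration; the cluster records the state
# change itself, so no node tries to restart or stop the guest behind
# the cluster's back.
pcs resource move vm-vdicdb01 vdicnode02-priv

# "move" creates a location constraint; clear it afterwards so the
# colocation rules apply again.
pcs resource clear vm-vdicdb01
```

This is a sketch only; it needs a running Pacemaker cluster to execute.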
>
> *Finally*
> On node1 (vdicnode01-priv)
>>virsh list
> ==============
> *vdicdb01 started*
>
> On node2 (vdicnode02-priv)
>>virsh list
> ==============
> vdicdb02 started
> vdicdb01 started
>
> If I query the cluster with pcs status, the cluster thinks resource
> vm-vdicdb01 is only started on node vdicnode01-priv.
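When the cluster's view has gone stale like this, forcing a re-probe usually makes Pacemaker notice the second copy and recover it according to the multiple-active policy (the default, stop_start, stops all copies and starts exactly one). A hedged sketch, assuming pcs and the resource name vm-vdicdb01:

```shell
# Clear recorded state and trigger a fresh probe of the resource on all
# nodes; Pacemaker will then see vdicdb01 active on both nodes and
# recover per its multiple-active policy.
pcs resource cleanup vm-vdicdb01

# Inspect the result.
pcs status resources
```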
>
> Thanks a lot.
>
>
>
> 2017-01-17 10:03 GMT+01:00 emmanuel segura <emi2fast at gmail.com>:
>
>> sorry,
>>
>> Just to clarify: when you say you migrated the VM outside of the
>> cluster, do you mean to a server outside of your cluster?
>>
>> 2017-01-17 9:27 GMT+01:00 Oscar Segarra <oscar.segarra at gmail.com>:
>> > Hi,
>> >
>> > I have configured a two-node cluster where we run 4 KVM guests.
>> >
>> > The hosts are:
>> > vdicnode01
>> > vdicnode02
>> >
>> > And I have created a dedicated network interface for cluster management.
>> > I have created the required entries in /etc/hosts:
>> > vdicnode01-priv
>> > vdicnode02-priv
>> >
>> > The four guests have colocation rules in order to distribute them
>> > evenly between my two nodes.
>> >
>> > The problem I have is that if I migrate a guest outside the cluster, I
>> > mean using virsh migrate --live, then the cluster, instead of moving the
>> > guest back to its original node (following the colocation sets), starts
>> > the guest again, and suddenly I have the same guest running on both
>> > nodes, causing XFS corruption in the guest.
>> >
>> > Is there any configuration applicable to avoid this unwanted behavior?
>> >
>> > Thanks a lot
>> >
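As a general note on the "same guest on both nodes" scenario: the usual safety nets are migrating only through the cluster and having working fencing, so a node running a rogue copy can be fenced before shared storage is corrupted. A minimal stonith sketch, assuming fence_ipmilan; the IPMI addresses and credentials are placeholders, not taken from this thread:

```shell
# Hypothetical fencing devices, one per node; replace addresses and
# credentials with your real BMC settings.
pcs stonith create fence-vdicnode01 fence_ipmilan \
    pcmk_host_list=vdicnode01 ipaddr=10.0.0.1 login=admin passwd=secret
pcs stonith create fence-vdicnode02 fence_ipmilan \
    pcmk_host_list=vdicnode02 ipaddr=10.0.0.2 login=admin passwd=secret

# Make sure fencing is actually enabled cluster-wide.
pcs property set stonith-enabled=true
```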
>> > _______________________________________________
>> > Users mailing list: Users at clusterlabs.org
>> > http://lists.clusterlabs.org/mailman/listinfo/users
>> >
>> > Project Home: http://www.clusterlabs.org
>> > Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> > Bugs: http://bugs.clusterlabs.org
>> >
>>
>>
>>
>> --
>> .~.
>> /V\
>> // \\
>> /( )\
>> ^`~'^
>>