<div dir="auto">Thank you Andrei. The problem is that I can see with 'pcs status' that resources are runnin on srv2cr1 but its at the same time its telling that the fence_vmware_soap is running on srv1cr1. That's somewhat confusing. Could you possibly explain this?<div dir="auto"><br></div><div dir="auto">Thank you!</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">sob., 16.03.2019, 05:37 użytkownik Andrei Borzenkov <<a href="mailto:arvidjaar@gmail.com">arvidjaar@gmail.com</a>> napisał:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">16.03.2019 1:16, Adam Budziński пишет:<br>
> Hi Tomas,<br>
> <br>
> Ok, but how then does pacemaker or the fence agent know which route to<br>
> take to reach the vCenter?<br>
<br>
They do not know or care at all. It is up to your underlying operating<br>
system and its routing tables.<br>
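<br>
For example, you can ask the kernel which route and outgoing interface it<br>
would pick for your vCenter address (the address below is a placeholder):<br>
<br>
  ip route get 192.0.2.10   # replace with your vCenter IP<br>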
<br>
> Btw. do I have to add the stonith resource on each of the nodes, or is<br>
> it enough to add it on just one, as for other resources?<br>
<br>
If your fencing agent can (and should) be able to run on any node, it<br>
should be enough to define it just once, as long as it can properly<br>
determine the "port" to use on the fencing "device" for a given node.<br>
There are cases where you may want to restrict the fencing agent to only<br>
a subset of nodes, or where you are forced to set a unique parameter for<br>
each node (consider an IPMI IP address); in those cases you need a<br>
separate instance of the agent for each node.<br>
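<br>
For illustration only (the vCenter address, credentials, VM names and IPMI<br>
addresses below are placeholders): with fence_vmware_soap a single instance<br>
is usually enough, since one vCenter can fence either VM, whereas with IPMI<br>
each node has its own BMC address and therefore its own agent instance:<br>
<br>
  # one fence_vmware_soap instance covering both nodes<br>
  pcs stonith create vmfence fence_vmware_soap \<br>
      ipaddr=vcenter.example.com login=clusteruser passwd=secret \<br>
      pcmk_host_map="srv1cr1:VM_SRV1;srv2cr1:VM_SRV2" ssl=1 ssl_insecure=1<br>
<br>
  # separate instances where each node needs a unique parameter (IPMI)<br>
  pcs stonith create fence-srv1 fence_ipmilan ipaddr=10.0.0.1 \<br>
      login=admin passwd=secret lanplus=1 pcmk_host_list=srv1cr1<br>
  pcs stonith create fence-srv2 fence_ipmilan ipaddr=10.0.0.2 \<br>
      login=admin passwd=secret lanplus=1 pcmk_host_list=srv2cr1<br>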
<br>
> Thank you!<br>
> <br>
> Fri., 15.03.2019, 15:48, Tomas Jelinek <<a href="mailto:tojeline@redhat.com" target="_blank" rel="noreferrer">tojeline@redhat.com</a>><br>
> wrote:<br>
> <br>
>> On 15. 03. 19 at 15:09, Adam Budziński wrote:<br>
>>> Hello Tomas,<br>
>>><br>
>>> Thank you! So far I have to say how great this community is; I would<br>
>>> never have expected such positive vibes! A big thank you, you're doing<br>
>>> a great job!<br>
>>><br>
>>> Now let's talk business :)<br>
>>><br>
>>> So if pcsd is using ring0 and it fails, will ring1 not be used at all?<br>
>><br>
>> Pcs and pcsd never use ring1, but they are just tools for managing<br>
>> clusters. You can have a perfectly functioning cluster without pcs and<br>
>> pcsd running or even installed; it would just be more complicated to<br>
>> set it up and manage it.<br>
>><br>
>> Even if ring0 fails, you will be able to use pcs (in a somewhat limited<br>
>> manner), as most of its commands don't go through the network anyway.<br>
>><br>
>> Corosync, which is the actual cluster messaging layer, will of course<br>
>> use ring1 in case of ring0 failure.<br>
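>><br>
>> If you want to check that, you can see the state of both rings on a node<br>
>> with corosync-cfgtool, for example:<br>
>><br>
>>   corosync-cfgtool -s<br>
>><br>
>> which prints the status of each configured ring (a failed ring is<br>
>> flagged as faulty there).<br>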
>><br>
>>><br>
>>> So with regard to VMware, that would mean the interface should be<br>
>>> configured on a network that can reach the vCenter for fencing, right?<br>
>>> But wouldn't it then use only ring0, so if that fails it wouldn't<br>
>>> switch to ring1?<br>
>><br>
>> If you are talking about pcmk_host_map, that does not really have<br>
>> anything to do with network interfaces of cluster nodes. It maps node<br>
>> names (parts before :) to "ports" of a fence device (parts after :).<br>
>> Pcs-0.9.x does not support defining custom node names, therefore node<br>
>> names are the same as ring0 addresses.<br>
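>><br>
>> For example (the values after the colons are placeholders; use the VM<br>
>> names or UUIDs exactly as vCenter reports them):<br>
>><br>
>>   pcmk_host_map="srv1cr1:SRV1_VM_NAME;srv2cr1:SRV2_VM_NAME"<br>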
>><br>
>> I am not an expert on fence agents / devices, but I'm sure someone else<br>
>> on this list will be able to help you with configuring fencing for your<br>
>> cluster.<br>
>><br>
>><br>
>> Tomas<br>
>><br>
>>><br>
>>> Thank you!<br>
>>><br>
>>> Fri., 15.03.2019, 13:14, Tomas Jelinek <<a href="mailto:tojeline@redhat.com" target="_blank" rel="noreferrer">tojeline@redhat.com</a><br>
>>> <mailto:<a href="mailto:tojeline@redhat.com" target="_blank" rel="noreferrer">tojeline@redhat.com</a>>> wrote:<br>
>>><br>
>>> On 15. 03. 19 at 12:32, Adam Budziński wrote:<br>
>>> > Hello Folks,<br>
>>> ><br>
>>> > Two-node active/passive VMware VM cluster.<br>
>>> ><br>
>>> > /etc/hosts<br>
>>> ><br>
>>> > 10.116.63.83 srv1<br>
>>> > 10.116.63.84 srv2<br>
>>> > 172.16.21.12 srv2cr1<br>
>>> > 172.16.22.12 srv2cr2<br>
>>> > 172.16.21.11 srv1cr1<br>
>>> > 172.16.22.11 srv1cr2<br>
>>> ><br>
>>> > I have 3 NICs on each VM:<br>
>>> ><br>
>>> > 10.116.63.83 (srv1) and 10.116.63.84 (srv2) are the addresses used to<br>
>>> > access the VMs via SSH, or to reach any resource directly rather than<br>
>>> > via a VIP.<br>
>>> ><br>
>>> > Everything with cr in its name is used for corosync communication, so<br>
>>> > basically I have two rings (these are two non-routable networks just<br>
>>> > for that).<br>
>>> ><br>
>>> > My questions are:<br>
>>> ><br>
>>> > 1. With ‘pcs cluster auth’, which interface / interfaces should I use?<br>
>>><br>
>>> Hi Adam,<br>
>>><br>
>>> I can see you are using pcs-0.9.x. In that case you should do:<br>
>>> pcs cluster auth srv1cr1 srv2cr1<br>
>>><br>
>>> In other words, use the first address of each node.<br>
>>> Authenticating all the other addresses should not cause any issues.<br>
>> It<br>
>>> is pointless, though, as pcs only communicates via ring0 addresses.<br>
>>><br>
>>> ><br>
>>> > 2. With ‘pcs cluster setup --name’ I would use the corosync interfaces,<br>
>>> > e.g. ‘pcs cluster setup --name MyCluster srv1cr1,srv1cr2 srv2cr1,srv2cr2’,<br>
>>> > right?<br>
>>><br>
>>> Yes, that is correct.<br>
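>>><br>
>>> That should give you a two-ring nodelist in /etc/corosync/corosync.conf,<br>
>>> roughly like this (exact contents may vary slightly between versions):<br>
>>><br>
>>>   nodelist {<br>
>>>       node {<br>
>>>           ring0_addr: srv1cr1<br>
>>>           ring1_addr: srv1cr2<br>
>>>           nodeid: 1<br>
>>>       }<br>
>>><br>
>>>       node {<br>
>>>           ring0_addr: srv2cr1<br>
>>>           ring1_addr: srv2cr2<br>
>>>           nodeid: 2<br>
>>>       }<br>
>>>   }<br>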
>>><br>
>>> ><br>
>>> > 3. With fence_vmware_soap, in pcmk_host_map="X:VM_C;X:VM:OTRS_D", which<br>
>>> > interface should replace X?<br>
>>><br>
>>> X should be replaced by node names as seen by pacemaker. Once you set up<br>
>>> and start your cluster, run 'pcs status' to get (amongst other info) the<br>
>>> node names. In your configuration, they should be srv1cr1 and srv2cr1.<br>
>>><br>
>>><br>
>>> Regards,<br>
>>> Tomas<br>
>>><br>
>>> ><br>
>>> > Thank you!<br>
>>> ><br>
>>> ><br>
<br>
_______________________________________________<br>
Users mailing list: <a href="mailto:Users@clusterlabs.org" target="_blank" rel="noreferrer">Users@clusterlabs.org</a><br>
<a href="https://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer noreferrer" target="_blank">https://lists.clusterlabs.org/mailman/listinfo/users</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer noreferrer" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
</blockquote></div>