[ClusterLabs] Re: Re: How to configure to make each slave resource have one VIP

Andrei Borzenkov arvidjaar at gmail.com
Sun Feb 25 00:13:07 EST 2018


On 25.02.2018 at 05:24, 范国腾 wrote:
> Hello,
> 
> If all of the slave nodes crash, none of the slave VIPs will work.
> 
> Is there any way to make all of the slave VIPs bind to the master node when there are no slave nodes left in the system?
> 
> That way, the user client will not notice that the system has a problem.
> 

If users do not care whether they connect to the master or a slave, I'd
say setting up a single cluster IP would be much easier.
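
For instance, a single floating IP could be created along these lines (a
sketch; the resource name, address, netmask and monitor interval are
placeholder values, not taken from this thread):

      # hypothetical resource name and placeholder address/netmask, adjust for your network
      pcs resource create cluster-ip ocf:heartbeat:IPaddr2 \
          ip=192.168.122.200 cidr_netmask=24 op monitor interval=10s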

Otherwise, using advisory placement (a score not equal to (-)INFINITY)
should allow Pacemaker to place the resources together if there is no other way.
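
For example, an advisory anti-colocation keeps the two VIPs on different
nodes while enough nodes are up, yet still lets them share the last
surviving node (a sketch; the score -1000 is an arbitrary finite value,
not taken from this thread):

      # any finite negative score is advisory, unlike -INFINITY which is mandatory
      pcs constraint colocation add pgsql-slave-ip1 with pgsql-slave-ip2 -1000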


> Thanks
> 
> -----Original Message-----
> From: Users [mailto:users-bounces at clusterlabs.org] on behalf of Tomas Jelinek
> Sent: 23 February 2018 17:37
> To: users at clusterlabs.org
> Subject: Re: [ClusterLabs] Re: How to configure to make each slave resource have one VIP
> 
> On 23.2.2018 at 10:16, 范国腾 wrote:
>> Tomas,
>>
>> Thank you very much. I made the change according to your suggestion and it works.
>>
>> One question: if there are many nodes (e.g. 10 slave nodes in total), I need to run "pcs constraint colocation add pgsql-slave-ipx with pgsql-slave-ipy -INFINITY" many times. Is there a simpler command to do this?
> 
> I think a colocation set does the trick:
> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-INFINITY
> You may specify as many resources as you need in this command.
> 
> Tomas
> 
>>
>> Master/Slave Set: pgsql-ha [pgsqld]
>>       Masters: [ node1 ]
>>       Slaves: [ node2 node3 ]
>>   pgsql-master-ip        (ocf::heartbeat:IPaddr2):       Started node1
>>   pgsql-slave-ip1        (ocf::heartbeat:IPaddr2):       Started node3
>>   pgsql-slave-ip2        (ocf::heartbeat:IPaddr2):       Started node2
>>
>> Thanks
>> Steven
>>
>> -----Original Message-----
>> From: Users [mailto:users-bounces at clusterlabs.org] on behalf of Tomas Jelinek
>> Sent: 23 February 2018 17:02
>> To: users at clusterlabs.org
>> Subject: Re: [ClusterLabs] How to configure to make each slave resource have one VIP
>>
>> On 23.2.2018 at 08:17, 范国腾 wrote:
>>> Hi,
>>>
>>> Our system manages a database (one master and multiple slaves). We
>>> originally used one VIP for all of the slave resources.
>>>
>>> Now I want to change the configuration so that each slave resource has
>>> a separate VIP. For example, I have 3 slave nodes and my VIP group has
>>> 2 VIPs; the 2 VIPs bind to node1 and node2 now; when node2 fails, the
>>> VIP should move to node3.
>>>
>>>
>>> I use the following commands to add the VIPs:
>>>
>>>       pcs resource group add pgsql-slave-group pgsql-slave-ip1 pgsql-slave-ip2
>>>
>>>       pcs constraint colocation add pgsql-slave-group with slave pgsql-ha INFINITY
>>>
>>> But now the two VIPs are on the same node:
>>>
>>> Master/Slave Set: pgsql-ha [pgsqld]
>>>      Masters: [ node1 ]
>>>      Slaves: [ node2 node3 ]
>>> pgsql-master-ip        (ocf::heartbeat:IPaddr2):       Started node1
>>> Resource Group: pgsql-slave-group
>>>      pgsql-slave-ip1    (ocf::heartbeat:IPaddr2):       Started node2
>>>      pgsql-slave-ip2    (ocf::heartbeat:IPaddr2):       Started node2
>>>
>>> Could anyone tell me how to configure this so that each slave node has a VIP?
>>
>> Resources in a group always run on the same node. You want the IP resources to run on different nodes, so you cannot put them into a group.
>>
>> This will take the resources out of the group:
>> pcs resource ungroup pgsql-slave-group
>>
>> Then you can set colocation constraints for them:
>> pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha
>> pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha
>>
>> You may also need to tell Pacemaker not to put both IPs on the same node:
>> pcs constraint colocation add pgsql-slave-ip1 with pgsql-slave-ip2 -INFINITY
>>
>>
>> Regards,
>> Tomas
>>
>>>
>>> Thanks
>>>



