[ClusterLabs] Re: Re: Re: How to configure to make each slave resource has one VIP
Ken Gaillot
kgaillot at redhat.com
Mon Mar 5 18:12:20 EST 2018
On Sat, 2018-02-24 at 03:02 +0000, 范国腾 wrote:
> Thank you, Ken,
>
> So I could use the following command:
>
>   pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000
Correct
(sorry for the late reply)
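
A quick way to sanity-check the result, as a sketch (the exact listing
subcommand varies between pcs versions; around this time it was "show",
newer pcs uses "config"):

  # list the colocation constraints, including the new set constraint
  pcs constraint colocation show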
>
> -----Original Message-----
> From: Users [mailto:users-bounces at clusterlabs.org] On Behalf Of Ken Gaillot
> Sent: February 23, 2018 23:14
> To: Cluster Labs - All topics related to open-source clustering
> welcomed <users at clusterlabs.org>
> Subject: Re: [ClusterLabs] Re: Re: How to configure to make each slave
> resource has one VIP
>
> On Fri, 2018-02-23 at 12:45 +0000, 范国腾 wrote:
> > Thank you very much, Tomas.
> > This resolves my problem.
> >
> > -----Original Message-----
> > From: Users [mailto:users-bounces at clusterlabs.org] On Behalf Of Tomas Jelinek
> > Sent: February 23, 2018 17:37
> > To: users at clusterlabs.org
> > Subject: Re: [ClusterLabs] Re: How to configure to make each slave
> > resource has one VIP
> >
> > On 23.2.2018 at 10:16, 范国腾 wrote:
> > > Tomas,
> > >
> > > Thank you very much. I made the change according to your suggestion
> > > and it works.
>
> One thing to keep in mind: a score of -INFINITY means the IPs will
> *never* run on the same node, even if one or more nodes go down. If
> that's what you want, of course, that's good. If you want the IPs to
> stay on different nodes normally, but be able to run on the same node
> in case of node outage, use a finite negative score.
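>
> As an illustrative sketch only (resource names taken from this thread,
> the -1000 score is just an example of a finite value), the two variants
> would look like:
>
>   # hard rule: the VIPs may never share a node, even during an outage
>   pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-INFINITY
>
>   # soft rule: prefer separate nodes, but allow sharing if nodes fail
>   pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-1000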
>
> > >
> > > There is a question: if there are too many nodes (e.g. 10 slave
> > > nodes in total), I need to run "pcs constraint colocation add
> > > pgsql-slave-ipx with pgsql-slave-ipy -INFINITY" many times. Is there
> > > a simpler command to do this?
> >
> > I think colocation set does the trick:
> >
> >   pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-slave-ip3 setoptions score=-INFINITY
> >
> > You may specify as many resources as you need in this command.
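> >
> > For a larger number of VIPs the resource list can also be generated by
> > the shell; a minimal bash sketch, assuming the hypothetical names
> > pgsql-slave-ip1 through pgsql-slave-ip10:
> >
> >   # brace expansion turns pgsql-slave-ip{1..10} into the ten resource
> >   # names before pcs runs, so this creates one single set constraint
> >   pcs constraint colocation set pgsql-slave-ip{1..10} setoptions score=-INFINITY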
> >
> > Tomas
> >
> > >
> > >  Master/Slave Set: pgsql-ha [pgsqld]
> > >      Masters: [ node1 ]
> > >      Slaves: [ node2 node3 ]
> > >  pgsql-master-ip     (ocf::heartbeat:IPaddr2):       Started node1
> > >  pgsql-slave-ip1     (ocf::heartbeat:IPaddr2):       Started node3
> > >  pgsql-slave-ip2     (ocf::heartbeat:IPaddr2):       Started node2
> > >
> > > Thanks
> > > Steven
> > >
> > > -----Original Message-----
> > > From: Users [mailto:users-bounces at clusterlabs.org] On Behalf Of Tomas Jelinek
> > > Sent: February 23, 2018 17:02
> > > To: users at clusterlabs.org
> > > Subject: Re: [ClusterLabs] How to configure to make each slave
> > > resource has one VIP
> > >
> > > On 23.2.2018 at 08:17, 范国腾 wrote:
> > > > Hi,
> > > >
> > > > Our system manages the database (one master and multiple slaves).
> > > > At first we used one VIP for all of the slave resources.
> > > >
> > > > Now I want to change the configuration so that each slave resource
> > > > has a separate VIP. For example, I have 3 slave nodes and my VIP
> > > > group has 2 VIPs; the 2 VIPs bind to node1 and node2 now; when node2
> > > > fails, its VIP should be able to move to node3.
> > > >
> > > >
> > > > I used the following commands to add the VIPs:
> > > >
> > > >   pcs resource group add pgsql-slave-group pgsql-slave-ip1 pgsql-slave-ip2
> > > >
> > > >   pcs constraint colocation add pgsql-slave-group with slave pgsql-ha INFINITY
> > > >
> > > > But now the two VIPs are on the same node:
> > > >
> > > > Master/Slave Set: pgsql-ha [pgsqld]
> > > >     Masters: [ node1 ]
> > > >     Slaves: [ node2 node3 ]
> > > > pgsql-master-ip     (ocf::heartbeat:IPaddr2):       Started node1
> > > > Resource Group: pgsql-slave-group
> > > >     pgsql-slave-ip1     (ocf::heartbeat:IPaddr2):       Started node2
> > > >     pgsql-slave-ip2     (ocf::heartbeat:IPaddr2):       Started node2
> > > >
> > > > Could anyone tell me how to configure this so that each slave node
> > > > has its own VIP?
> > >
> > > Resources in a group always run on the same node. You want the IP
> > > resources to run on different nodes, so you cannot put them into a
> > > group.
> > >
> > > This will take the resources out of the group:
> > >
> > >   pcs resource ungroup pgsql-slave-group
> > >
> > > Then you can set colocation constraints for them:
> > >
> > >   pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha
> > >   pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha
> > >
> > > You may also need to tell pacemaker not to put both IPs on the same
> > > node:
> > >
> > >   pcs constraint colocation add pgsql-slave-ip1 with pgsql-slave-ip2 -INFINITY
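> > >
> > > Putting it together for the two-VIP case in this thread, a sketch of
> > > the whole change plus a quick check of where the VIPs end up:
> > >
> > >   pcs resource ungroup pgsql-slave-group
> > >   pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha
> > >   pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha
> > >   # keep the two VIPs on different nodes
> > >   pcs constraint colocation add pgsql-slave-ip1 with pgsql-slave-ip2 -INFINITY
> > >   # confirm where each VIP is now running
> > >   pcs status resources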
> > >
> > >
> > > Regards,
> > > Tomas
> > >
> > > >
> > > > Thanks
> > > >
> > >
> >
>
> --
> Ken Gaillot <kgaillot at redhat.com>
--
Ken Gaillot <kgaillot at redhat.com>