[ClusterLabs] Re: Re: Re: Re: How to configure to make each slave resource have one VIP
范国腾
fanguoteng at highgo.com
Thu Mar 8 19:54:00 EST 2018
Thanks Rorthais, got it. The following commands make sure that the IP moves to the master if there is no standby alive:
pcs constraint colocation add pgsql-ip-stby1 with slave pgsql-ha 100
pcs constraint colocation add pgsql-ip-stby1 with pgsql-ha 50
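If I read the scores correctly, the slave-role colocation (100) outweighs the plain colocation with pgsql-ha (50), so the IP stays on a standby node whenever one is running and only falls back to the master when none is. A quick way to double-check what ended up configured (assuming a pcs version that provides this subcommand) is:
pcs constraint colocation show --full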
-----Original Message-----
From: Jehan-Guillaume de Rorthais [mailto:jgdr at dalibo.com]
Sent: March 8, 2018 17:41
To: 范国腾 <fanguoteng at highgo.com>
Cc: Cluster Labs - All topics related to open-source clustering welcomed <users at clusterlabs.org>
Subject: Re: [ClusterLabs] Re: Re: Re: How to configure to make each slave resource have one VIP
On Thu, 8 Mar 2018 01:45:43 +0000
范国腾 <fanguoteng at highgo.com> wrote:
> Sorry, Rorthais, I thought that the link and the attachment were the same document yesterday.
No problem.
For your information, I merged the draft into the official documentation yesterday.
> I just read the attachment and that is exactly what I asked originally.
Excellent! Glad it could help.
> I have two questions on the following two commands:
> # pcs constraint colocation add pgsql-ip-stby1 with slave pgsql-ha 10
> Q: Does the score 10 mean "move to the master if there is no standby alive"?
Kind of. It actually says nothing about moving to the master. It just says the slave IP should prefer to be located with a slave. If the slave nodes are down or in standby, the IP "can" move to the master, as nothing forbids it.
In fact, while writing this sentence, I realize there is nothing to push the slave IPs onto the master if the other nodes are up but the pgsql-ha slaves are stopped or banned. The configuration I provided is incomplete.
1. I added the missing constraints in the doc online
2. Note that I raised all the scores so they are higher than the stickiness
See:
https://clusterlabs.github.io/PAF/CentOS-7-admin-cookbook.html#adding-ips-on-slaves-nodes
Sorry for this :/
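For reference, the constraints discussed there have roughly this shape (a sketch only; the 100/50 scores are illustrative, the authoritative values are on the cookbook page linked above):
# prefer a node hosting a pgsql-ha slave
pcs constraint colocation add pgsql-ip-stby1 with slave pgsql-ha 100
# but still follow pgsql-ha (i.e. the master) when no slave is available
pcs constraint colocation add pgsql-ip-stby1 with pgsql-ha 50
# and never start the IP before PostgreSQL is up on the node
pcs constraint order start pgsql-ha then start pgsql-ip-stby1 kind=Mandatory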
> # pcs constraint order start pgsql-ha then start pgsql-ip-stby1 kind=Mandatory
> Q: I did not set this order and have not seen any issue so far. Should I add this constraint? What will happen if I omit it?
The IP address can start before PostgreSQL is up on the node. You will see client connections rejected with the error "PostgreSQL is not listening on host [...]".
> Here is what I did now:
> pcs resource create pgsql-slave-ip1 ocf:heartbeat:IPaddr2 ip=192.168.199.186 nic=enp3s0f0 cidr_netmask=24 op monitor interval=10s
> pcs resource create pgsql-slave-ip2 ocf:heartbeat:IPaddr2 ip=192.168.199.187 nic=enp3s0f0 cidr_netmask=24 op monitor interval=10s
> pcs constraint colocation add pgsql-slave-ip1 with pgsql-ha
It is missing the score and the role. Without a role specification, the IP can colocate with either the Master or a Slave, with no preference.
> pcs constraint colocation add pgsql-slave-ip2 with pgsql-ha
Same here: it is missing the score and the role.
> pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-master-ip setoptions score=-1000
The score seems too high to me, compared to the other ones.
You should probably remove all the colocation constraints and try with the one I pushed online.
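For instance, something along these lines could be a starting point (a sketch only: the scores are illustrative, just keep them higher than the stickiness, and the small negative score on the set is an assumption meant only to spread the IPs across nodes when possible):
pcs constraint colocation add pgsql-slave-ip1 with slave pgsql-ha 100
pcs constraint colocation add pgsql-slave-ip2 with slave pgsql-ha 100
pcs constraint colocation add pgsql-slave-ip1 with pgsql-ha 50
pcs constraint colocation add pgsql-slave-ip2 with pgsql-ha 50
pcs constraint order start pgsql-ha then start pgsql-slave-ip1 kind=Mandatory
pcs constraint order start pgsql-ha then start pgsql-slave-ip2 kind=Mandatory
pcs constraint colocation set pgsql-slave-ip1 pgsql-slave-ip2 pgsql-master-ip setoptions score=-10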
Regards,
> -----Original Message-----
> From: Jehan-Guillaume de Rorthais [mailto:jgdr at dalibo.com]
> Sent: March 7, 2018 16:29
> To: 范国腾 <fanguoteng at highgo.com>
> Cc: Cluster Labs - All topics related to open-source clustering welcomed <users at clusterlabs.org>
> Subject: Re: [ClusterLabs] Re: Re: Re: How to configure to make each slave resource have one VIP
>
> On Wed, 7 Mar 2018 01:27:16 +0000
> 范国腾 <fanguoteng at highgo.com> wrote:
>
> > Thank you, Rorthais,
> >
> > I read the link and it is very helpful.
>
> Did you read the draft I attached to the email? It was the main purpose of my
> answer: helping you with IPs on slave nodes. It seems to me your mail is
> reporting issues different from the original subject.
>
> > There are some issues that I have met when I installed the cluster.
>
> I suppose this is another subject and we should open a new thread with
> the appropriate subject.
>
> > 1. “pcs cluster stop” sometimes could not stop the cluster.
>
> You would have to give some more details about the context where "pcs
> cluster stop" timed out.
>
> > 2. When I upgrade PAF, I just replace the pgsqlms file. When I upgrade
> > PostgreSQL, I just replace /usr/local/pgsql/.
>
> I believe both actions are documented with best practices in the links
> I gave you.
>
> > 3. If the cluster does not stop normally, the pg_controldata status is not
> > "SHUTDOWN", so PAF will not start PostgreSQL any more. That is why I normally
> > change pgsqlms as below after installing PAF.
> > [...]
>
> This should be discussed to understand the exact context before
> considering your patch.
>
> At first glance, your patch seems quite dangerous as it bypasses the
> sanity checks.
>
> Please, could you start a new thread with a proper subject and add
> extensive information about this issue? You could open a new issue on the
> PAF repository as well: https://github.com/ClusterLabs/PAF/issues
>
> Regards,
--
Jehan-Guillaume de Rorthais
Dalibo