[ClusterLabs] How to setup a simple master/slave cluster in two nodes without stonith resource
Jehan-Guillaume de Rorthais
jgdr at dalibo.com
Tue Apr 3 08:41:56 EDT 2018
On Tue, 3 Apr 2018 02:07:50 +0000
范国腾 <fanguoteng at highgo.com> wrote:
> Hello,
>
> I want to setup a cluster in two nodes. One is master and the other is slave.
> I don’t need the fencing device because my internal network is stable.
How stable is it, exactly? This assumption is frequently wrong.
See: https://aphyr.com/posts/288-the-network-is-reliable
> I use the following command to create the resource, but all of the two nodes
> are slave and cluster don’t promote it to master. Could you please help check
> if there is anything wrong with my configuration?
I didn't dig too far into your setup, but previous answers already pointed out
that you started your cluster while the PostgreSQL instances were already up
and running...
During the very first cluster startup, there's no master score, and pgsqlms will
guess who should be the master by exploring each instance's status while they
are **stopped**, as stated in the "Quick start":
https://clusterlabs.github.io/PAF/Quick_Start-CentOS-7.html#postgresql-setup
«Make sure to setup your PostgreSQL master on your preferred node to host the
master: during the very first startup of the cluster, PAF detects the master
based on its shutdown status.»
[...]
> When I execute pcs resource cleanup in one node,
In your situation, where PAF was not able to find your master, "cleanup" is not
the way to go. If you want a master to come up, you will have to tell
Pacemaker yourself where it should be, using crm_master. E.g.:
crm_master -l forever -r pgsqld -v 1
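After setting it, you can verify the promotion score took effect. This assumes the resource is named pgsqld, as in your configuration:

```shell
# Query the master score for pgsqld on the local node
crm_master -G -r pgsqld
```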
> there is always one node
> print the following waring message in the /var/log/messages. But the other
> nodes’ log show no error. The resource log(pgsqlms) show the monitor action
> could return 0 but why the crmd log show failed?
Your log showed that the very first transition failed because it expected the
resource to be stopped by default, as stated elsewhere. Because you started
your cluster with your instances up as standbies, this first transition failed,
and another one is probably computed soon after with a new behavior... where
both nodes stay standby, as pgsqlms is no longer able to find the master
among your two standbies.
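You can confirm both nodes are indeed running as standbies by asking PostgreSQL directly on each node; pg_is_in_recovery() returns true on a standby. The connection user below is an assumption, adjust as needed:

```shell
# On each node: prints "t" on a standby, "f" on a primary
psql -U postgres -Atc "SELECT pg_is_in_recovery();"
```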