[ClusterLabs] Master/Slave DRBD not active on asymmetric cluster
Bruyninckx Kristof
Kristof.Bruyninckx at cegeka.com
Wed Mar 15 12:07:36 CET 2017
Hello Klaus,
Yes, indeed the colocation was the culprit.
I've removed the constraint and replaced it with a colocation with the master role.
#pcs constraint colocation add master drbd-demo-resource-clone with ClusterIP INFINITY
And now it works like a charm: master and slave are started on the nodes that have permission.
Master/Slave Set: drbd-demo-resource-clone [drbd-demo-resource]
Masters: [ monnod02 ]
Slaves: [ monnod01 ]
# pcs constraint
Colocation Constraints:
db-data with drbd-demo-resource-clone (score:INFINITY) (with-rsc-role:Master)
pgsql_service with db-data (score:INFINITY)
drbd-demo-resource-clone with ClusterIP (score:INFINITY) (rsc-role:Master) (with-rsc-role:Started)
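For the record, this is roughly how I swapped the constraint (the constraint id below is only an example; pcs generates the real one, so look it up with --full first):

```shell
# Show constraints together with their generated ids
pcs constraint --full

# Drop the old colocation that tied the whole clone to ClusterIP
# (use the id that pcs actually prints; this one is illustrative)
pcs constraint remove colocation-drbd-demo-resource-clone-ClusterIP-INFINITY

# Tie only the Master role of the clone to ClusterIP
pcs constraint colocation add master drbd-demo-resource-clone with ClusterIP INFINITY
```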
Thanks for your answer !
Cheers,
Kristof Bruyninckx
System Engineer
-----Original Message-----
From: Klaus Wenninger [mailto:kwenning at redhat.com]
Sent: Wednesday, March 15, 2017 9:42
To: Cluster Labs - All topics related to open-source clustering welcomed <users at clusterlabs.org>
Subject: Re: [ClusterLabs] Master/Slave DRBD not active on asymmetric cluster
Hi!
I guess the colocation with ClusterIP is the culprit.
It leads to the clone not being started where ClusterIP is not running.
I guess what you'd rather want is a colocation with just the master role of the clone.
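In pcs syntax that would be something like this (using the resource names from your mail):

```shell
# Current (too strict): the whole clone, slave included, may only run
# where ClusterIP runs, so the slave never starts on the other node:
#   pcs constraint colocation add drbd-demo-resource-clone with ClusterIP INFINITY

# Intended: only the Master role follows ClusterIP;
# the slave is free to run on the second node:
pcs constraint colocation add master drbd-demo-resource-clone with ClusterIP INFINITY
```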
Regards,
Klaus
On 03/14/2017 03:44 PM, Bruyninckx Kristof wrote:
>
> Hello,
>
>
>
> Currently I've set up a 3-node asymmetric cluster, with the
> third node only being used as a tiebreaker.
>
>
>
> monnod01 & monnod02:
>
> * CentOS 7.3
> * pacemaker-1.1.15-11.el7_3.2.x86_64
> * corosync-2.4.0-4.el7.x86_64
> * drbd84-utils-8.9.8-1.el7.elrepo.x86_64
> * PostgreSQL 9.4
>
> monquor:
>
> * CentOS 7.3
> * pacemaker-1.1.15-11.el7_3.2.x86_64
> * corosync-2.4.0-4.el7.x86_64
> * no DRBD installed
>
>
>
> Now I've noticed that the master/slave DRBD resource only activates
> the master side, not the slave side as well, which would let DRBD
> actually sync between the two nodes. I've set up a plain two-node
> cluster, and there it works without any issue.
>
> But when I try to do the same with a third node, and
>
>
>
> pcs property set symmetric-cluster=false
>
>
>
> For some reason it keeps listing the third node as a Stopped resource in
> the master/slave set, and it never mentions a slave instance.
>
>
>
> pcs status
>
> Online: [ monnod01 monnod02 monquor ]
>
> Full list of resources:
>
> ClusterIP (ocf::heartbeat:IPaddr2): Started monnod01
>
> Master/Slave Set: drbd-demo-resource-clone [drbd-demo-resource]
>     Masters: [ monnod01 ]
>     Stopped: [ monquor ]
>
>
>
> The resource was created with the following:
>
>
>
> pcs -f drbd_cfg resource create drbd-demo-resource ocf:linbit:drbd \
>     drbd_resource=drbd-demo op monitor interval=10s
>
> pcs -f drbd_cfg resource master drbd-demo-resource-clone \
>     drbd-demo-resource master-max=1 master-node-max=1 clone-max=2 \
>     clone-node-max=1 notify=true
>
>
>
> Even though I've used location constraints on the master/slave
> resource, allowing it to run only on the two DRBD nodes.
>
>
>
> [root at monnod01 ~]# pcs constraint
>
> Location Constraints:
>   Resource: drbd-demo-resource-clone
>     Enabled on: monnod01 (score:INFINITY)
>     Enabled on: monnod02 (score:INFINITY)
>
>
>
> The actual failover itself works: it activates the DRBD disk,
> mounts it, and starts the db service that accesses the files on this
> DRBD disk.
>
> But since the slave DRBD instance is never started, the actual
> DRBD sync between the disks never happens.
>
> What am I missing to make the master/slave resource ignore
> the third node and start both the master and the slave instance?
>
> Does DRBD need to be installed on the third node as well?
>
>
>
> I've put the complete output of the commands in the attachment of
> this mail.
>
>
>
> Best regards
>
> Kristof Bruyninckx
> System Engineer
>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org