[ClusterLabs] DRBD on asymmetric-cluster

Ken Gaillot kgaillot at redhat.com
Thu Apr 7 11:11:57 EDT 2016


On 04/06/2016 09:29 PM, Jason Voorhees wrote:
> Hey guys:
> 
> I've been reading a little bit more about rules but there are certain
> things that are not so clear to me yet. First, I've created 3 normal
> resources and one master/slave resource (clusterdataClone). My
> resources and constraints look like this:
> 
> # pcs resource
>  MTA    (systemd:postfix):      Started nodo1
>  Web    (systemd:httpd):        Started nodo1
>  IPService      (ocf::heartbeat:IPaddr2):       Started nodo1
>  Master/Slave Set: clusterdataClone [clusterdata]
>      Masters: [ nodo1 ]
>      Slaves: [ nodo2 ]
> 
> # pcs constraint show --full
> Location Constraints:
>   Resource: IPService
>     Enabled on: nodo1 (score:10) (id:location-IPService-nodo1-10)
>     Enabled on: nodo2 (score:9) (id:location-IPService-nodo2-9)
>     Enabled on: nodo1 (score:INFINITY) (role: Started) (id:cli-prefer-IPService)

FYI, commands that "move" a resource do so by adding location
constraints. The ID of these constraints will start with "cli-". They
override the normal behavior of the cluster, and stay in effect until
you explicitly remove them. (With pcs, you can remove them with "pcs
resource clear".)
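
For example, to drop the "cli-prefer-IPService" constraint shown
above, something like this should do it (depending on your pcs
version):

# pcs resource clear IPService

or, removing the constraint by its ID:

# pcs constraint remove cli-prefer-IPService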

>   Resource: MTA
>     Enabled on: nodo1 (score:10) (id:location-MTA-nodo1-10)
>     Enabled on: nodo2 (score:9) (id:location-MTA-nodo2-9)
>   Resource: Web
>     Enabled on: nodo1 (score:10) (id:location-Web-nodo1-10)
>     Enabled on: nodo2 (score:9) (id:location-Web-nodo2-9)
>   Resource: clusterdataClone
>     Constraint: location-clusterdataClone
>       Rule: score=INFINITY boolean-op=or  (id:location-clusterdataClone-rule)
>         Expression: #uname eq nodo1  (id:location-clusterdataClone-rule-expr)
>         Expression: #uname eq nodo2  (id:location-clusterdataClone-rule-expr-1)
> Ordering Constraints:
> Colocation Constraints:
>   Web with IPService (score:INFINITY) (id:colocation-Web-IPService-INFINITY)
>   MTA with IPService (score:INFINITY) (id:colocation-MTA-IPService-INFINITY)
>   clusterdataClone with IPService (score:INFINITY) (rsc-role:Master)
> (with-rsc-role:Started)
> (id:colocation-clusterdataClone-IPService-INFINITY)

Note that colocation constraints only specify that the resources must
run together; they do not imply any order in which the resources are
started. If Web and/or MTA should be started after clusterdataClone,
configure explicit ordering constraints for that.
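
As a sketch, assuming the resource names above, an ordering constraint
that makes the web server wait for the DRBD master could look like:

# pcs constraint order promote clusterdataClone then start Web

and similarly for MTA if it needs the DRBD data to be available first.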

> These are the commands I run to create the master/slave resource and
> its constraints:
> 
> # pcs cluster cib myfile
> # pcs -f myfile resource create clusterdata ocf:linbit:drbd
> drbd_resource=clusterdb op monitor interval=30s role=Master op monitor
> interval=31s role=Slave
> # pcs -f myfile resource master clusterdataClone clusterdata
> master-max=1 master-node-max=1 clone-max=2 clone-node-max=1
> notify=true
> # pcs -f myfile constraint location clusterdataClone rule
> score=INFINITY \#uname eq nodo1 or \#uname eq nodo2

The above constraint as currently worded will have no effect. It says
that clusterdataClone must be located on either nodo1 or nodo2. Since
those are your only nodes, it doesn't really constrain anything.

If you want to prefer one node for the master role, add role=master to
the rule, drop the node you don't want to prefer from the expression,
and set the score to something less than INFINITY.
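
As a rough example (the score of 50 is arbitrary, pick whatever
preference you want):

# pcs constraint location clusterdataClone rule role=master score=50 \
    \#uname eq nodo1

That way nodo1 is preferred for the master role, but the cluster can
still promote nodo2 if nodo1 is unavailable.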

> # pcs -f myfile constraint colocation add master clusterdataClone with IPService
> # pcs cluster cib-push myfile
> 
> So now, my master/slave resource is started as Master in the same node
> where IPService is already active. So far so good. But the problem is
> that I can't move IPService from nodo1 to nodo2. When I run...
> 
> # pcs resource move IPService nodo2
> 
> nothing happens... IPService stays active on nodo1.
> 
> Then I tried to remove all my clusterdataClone constraints and repeat
> the same commands shown above (# pcs -f myfile ...), but this
> time without creating a colocation constraint between clusterdataClone
> and IPService.  When I do some tests again running...
> 
> # pcs resource move IPService nodo2
> 
> well, IPService is moved to nodo2, but clusterdataClone stays active
> as Master on nodo1. I thought it would be promoted to Master on nodo2
> and demoted to Slave on nodo1.
> 
> Do you know why my master/slave resource is not being "moved as
> master" between nodes?
> 
> How do I "move" the Master role from nodo1 to nodo2 for
> clusterdataClone? I want to make nodo2 Primary and nodo1 Secondary,
> but I have no idea how to do this manually (only for testing).
> 
> I hope someone can help :(
> 
> Thanks in advance
> 
> On Mon, Apr 4, 2016 at 4:50 PM, Jason Voorhees <jvoorhees1 at gmail.com> wrote:
>> I started reading "Pacemaker Explained" but as it's so in-depth I didn't
>> read that section regarding rules yet. I'll take a look at it and test
>> it before asking anything again.
>>
>> Thanks a lot Ken
>>
>> On Mon, Apr 4, 2016 at 9:26 AM, Ken Gaillot <kgaillot at redhat.com> wrote:
>>> On 04/02/2016 01:16 AM, Jason Voorhees wrote:
>>>> Hello guys:
>>>>
>>>> I've been recently reading "Pacemaker - Clusters from scratch" and
>>>> working on a CentOS 7 system with pacemaker 1.1.13, corosync-2.3.4 and
>>>> drbd84-utils-8.9.5.
>>>>
>>>> The PDF explains how to create a DRBD resource that seems to be
>>>> automatically started due to a symmetric-cluster setup.
>>>>
>>>> However I want to setup an asymmetric-cluster/opt-in
>>>> (symmetric-cluster=false) but I don't know how to configure a
>>>> constraint to prefer node1 over node2 to start my DRBD resource as
>>>> Master (Primary).
>>>
>>> I thought location constraints supported role, but that isn't
>>> documented, so I'm not sure. But it is documented with regard to rules,
>>> which using pcs might look like:
>>>
>>> pcs constraint location clusterdataClone rule \
>>>   role=master \
>>>   score=50 \
>>>   '#uname' eq nodo1
>>>
>>> For a lower-level explanation of rules, see
>>> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm140617356537136
>>>
>>>> So far these are my resources and constraints:
>>>>
>>>> [root at nodo1 ~]# pcs resource
>>>>  IPService      (ocf::heartbeat:IPaddr2):       Started nodo1
>>>>  Web    (systemd:httpd):        Started nodo1
>>>>  Master/Slave Set: clusterdataClone [clusterdata]
>>>>      Stopped: [ nodo1 nodo2 ]
>>>>
>>>> [root at nodo1 ~]# pcs constraint
>>>> Location Constraints:
>>>>   Resource: IPService
>>>>     Enabled on: nodo2 (score:50)
>>>>     Enabled on: nodo1 (score:100)
>>>>   Resource: Web
>>>>     Enabled on: nodo2 (score:50)
>>>>     Enabled on: nodo1 (score:100)
>>>> Ordering Constraints:
>>>>   start IPService then start Web (kind:Mandatory)
>>>> Colocation Constraints:
>>>>   Web with IPService (score:INFINITY)
>>>>
>>>> My current DRBD status:
>>>>
>>>> [root at nodo1 ~]# drbdadm role clusterdb
>>>> 0: Failure: (127) Device minor not allocated
>>>> additional info from kernel:
>>>> unknown minor
>>>> Command 'drbdsetup-84 role 0' terminated with exit code 10
>>>>
>>>>
>>>> [root at nodo2 ~]# drbdadm role clusterdb
>>>> 0: Failure: (127) Device minor not allocated
>>>> additional info from kernel:
>>>> unknown minor
>>>> Command 'drbdsetup-84 role 0' terminated with exit code 10
>>>>
>>>>
>>>> I know that it's possible to leave my cluster symmetric (opt-out) and
>>>> use constraints to keep a resource from running (or becoming master) on
>>>> certain nodes, but this time I would like to learn how to do it with
>>>> an opt-in scenario.
>>>>
>>>> Thanks in advance for your help.
>>>>
>>>> P.S.: nodo1 & nodo2 are Spanish names for node1 and node2



