[ClusterLabs] multi-state constraints

Tomas Jelinek tojeline at redhat.com
Thu May 13 04:42:33 EDT 2021


On 11. 05. 2021 at 20:22, Alastair Basden wrote:
>>> A single location constraint may have multiple rules; I would assume pcs
>>> supports it. It is certainly supported by crmsh.
>>
>> Yes, it is supported by pcs. First, create a location rule constraint
>> with 'pcs constraint location ... rule'. Then you can add more rules to
>> it with the 'pcs constraint rule add' command.
> 
> So:
> pcs constraint location resourceClone rule role=master score=100 \#uname eq node1
> pcs constraint location resourceClone rule add role=master score=50 \#uname eq node2
> 
> Is that the same as:
> pcs constraint location resourceClone rule role=master score=100 \#uname eq node1
> pcs constraint location resourceClone rule role=master score=50 \#uname eq node2
> ?
> 

The first two commands create a single constraint with two rules. The
other two commands create two constraints with one rule each. So,
strictly speaking, it is not the same, even though it has the same
effect: "A location constraint may contain one or more top-level rules.
The cluster will act as if there is a separate location constraint for
each rule that evaluates as true." [1]
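
For illustration, the first pair of commands produces a single
rsc_location element with two nested rules in the CIB, roughly like the
fragment below (a sketch only; pcs generates the actual ids, so they
will differ, and you can inspect the real XML with 'pcs cluster cib'):

<rsc_location id="location-resourceClone" rsc="resourceClone">
  <rule id="location-resourceClone-rule" role="Master" score="100">
    <expression id="location-resourceClone-rule-expr" attribute="#uname" operation="eq" value="node1"/>
  </rule>
  <rule id="location-resourceClone-rule-1" role="Master" score="50">
    <expression id="location-resourceClone-rule-1-expr" attribute="#uname" operation="eq" value="node2"/>
  </rule>
</rsc_location>

The second pair of commands instead produces two separate rsc_location
elements, each containing a single rule.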

Also note that your 'rule add' command as written doesn't work; the syntax is:
pcs constraint rule add <constraint id> [id=<rule id>] [role=master|slave] [score=<score>|score-attribute=<attribute>] <expression>
So you first create a constraint, get its id from 'pcs constraint location',
and then add the second rule to the constraint using that id.
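
Put together, the whole sequence would look something like this (a
sketch only; 'location-resourceClone' stands in for the real constraint
id, which you should read from the 'pcs constraint location --full'
output):

# create the location constraint with the first rule
pcs constraint location resourceClone rule role=master score=100 \#uname eq node1
# list location constraints together with their ids
pcs constraint location --full
# add the second rule to the existing constraint, using its id
pcs constraint rule add location-resourceClone role=master score=50 \#uname eq node2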

Regards,
Tomas

[1] https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html-single/Pacemaker_Explained/index.html#_using_rules_to_determine_resource_location

> 
>>>>
>>>> On Tue, 11 May 2021, Andrei Borzenkov wrote:
>>>>
>>>>> On Tue, May 11, 2021 at 10:50 AM Alastair Basden
>>>>> <a.g.basden at durham.ac.uk> wrote:
>>>>>>
>>>>>> Hi Andrei, all,
>>>>>>
>>>>>> So, what I want to achieve is that if both nodes are up, node1
>>>>>> preferentially has drbd as master.  If that node fails, then node2
>>>>>> should become master.  If node1 then comes back online, it should
>>>>>> become master again.
>>>>>>
>>>>>> I also want to avoid node3 and node4 ever running drbd, since they
>>>>>> don't have the disks.
>>>>>>
>>>>>> For the link below about promotion scores, what is the pcs command to
>>>>>> achieve this?  I'm unfamiliar with where the xml goes...
>>>>>>
>>>>>
>>>>> I do not normally use PCS so am not familiar with its syntax. I assume
>>>>> there should be documentation that describes how to define location
>>>>> constraints with rules. Maybe someone who is familiar with it can
>>>>> provide an example.
>>>>>
>>>>>>
>>>>>>
>>>>>> I notice that drbd9 has an auto promotion feature, perhaps that would
>>>>>> help here, and so I can forget about configuring drbd in pacemaker?
>>>>>> Is that how it is supposed to work?  i.e. I can just concentrate on
>>>>>> the overlying file system.
>>>>>>
>>>>>> Sorry that I'm being a bit slow about all this.
>>>>>>
>>>>>> Thanks,
>>>>>> Alastair.
>>>>>>
>>>>>> On Tue, 11 May 2021, Andrei Borzenkov wrote:
>>>>>>
>>>>>>> On 10.05.2021 20:36, Alastair Basden wrote:
>>>>>>>> Hi Andrei,
>>>>>>>>
>>>>>>>> Thanks.  So, in summary, I need to:
>>>>>>>> pcs resource create resourcedrbd0 ocf:linbit:drbd drbd_resource=disk0 op monitor interval=60s
>>>>>>>> pcs resource master resourcedrbd0Clone resourcedrbd0 master-max=1 master-node-max=1 clone-max=2 clone-node-max=1 notify=true
>>>>>>>>
>>>>>>>> pcs constraint location resourcedrbd0Clone prefers node1=100
>>>>>>>> pcs constraint location resourcedrbd0Clone prefers node2=50
>>>>>>>> pcs constraint location resourcedrbd0Clone avoids node3
>>>>>>>> pcs constraint location resourcedrbd0Clone avoids node4
>>>>>>>>
>>>>>>>> Does this mean that it will prefer to run as master on node1, and
>>>>>>>> slave on node2?
>>>>>>>
>>>>>>> No. I already told you so.
>>>>>>>
>>>>>>>>    If not, how can I achieve that?
>>>>>>>>
>>>>>>>
>>>>>>> The DRBD resource agent sets master scores based on disk state. If you
>>>>>>> statically override this decision, you risk promoting a stale copy,
>>>>>>> which means data loss (I do not know if the agent allows it, hopefully
>>>>>>> not; but then it will keep attempting to promote the wrong copy and
>>>>>>> eventually fail). But if you insist, it is documented:
>>>>>>>
>>>>>>> https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Pacemaker_Explained/s-promotion-scores.html
>>>>>>>
>>>>>>> Also, statically biasing a single node means the workload will be
>>>>>>> relocated every time that node becomes available, which usually implies
>>>>>>> additional downtime. That is something normally avoided (which is why
>>>>>>> resource stickiness exists).