[ClusterLabs] multi-state constraints

Alastair Basden a.g.basden at durham.ac.uk
Tue May 11 04:16:08 EDT 2021

In fact, this link seems to be almost what I want to do:

The only missing parts are:
1. Avoid node3 and node4.

2. Preferentially run on node1 when it becomes active again.

What is the pcs command to achieve those?
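A minimal sketch covering both points (resource and node names are taken
from the thread below; the score of 100 is an assumed example, so verify
the syntax against your pcs version):

    # 1. Keep the clone off node3 and node4 (an "avoids" constraint
    #    assigns a -INFINITY location score):
    pcs constraint location resourcedrbd0Clone avoids node3
    pcs constraint location resourcedrbd0Clone avoids node4

    # 2. Prefer node1 with a finite score; the resource only migrates
    #    back when node1 rejoins if this score exceeds the
    #    resource-stickiness currently in effect:
    pcs constraint location resourcedrbd0Clone prefers node1=100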

On Tue, 11 May 2021, Andrei Borzenkov wrote:

> On 10.05.2021 20:36, Alastair Basden wrote:
>> Hi Andrei,
>> Thanks.  So, in summary, I need to:
>> pcs resource create resourcedrbd0 ocf:linbit:drbd drbd_resource=disk0 op
>> monitor interval=60s
>> pcs resource master resourcedrbd0Clone resourcedrbd0 master-max=1
>> master-node-max=1 clone-max=2 clone-node-max=1 notify=true
>> pcs constraint location resourcedrbd0Clone prefers node1=100
>> pcs constraint location resourcedrbd0Clone prefers node2=50
>> pcs constraint location resourcedrbd0Clone avoids node3
>> pcs constraint location resourcedrbd0Clone avoids node4
>> Does this mean that it will prefer to run as master on node1, and slave
>> on node2?
> No. I already told you so.
>>   If not, how can I achieve that?
> The DRBD resource agent sets master scores based on disk state. If you
> statically override this decision, you risk promoting a stale copy,
> which means data loss (I do not know whether the agent allows it,
> hopefully not; but then it will keep attempting to promote the wrong
> copy and eventually fail). If you insist, it is documented:
> https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Pacemaker_Explained/s-promotion-scores.html
> Also, statically biasing a single node means the workload will be
> relocated every time that node becomes available, which usually implies
> additional downtime. That is normally avoided (which is why resource
> stickiness exists).
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> ClusterLabs home: https://www.clusterlabs.org/
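To illustrate the stickiness interplay Andrei mentions (the value 200 is
an assumed example, not from the thread):

    # A running resource accumulates its stickiness as a score for the
    # node it currently occupies:
    pcs resource defaults resource-stickiness=200

With stickiness 200, a "prefers node1=100" constraint will not trigger
failback when node1 returns (100 < 200); set stickiness below the
preference score if automatic failback is actually wanted, accepting the
extra relocation downtime.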
