[ClusterLabs] Antw: [EXT] Moving multi-state resources

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Wed May 12 18:23:39 EDT 2021


On 5/12/21 4:06 PM, Alastair Basden wrote:
> Hi Ulrich,
> 
> What would I need to change?
> pcs resource meta resourcedrbdClone resource-stickiness=0
> doesn't fix it.

I don't have a cluster at hand here, but maybe try "crm_simulate -LS", or see
https://clusterlabs.org/pacemaker/doc/en-US/Pacemaker/2.0/html/Pacemaker_Administration/_why_decisions_were_made.html
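For reference, a minimal sketch of how one might inspect the scores behind such a decision (standard pacemaker/pcs commands; the resource name is the one from this thread, so verify it against your own cluster):

```shell
# Show the allocation/promotion scores the scheduler computed for the
# live cluster state; compare resourcedrbdClone's scores on node1 vs node2
# to see whether stickiness or a constraint is pinning the master to node2
crm_simulate -sL

# List all constraints together with their IDs, which avoids reading
# /var/lib/pacemaker/cib/cib.xml by hand
pcs constraint --full
```

If the simulated scores show the running node winning despite the location rules, accumulated stickiness is the usual suspect.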

> 
> I already have:
> #> pcs property list --all | grep stick
>   default-resource-stickiness: (null)
> 
> #> pcs resource defaults
> No defaults set
> 
> Thanks,
> Alastair.
> 
> 
> 
> On Wed, 12 May 2021, Ulrich Windl wrote:
> 
>> [EXTERNAL EMAIL]
>>
>> My guess is default stickiness being > 0.
>>
>>>>> Alastair Basden <a.g.basden at durham.ac.uk> wrote on 12.05.2021 at
>>>>> 11:58
>> in
>> message <alpine.DEB.2.22.394.2105121045380.3533 at xps14>:
>>> Hi all,
>>>
>>> Some help required with master/slave moving please...
>>>
>>> I have set up a resource with:
>>> pcs resource create resourcedrbd ocf:linbit:drbd drbd_resource=disk1 op
>>> monitor interval=60s
>>> pcs resource master resourcedrbdClone resourcedrbd master-max=1
>>> master-node-max=1 clone-max=2 clone-node-max=1 notify=true
>>> pcs constraint location resourcedrbdClone prefers node1=100
>>> pcs constraint location resourcedrbdClone prefers node2=50
>>> pcs constraint location resourcedrbdClone avoids node3
>>> pcs constraint location resourcedrbdClone avoids node4
>>> pcs resource op add resourcedrbd monitor interval=61s role=Master
>>> pcs constraint location resourcedrbdClone rule role=master score=100 \
>>> #uname eq node1
>>> pcs constraint location resourcedrbdClone rule role=master score=50 \
>>> #uname eq node2
>>>
>>>
>>> If I put node1 into standby:
>>> pcs cluster standby node1
>>> It works, and moves the resource to node2.
>>>
>>> However, when I bring it back:
>>> pcs cluster unstandby node1
>>> the resource remains on node2.
>>>
>>> I want the resource to move back to node1.
>>>
>>> What have I missed?  I thought the location rule should have sorted this
>>> out.
>>>
>>> I have also tried:
>>> pcs constraint rule add location-resourcedrbdClone-1 role=master \
>>> score=100 #uname eq node1
>>> (getting the location-resourcedrbdClone-1 ID from
>>> /var/lib/pacemaker/cib/cib.xml - though I guess there might be a better
>>> way to get the ID through pcs)
>>>
>>> And this doesn't help.
>>>
>>> Thanks,
>>> Alastair.
>>>
>>> _______________________________________________
>>> Manage your subscription:
>>> https://lists.clusterlabs.org/mailman/listinfo/users
>>>
>>> ClusterLabs home: https://www.clusterlabs.org/


More information about the Users mailing list