[ClusterLabs] bit of wizardry bit of trickery needed.

Strahil Nikolov hunter86_bg at yahoo.com
Tue May 11 06:55:19 EDT 2021


Oh, wrong thread, just ignore.
Best Regards
 
 
On Tue, May 11, 2021 at 13:54, Strahil Nikolov <hunter86_bg at yahoo.com> wrote:
Here is the example I had promised:
pcs node attribute server1 city=LA
pcs node attribute server2 city=NY

# Don't run on any node that is not in LA
pcs constraint location DummyRes1 rule score=-INFINITY city ne LA

# Don't run on any node that is not in NY
pcs constraint location DummyRes2 rule score=-INFINITY city ne NY
The idea is that if you add a node and forget to set the 'city' attribute on it, DummyRes1 & DummyRes2 won't be started there.

Resources that do not have a city-based constraint will run anywhere, unless you also specify a colocation constraint between the resources (see the sketch below).
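For example, a minimal sketch reusing the resources above (untested):

# never run DummyRes2 on the same node as DummyRes1
pcs constraint colocation add DummyRes2 with DummyRes1 -INFINITY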
Best Regards,
Strahil Nikolov


 
On Tue, May 11, 2021 at 9:15, Klaus Wenninger <kwenning at redhat.com> wrote:
On 5/10/21 7:16 PM, lejeczek wrote:
>
>
> On 10/05/2021 17:04, Andrei Borzenkov wrote:
>> On 10.05.2021 16:48, lejeczek wrote:
>>> Hi guys
>>>
>>> Before I begin my adventure with this I thought I would ask experts if
>>> something like below is possible.
>>>
>>> if resourceA is started on nodeA, then nodes B & C start resourceB (or
>>> resourceC)
>>>
>> Configure colocation with negative score between resourceB and
>> resourceA, so resourceB will be started on different node.
>>
>>> whether to start resourceB or C on two nodes (might think of it as a
>>> master node + two slaves) depends on which node resourceA got started on.
>>>
>>> eg.
>>> if nodeA runs rMaster -> nodeB, nodeC run rToA
>>> if nodeB runs rMaster -> nodeA, nodeC run rToB
>>> if nodeC runs rMaster -> nodeA, nodeB run rToC
>>>
>>> any light-shedding or idea-sharing is much welcomed.
>>> many thanks, L.
> perhaps I did not do the best job of explaining, I'll try again
>
> if nodeA runs rMaster -> nodeB, nodeC run rToA (meaning..
> a) no r{ToA,ToB,ToC} is allowed on a node if "rMaster" runs on that
> node, in this case nodeA,
> b) if it's nodeA the cluster chose to run "rMaster" on, then only "rToA"
> is allowed to run on nodeB & nodeC
> c) a & b apply to nodeB and nodeC respectively
>
> I'm starting with "rMaster" and the other three resources as clones, but
> I fail to see how to make it work.
Not sure if there is an easy way to get that working directly.
An anti-colocation, as already suggested, is probably a good idea.
A resource that sets a node attribute to select which clone to start
could do the rest - with location constraints using that attribute,
roughly as in the sketch below.
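A minimal, untested sketch (ocf:pacemaker:attribute is a stock agent
that sets a node attribute while it is active; all other names here
are placeholders):

# set master-node=1 on whichever node runs rMaster
pcs resource create master-flag ocf:pacemaker:attribute \
    name=master-node active_value=1 inactive_value=0
pcs constraint colocation add master-flag with rMaster INFINITY
# keep rToA away from the node where the attribute is set
pcs constraint location rToA-clone rule score=-INFINITY master-node eq 1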

Without knowing more about your resources it is hard to
tell if there would be a more elegant way to solve your
problem.

If it is mainly about IP communication of the slaves with the
master (btw. we removed that wording from pacemaker as it is
considered offensive), you could have a floating IP address that
is moved with the master (or, more precisely, rather the other
way round) and your slaves would connect to that IP without
really having to know which node the master is on.
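For example (a rough sketch; the address and resource name are made up):

# floating IP that follows rMaster around
pcs resource create master-ip ocf:heartbeat:IPaddr2 \
    ip=192.0.2.10 cidr_netmask=24
pcs constraint colocation add master-ip with rMaster INFINITY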

If the logic is more complex and you already have a custom
resource agent anyway, it might be worth thinking about a
promotable clone that runs the master role in its promoted
state and the slave role in its demoted state, with the logic
moved into the resource agent.
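Something along these lines (sketch only; ocf:mycorp:myagent stands
in for your custom agent, which would have to implement the
promote/demote actions):

pcs resource create rStack ocf:mycorp:myagent promotable

Pacemaker then promotes one instance and runs the rest demoted,
and the agent decides what to do in each role.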

Klaus

_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
  
  

