[Pacemaker] Multi-node resource dependency

"Tomcsányi, Domonkos" tomcsanyid at modit.hu
Mon Jul 22 08:47:50 UTC 2013


On 2013.07.22. at 4:57, Andrew Beekhof wrote:
> On 20/07/2013, at 7:18 AM, Lars Ellenberg <Lars.Ellenberg at linbit.com> wrote:
>
>> On Fri, Jul 19, 2013 at 04:49:21PM +0200, "Tomcsányi, Domonkos" wrote:
>>> Hello everyone,
>>>
>>> I have been struggling with this issue for quite some time so I
>>> decided to ask you to see if maybe you can shed some light on this
>>> problem.
>>> So here is the deal:
>>> I have a 4-node cluster in which the nodes belong together in pairs.
>>> In ASCII art it would look like this:
>>>
>>> -----------          -----------
>>> | NODE 1  |    --    | NODE 2  |
>>> -----------          -----------
>>>      |                    |
>>>      |                    |
>>>      |                    |
>>> -----------          -----------
>>> | NODE 3  |    --    | NODE 4  |
>>> -----------          -----------
>>>
>>> Now the behaviour I would like to achieve:
>>> If NODE 1 goes offline its services should get migrated to NODE 2
>>> AND NODE 3's services should get migrated to NODE 4.
>>> If NODE 3 goes offline its services should get migrated to NODE4 AND
>>> NODE1's services should get migrated to NODE 2.
>>> Of course the same should happen vice versa with NODE 2 and NODE 4.
>>>
>>> The services on NODE 1 and NODE 2 are naturally the same, but they
>>> differ from NODE 3's and NODE 4's services. So I added some 'location'
>>> directives to the config so that the services can only be started on
>>> the right nodes.
>>> I tried 'colocation', which is great, but not for this kind of
>>> behaviour: if I colocate the resource groups of NODE 1 and NODE 3,
>>> only one of them starts (of course, because colocation means the
>>> resource/resource group(s) should be running on the same NODE, so my
>>> location directives kick in and prevent, for example, NODE 3's
>>> services from starting on NODE 1).
>>>
>>> So my question is: is it possible to define the behaviour I
>>> described above in Pacemaker? If yes, how?
>> You may use node attributes in colocation constraints.
>>
>> So you would give your nodes attributes, first:
>>
>> crm node
>> 	attribute NODE1 set color pink
>> 	attribute NODE3 set color pink
>>
>> 	attribute NODE2 set color slime
>> 	attribute NODE4 set color slime
>>
>> crm configure
>> 	colocation c-by-color inf: rsc_a rsc_b rsc_c node-attribute=color
> I had totally forgotten about that feature :-)
> What a good idea!
>
>> The "implicit default" node-attribute is #uname ...
>> so using "color" the resources only need to run on nodes with the same
>> value for the node-attribute "color".
>>
>> 	Lars
>>
>> -- 
>> : Lars Ellenberg
>> : LINBIT | Your Way to High Availability
>> : DRBD/HA support and consulting http://www.linbit.com
>>
>> DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.
>>
Hi Lars,

Thank you for your answer, but I still don't really understand how it
relates to my problem. As far as I understand, this would give me an
easier way to attach my services to a certain group of nodes, but I have
already achieved that using 'location' directives like this:
location HTTPS_ONLY_ON_HTTPS_NODES HTTPS_SERVICE_GROUP 100: https1
location HTTPS_ONLY_ON_HTTPS_NODES2 HTTPS_SERVICE_GROUP 50: https2
location NEVER_RUN_HTTPS_ON_VPN_NODES HTTPS_SERVICE_GROUP -inf: vpn1
location NEVER_RUN_HTTPS_ON_VPN_NODES2 HTTPS_SERVICE_GROUP -inf: vpn2
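
To make sure we are talking about the same thing, this is how I would
translate your suggestion to my setup (just a rough sketch on my part:
'side' is an arbitrary attribute name, and VPN_SERVICE_GROUP stands in
for whatever my VPN resource group is actually called):

crm node
	attribute https1 set side left
	attribute vpn1 set side left
	attribute https2 set side right
	attribute vpn2 set side right

crm configure
	colocation c-same-side inf: HTTPS_SERVICE_GROUP VPN_SERVICE_GROUP node-attribute=side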

Certainly your solution is more sophisticated, but if I understand
everything correctly my main problem still remains; please correct me if
I'm wrong.

Anyway, thank you for the idea!

Domonkos



