[Pacemaker] Force a Master resource off a node if another resource fails

Eliot Gable egable at broadvox.net
Thu Apr 30 21:58:00 UTC 2009


Let's say I have these requirements:


- a master/slave resource called res-a
- a normal resource called res-b that runs on node1 only
- a normal resource called res-c that runs on node2 only
- res-b and res-c both do the same thing, but they are configured as separate resources so I can refer to each individually
- res-a must be started before res-b or res-c starts
- res-a can only be promoted to Master on node1 if res-b is running
- res-a can only be promoted to Master on node2 if res-c is running
- res-a prefers node1 to be Master
- if res-a is Master on node1 and res-b fails, res-a should move to node2 if res-c is running
- if res-a is Master on node2 and res-c fails, res-a should move to node1 if res-b is running

How would I set up the constraints?

This should make res-a prefer node1 over node2, but allow res-a to fail over to node2:

      <rsc_location id="res-a-location" rsc="res-a">
        <rule id="res-a-location-rule-1" score="500">
          <expression id="res-a-location-rule-1-exp" attribute="#uname" operation="eq" value="node1"/>
        </rule>
        <rule id="res-a-location-rule-2" score="0">
          <expression id="res-a-location-rule-2-exp" attribute="#uname" operation="eq" value="node2"/>
        </rule>
      </rsc_location>
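
As an aside, I believe the score="0" rule for node2 is effectively neutral (it just leaves node2 allowed with no added preference), so the node1 preference could also be written in the shorthand form, if I have the syntax right:

      <rsc_location id="res-a-prefers-node1-simple" rsc="res-a" node="node1" score="500"/>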

This should make res-b run only on node1:

      <rsc_location id="res-b-location" rsc="res-b">
        <rule id="res-b-location-rule-1" score="INFINITY">
          <expression id="res-b-location-rule-1-exp" attribute="#uname" operation="eq" value="node1"/>
        </rule>
        <rule id="res-b-location-rule-2" score="-INFINITY">
          <expression id="res-b-location-rule-2-exp" attribute="#uname" operation="eq" value="node2"/>
        </rule>
      </rsc_location>

This should make res-c run only on node2:

      <rsc_location id="res-c-location" rsc="res-c">
        <rule id="res-c-location-rule-1" score="-INFINITY">
          <expression id="res-c-location-rule-1-exp" attribute="#uname" operation="eq" value="node1"/>
        </rule>
        <rule id="res-c-location-rule-2" score="INFINITY">
          <expression id="res-c-location-rule-2-exp" attribute="#uname" operation="eq" value="node2"/>
        </rule>
      </rsc_location>

This should make res-a start before res-b starts:

      <rsc_order id="order-res-a-first-then-b">
        <resource_set id="res-a-set-1" sequential="true">
          <resource_ref id="res-a"/>
        </resource_set>
        <resource_set id="res-b-set" sequential="false">
          <resource_ref id="res-b"/>
        </resource_set>
      </rsc_order>

This should make res-a start before res-c starts:

      <rsc_order id="order-res-a-first-then-c">
        <resource_set id="res-a-set-2" sequential="true">
          <resource_ref id="res-a"/>
        </resource_set>
        <resource_set id="res-c-set" sequential="false">
          <resource_ref id="res-c"/>
        </resource_set>
      </rsc_order>
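
Since each set here contains only a single resource, my understanding is that these two constraints should be equivalent to the simple two-resource form (untested):

      <rsc_order id="order-res-a-then-b-simple" first="res-a" then="res-b"/>
      <rsc_order id="order-res-a-then-c-simple" first="res-a" then="res-c"/>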

This should make res-a only run as Master on node1 if res-b is started and make it prefer node1:

      <rsc_colocation id="res-a-prefers-node1" score="500">
        <resource_set id="res-a-prefers-node1-set-1" role="Master">
          <resource_ref id="res-a"/>
        </resource_set>
        <resource_set id="res-a-prefers-node1-set-2" role="Started">
          <resource_ref id="res-b"/>
        </resource_set>
      </rsc_colocation>

This should make res-a only run as Master on node2 if res-c is started, while giving node2 a lower preference than node1:

      <rsc_colocation id="res-a-not-prefer-node2" score="250">
        <resource_set id="res-a-not-prefer-node2-set-1" role="Master">
          <resource_ref id="res-a"/>
        </resource_set>
        <resource_set id="res-a-not-prefer-node2-set-2" role="Started">
          <resource_ref id="res-c"/>
        </resource_set>
      </rsc_colocation>
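
For clarity, the intent behind both colocations, written in the simple two-resource form (assuming I have the rsc/with-rsc direction right, i.e. that the Master of res-a is placed relative to where res-b or res-c is running), would be:

      <rsc_colocation id="master-prefers-res-b" score="500" rsc="res-a" rsc-role="Master" with-rsc="res-b"/>
      <rsc_colocation id="master-prefers-res-c" score="250" rsc="res-a" rsc-role="Master" with-rsc="res-c"/>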

I do see res-a starting before res-b and res-c, and I do see it being promoted to Master on node1. I also see res-b starting only on node1 and res-c only on node2.

However, based on these constraints, I would expect that if I forced res-b to fail while res-a is Master on node1, res-a would be promoted to Master on node2. Once that happens, I would expect res-b to restart on node1 and res-a to stay Master on node2 (because of stickiness).
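
For reference, by stickiness I mean a cluster-wide default along these lines (the id values and the value of 100 are placeholders for whatever is actually configured):

      <rsc_defaults>
        <meta_attributes id="rsc-defaults-options">
          <nvpair id="rsc-defaults-stickiness" name="resource-stickiness" value="100"/>
        </meta_attributes>
      </rsc_defaults>

Presumably that value would have to outweigh the 500-point node1 location preference for res-a to actually stay Master on node2 once res-b recovers.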

Instead, when I cause res-b to fail, the cluster simply restarts res-b and res-a stays Master on node1. Is there something I am missing in this logic? My guess is that the finite scores explain it: with res-b down, node1 still gets 500 from the location rule, while node2 only gains 250 from the res-c colocation, so node1 still wins the Master placement. Do I need another constraint to force res-a over to node2 if res-b fails, and a matching one to force it back to node1 if res-c fails on node2?
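
If the actual allocation scores would help, my understanding is that they can be dumped from the live cluster with the ptest utility that ships with Pacemaker, e.g.:

      ptest -Ls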

Thanks for any assistance you can provide.

Eliot Gable
Senior Engineer
1228 Euclid Ave, Suite 390
Cleveland, OH 44115

Direct: 216-373-4808
Fax: 216-373-4657
egable at broadvox.net
