[Pacemaker] Ordering constraints between two clone and stateful resources

Andreas Kurz andreas at hastexo.com
Wed Nov 2 20:02:26 UTC 2011


Hello,

On 11/02/2011 06:07 PM, King, Christopher wrote:
> Hi all,
> 
>  
> 
> We have a 2-node cluster running on the SLES 11 HAE SP1, plus the latest
> supported patches.  (So Pacemaker version 1.1.5, etc).  In the cluster,
> we have a clone resource and a stateful resource that have a
> mandatory ordering constraint between the two, such that the stateful
> resource will start after the clone.
> 
>  
> 
> The following is from the output of “crm configure show”.  We reproduced
> the problem with the stock “Dummy” and “Stateful” RAs.
> 
>  
> 
> primitive testStateful ocf:heartbeat:Stateful \
>         op start interval="0" timeout="1800s" \
>         op stop interval="0" timeout="45s" \
>         op monitor interval="10s"
> 
> primitive testDummy ocf:heartbeat:Dummy \
>         op monitor interval="20" timeout="10"
> 
> ms testStateful-ms testStateful \
>         meta target-role="Started" master-max="1" master-node-max="1" \
>         clone-max="2" clone-node-max="1" notify="true" ordered="false" \
>         globally-unique="false" is-managed="true"
> 
> clone testDummy-clone testDummy \
>         meta target-role="Started"
> 
> order testDummy-testStateful-order inf: testDummy-clone testStateful-ms:start
> 
>  
> 
> I then perform some experiments:
> 
> 1)      If the testDummy instance on the same node as the
> testStateful:slave instance fails, both the testStateful:slave and
> testStateful:master instances are stopped, and restarted when the
> testDummy instance is restarted.
> 
> 2)      If the testDummy instance on the same node as the
> testStateful:master instance fails, both the testStateful:slave and
> testStateful:master instances are stopped, and restarted when the
> testDummy instance is restarted.
> 
> The desired behaviour is for the instance of testStateful on the same
> node as the failed instance of the testDummy to be stopped, and
> restarted when the testDummy instance is restarted.  In other words, we
> want to be able to order an instance of one clone/stateful resource on
> the instance of another clone/stateful resource on the same node.  For our
> purposes, it is actually very harmful if the dependent resource depends
> on BOTH instances of the clone, as my experiments show that it does.
> 
>  
> 
> So questions:
> 
> 1)      Is it possible to express with a valid configuration a “nodal”
> ordering constraint between instances of different clone resources? 
> I.e., an ordering constraint in which the instance of the stateful
> resource depends on the instance of the clone on the same node, but not
> the instance of the clone on the other node.  I’ve read the on-line
> documentation which highly discourages referencing a clone’s child in an
> ordering constraint (and by “clone’s child”, I am assuming you mean the
> clone’s primitive, not a specific instance of the primitive) so I don’t
> want to do that.  But, if I manually edit the cluster configuration to
> reference the primitives in the ordering constraint between testDummy
> and testStateful, we seem to get the desired behaviour, but the cluster
> configuration is technically invalid.  (“crm configure verify” returns
> errors.)  So we don’t want to do that, but is there a valid way of
> configuring for the desired behaviour?
> 
> 2)      Again, looking at the online documentation, I found a reference
> to a meta attribute of clones called “interleave”.  The description is a
> little open to interpretation; does it refer to changing the behaviour
> of ordering constraints between the instances of a clone, or between two
> different cloned resources?  We think the former interpretation is most
> likely, but if I add an “interleave="true"” attribute to the
> testDummy-clone and testStateful-ms configurations above, I get the
> desired behaviour.  Is that what you would expect, or is this a
> side-effect that may not deterministically occur?

It refers to order constraints between different clone/multi-state
resources ... so yes, you want to add "interleave=true" to your
multi-state resource. Not a side-effect ... it's a feature ;-)
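
As a minimal sketch, the interleaved version of the configuration you
quoted would look roughly like this (the interleave meta attribute is the
only change; I've trimmed the other meta attributes for brevity):

```
ms testStateful-ms testStateful \
        meta master-max="1" master-node-max="1" clone-max="2" \
        clone-node-max="1" notify="true" interleave="true"

clone testDummy-clone testDummy \
        meta interleave="true"

order testDummy-testStateful-order inf: testDummy-clone testStateful-ms:start
```

With interleave="true" on the dependent multi-state resource, each
testStateful instance only waits for (and is only restarted with) the
testDummy-clone instance on its own node, rather than the whole clone set
-- which is exactly the per-node behaviour you observed in your experiment.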

Regards,
Andreas

-- 
Need help with Pacemaker?
http://www.hastexo.com/now


> 
>  
> 
> Thanks very much.
> 
> Chris
> 
>  
> 
> 
> 
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker




