[Pacemaker] Odd cluster constraint behaviour

Dejan Muhamedagic dejanmm at fastmail.fm
Tue Jul 7 12:01:48 EDT 2009


Hi,

On Tue, Jul 07, 2009 at 09:08:19AM +1000, Oliver Hookins wrote:
> On Mon Jul 06, 2009 at 11:11:14 +0200, Dejan Muhamedagic wrote:
> >Hi,
> >
> >On Mon, Jul 06, 2009 at 03:33:13PM +1000, Oliver Hookins wrote:
> >> I'm not sure if I've configured things correctly, as the last time I did
> >> this was on Heartbeat 2.0.7 or so. It's either a bug or (far more likely)
> >> I've stuffed something up. The setup:
> >>  * there are two resource groups, one with higher priority (master) and one with
> >>    lower priority (slave) - note that I'm not actually configuring them as master/slave resources
> >>  * two nodes in the cluster (A and B)
> >>  * they are constrained to run:
> >>   - only if a pingd instance is successfully running
> >>   - colocated with -INFINITY to each other (i.e. cannot run together)
> >>   - ordered so that the "master" resource group starts before the "slave" resource group
> >>   - preferring resource group "master" running on node A
> >>     (the order and pingd pieces are sketched below)
> >> 
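> >> For reference, the order and pingd pieces of that set look roughly
> >> like this (ids are illustrative, not copied verbatim from my CIB):
> >> 
> >>     <rsc_order id="master_then_slave" first="master" then="slave"/>
> >>     <rsc_location id="need_connectivity" rsc="master">
> >>       <rule id="need_connectivity-rule" score="-INFINITY" boolean-op="or">
> >>         <expression id="need_connectivity-e1" attribute="pingd" operation="not_defined"/>
> >>         <expression id="need_connectivity-e2" attribute="pingd" operation="lte" value="0"/>
> >>       </rule>
> >>     </rsc_location>
> >> 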
> >> With the cluster in its default running state of "master" on A, and "slave" on B,
> >> everything seems fine. When I "fail" node A, "slave" stops on node B but then no
> >> resources are started on node B. It's clearly either a conflict of constraints
> >> or the resource group priorities being ignored, but I can't pick which.
> >> 
> >> Here are (hopefully) the relevant snippets of CIB:
> >> 
> >>     <resources>
> >>       <group id="master">
> >>         <meta_attributes id="master-meta_attributes">
> >>           <nvpair id="master-meta_attributes-priority" name="priority" value="1000"/>
> >>         </meta_attributes>
> >>         <!-- group members snipped -->
> >>       </group>
> >> 
> >>       <group id="slave">
> >>         <meta_attributes id="slave-meta_attributes">
> >>           <nvpair id="slave-meta_attributes-priority" name="priority" value="0"/>
> >>         </meta_attributes>
> >>         <!-- group members snipped -->
> >>       </group>
> >> 
> >>       <clone id="pingdclone">
> >>         <meta_attributes id="pingdclone-meta_attributes">
> >>           <nvpair id="pingdclone-meta_attributes-globally-unique" name="globally-unique" value="false"/>
> >>         </meta_attributes>
> >>         <primitive class="ocf" id="pingd" provider="pacemaker" type="pingd">
> >>           <instance_attributes id="pingd-instance_attributes">
> >>             <nvpair id="pingd-instance_attributes-host_list" name="host_list" value="X.X.X.X"/>
> >>             <nvpair id="pingd-instance_attributes-multiplier" name="multiplier" value="100"/>
> >>           </instance_attributes>
> >>           <operations>
> >>             <op id="pingd-monitor-15s" interval="15s" name="monitor" timeout="5s"/>
> >>           </operations>
> >>         </primitive>
> >>       </clone>
> >>     </resources>
> >>     <constraints>
> >>       <rsc_location id="cli-prefer-master0" node="nodeA" rsc="master" score="1000"/>
> >>       <rsc_location id="cli-prefer-master1" node="nodeB" rsc="master" score="0"/>
> >>       <rsc_colocation id="separate_master_and_slave" rsc="master" score="-INFINITY" with-rsc="slave"/>
> >
> >This colocation should be the other way around, i.e. exchange rsc
> >and with-rsc attributes. If that doesn't help, please post the
> >logs.
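> >
> >I.e. something like:
> >
> >  <rsc_colocation id="separate_master_and_slave" rsc="slave"
> >      score="-INFINITY" with-rsc="master"/>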
> 
> Yes, that fixed the problem. Can you point me in the direction of the
> documentation that explains why this is the case? I've been reading Andrew's
> "Configuration Explained 1.0" document but didn't pick up on this.

That's where all the information should be. At the bottom of page
23 there's a table titled "Options":

rsc: The colocation source. If the constraint cannot be satisfied,
     the cluster may decide not to allow the resource to run at all.
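
The point is that the cluster first decides where the with-rsc
resource goes and only then tries to fit the rsc resource around
it. In your original constraint,

    <rsc_colocation id="separate_master_and_slave" rsc="master"
        score="-INFINITY" with-rsc="slave"/>

"master" was the colocation source, so it was "master" that the
cluster could decide not to run at all; the opposite of what your
priorities were meant to achieve.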

Thanks,

Dejan

> -- 
> Regards,
> Oliver Hookins
> Anchor Systems
