[Pacemaker] stopping resource stops others in colocation / order sets

David Vossel dvossel at redhat.com
Fri Jun 15 16:02:32 UTC 2012


----- Original Message -----
> From: "David Vossel" <dvossel at redhat.com>
> To: "The Pacemaker cluster resource manager" <pacemaker at oss.clusterlabs.org>
> Sent: Friday, June 15, 2012 10:55:56 AM
> Subject: Re: [Pacemaker] stopping resource stops others in colocation / order sets
> 
> ----- Original Message -----
> > From: "Phil Frost" <phil at macprofessionals.com>
> > To: pacemaker at oss.clusterlabs.org
> > Sent: Thursday, June 14, 2012 1:19:05 PM
> > Subject: [Pacemaker] stopping resource stops others in colocation / order sets
> > 
> > I'm sure this is a typical novice question, but I've been dancing
> > around this problem for a day without any real progress, so I could
> > use some more experienced eyes. I'm setting up what must be a pretty
> > normal NFS / DRBD / LVM, two-node, active/passive cluster. Everything
> > works, mostly, but it doesn't quite behave as I'd expect; I'm not
> > sure why, and not sure how to find out.
> > 
> > Start with the somewhat contrived but working test configuration,
> > which represents a DRBD device and some services (maybe a FS mount
> > and Apache) that use it:
> > 
> > primitive drbd_nfsexports ocf:linbit:drbd \
> >          params drbd_resource="nfsexports" \
> >          op monitor interval="10s" role="Master" \
> >          op monitor interval="20s" role="Slave" \
> >          op start interval="0" timeout="240s" \
> >          op stop interval="0" timeout="100s"
> > 
> > primitive resB ocf:pacemaker:ping \
> >          params host_list="127.0.0.1" \
> >          op start interval="0" timeout="60s" \
> >          meta target-role="Started"
> > 
> > primitive resC ocf:pacemaker:ping \
> >          params host_list="127.0.0.1" \
> >          op start interval="0" timeout="60s" \
> >          meta target-role="Started"
> > 
> > ms drbd_nfsexports_ms drbd_nfsexports \
> >          meta master-max="1" master-node-max="1" clone-max="2" \
> >          clone-node-max="1" notify="true" target-role="Master"
> > 
> > colocation colo inf: drbd_nfsexports_ms:Master resB resC
> > 
> > order ord inf: drbd_nfsexports_ms:promote resB resC
> > 
> > property $id="cib-bootstrap-options" \
> >          no-quorum-policy="ignore" \
> >          dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
> >          cluster-infrastructure="openais" \
> >          expected-quorum-votes="2" \
> >          stonith-enabled="false" \
> >          last-lrm-refresh="1339694844"
> > 
> > If resC is stopped
> > 
> > resource stop resC
> > 
> > then drbd_nfsexports is demoted, and resB and resC will stop. Why is
> > that? I'd expect that resC, being listed last in both the colocation
> > and
> 
> It is the order constraint.
> 
> Order constraints are symmetrical by default. If you say to do these
> things in this order:
> 
> 1. promote drbd
> 2. start resB
> 3. start resC
> 
> then the opposite is also true. If you want to demote drbd, the
> following has to happen first:
> 
> 1. stop resC
> 2. stop resB
> 3. demote drbd
> 
> You can get around this by using the symmetrical option for your
> order constraints.

I was trying to paste this link and ctrl-v signaled my email client to send... That's a pretty unfortunate bug.

Anyway.

Check out the symmetrical option here.  You are going to want to set it to false if you want the reverse order of the constraint to be ignored.  It is set to true by default.

http://www.clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/s-resource-ordering.html
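
For example, your order constraint could be redefined with the option disabled. This is a sketch in crm shell syntax based on the config you posted; I haven't run it against your cluster:

    # delete the old constraint, then recreate it with symmetrical=false
    # so that stopping resC no longer forces resB to stop and drbd to demote
    configure delete ord
    configure order ord inf: drbd_nfsexports_ms:promote resB resC symmetrical=false

In the underlying CIB XML this corresponds to setting symmetrical="false" on the rsc_order element. Note that with symmetrical=false Pacemaker will no longer enforce any stop/demote ordering for those resources, so make sure that's actually safe for your filesystem-on-DRBD setup.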


> 
> > order constraints, could be stopped without affecting the other
> > resources. I've also found that if I construct a similar scenario
> > with another ping as resA instead of the DRBD resource, it behaves
> > as I'd expect.
> > 
> > Perhaps interestingly, if I delete the order constraint:
> > 
> > configure delete ord
> > 
> > then the DRBD resource stays demoted, but resB starts. Maybe this is
> > the issue to examine, because I can see how, with the order constraint
> > added, any demotion of the DRBD resource would lead to all the
> > services being stopped.
> > 
> > So, what if I delete the constraint "colo" and replace it with what
> > I understand to be equivalent (as I understand it, the order of the
> > resources is reversed when there are only two. Confusing, but
> > correct?):
> > 
> > configure delete colo
> > configure colocation colo1 inf: resC resB
> > configure colocation colo2 inf: resB drbd_nfsexports_ms:Master
> > 
> > Now just resC is stopped. Try making just resB stopped:
> > 
> > resource start resC
> > resource stop resB
> > 
> > now resC and resB are stopped, but I still have a DRBD master, as
> > I'd expect. Re-add the order constraint:
> > 
> > configure order ord inf: drbd_nfsexports_ms:promote resB resC
> > 
> > things remain unchanged.
> > 
> > What am I not understanding?
> > 
> > 
> > _______________________________________________
> > Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> > http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> > 
> > Project Home: http://www.clusterlabs.org
> > Getting started:
> > http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> > Bugs: http://bugs.clusterlabs.org
> > 
> 



