[Pacemaker] Odd colocation behaviour with master/slave resource

Chris Redekop chris at replicon.com
Sat Oct 15 14:43:36 EDT 2011


Andrew: It's not that the slave IP fails to move back...it's that the slave
IP simply doesn't run *anywhere*.  Even when both nodes are up and healthy,
the slave IP will not start on either node.  I'm fairly convinced this is a
bug....
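
One way to see what the policy engine is actually computing (a minimal
check; this assumes the ptest utility shipped with this Pacemaker 1.0
build, which crm_simulate replaces in later releases):

    # show the allocation scores derived from the live CIB; slaveip
    # sitting at -INFINITY on both nodes would confirm the above
    ptest -sL | grep slaveip

    # and to take stickiness out of the picture entirely, per Andrew's
    # question below (assumes the crm shell is installed):
    crm configure rsc_defaults resource-stickiness=0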


On Thu, Sep 29, 2011 at 1:21 AM, Andrew Beekhof <andrew at beekhof.net> wrote:

> On Sat, Aug 27, 2011 at 9:07 AM, Chris Redekop <chris at replicon.com> wrote:
> > I'm attempting to set up a master/slave database cluster where the master is
> > R/W and the slave is R/O.  The master failure scenario works fine (slave
> > becomes master, master vip moves over)...however when the slave resource
> > goes down I want the slave vip to move to the master and then move back when
> > the slave comes back up...I can't seem to get this to work properly.  Here's
> > my test config I'm playing with:
> > primitive postgresql ocf:custom:pgsql \
> >         op monitor interval="30" timeout="30" depth="0"
> > primitive primaryip ocf:heartbeat:IPaddr2 \
> >         params ip="10.0.100.102"
> > primitive slaveip ocf:heartbeat:IPaddr2 \
> >         params ip="10.0.100.103"
> > ms ms_postgresql postgresql \
> >         meta clone-max="2" clone-node-max="1" master-max="1" \
> >         master-node-max="1" notify="true" target-role="Started"
> > colocation postgres_on_primaryip inf: primaryip ms_postgresql:Master
> > colocation slaveip_on_master 101: slaveip ms_postgresql:Master
> > colocation slaveip_on_slave 1000: slaveip ms_postgresql:Slave
> > property $id="cib-bootstrap-options" \
> >         dc-version="1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87" \
> >         cluster-infrastructure="openais" \
> >         expected-quorum-votes="2" \
> >         stonith-enabled="false" \
> >         no-quorum-policy="ignore" \
> >         last-lrm-refresh="1314201732"
> > rsc_defaults $id="rsc-options" \
> >         resource-stickiness="100"
> > With this configuration it looks pretty straightforward, but it actually
> > results in slaveip not running on *either* node.  As far as I can figure
> > out, a colocation against a specific role implicitly generates a -inf
> > score for the other role.  So the :Master constraint generates a -inf for
> > :Slave, the :Slave constraint generates a -inf for :Master, and since
> > slaveip has a colocation record for both roles, the scores get added
> > together, leaving slaveip at -inf on both nodes...if I weren't so new at
> > this I would call that a bug.  If I instead key the ip off the node's
> > up/down status (e.g. via 'colocation whatever -101: slaveip primaryip')
> > then it works when I standby the slave node, but of course it doesn't
> > work if the resource fails while the node stays up.  Can anyone shed some
> > light on how to make this work properly?  Thanks!
>
> Are you sure it's not just the stickiness value preventing it from
> moving back to the slave when it returns?
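
Following up on the scoring theory quoted above: since -INFINITY plus any
finite score is still -INFINITY, the two role-specific colocations together
would indeed pin slaveip at -INFINITY on both nodes.  If that is what is
happening, one way to express the intent without any role-specific
colocation might be the following (a rough, untested sketch; the constraint
ids are arbitrary and the score is only illustrative):

    # tie slaveip to any node where a postgresql instance is running,
    # regardless of role
    colocation slaveip_with_pgsql inf: slaveip ms_postgresql
    # push slaveip away from the master vip with a finite score, so it
    # prefers the slave node but can still fall back to the master node
    # while the slave instance is down
    colocation slaveip_apart -200: slaveip primaryip

Note the -200 has to outweigh the resource-stickiness of 100, or slaveip
may stay put on the master node after the slave comes back up.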