[Pacemaker] Odd colocation behaviour with master/slave resource

Chris Redekop chris at replicon.com
Tue Sep 6 11:14:52 EDT 2011

It seems to me like this would be a fairly common scenario, but the fact
that there are no replies makes me think I'm trying to set this whole thing
up incorrectly.  Am I not doing this the "normal" way? Or is it just not as
common as I would expect? Or is this whole thing just a stupid newbie
question? :)

On Fri, Aug 26, 2011 at 5:07 PM, Chris Redekop <chris at replicon.com> wrote:

> I'm attempting to set up a master/slave database cluster where the master
> is R/W and the slave is R/O.  The master failure scenario works fine (slave
> becomes master, master vip moves over)....however when the slave resource
> goes down I want the slave vip to move to the master and then move back when
> the slave comes back up...I can't seem to get this to work properly.  Here's
> my test config I'm playing with:
> primitive postgresql ocf:custom:pgsql \
>         op monitor interval="30" timeout="30" depth="0"
> primitive primaryip ocf:heartbeat:IPaddr2 \
>         params ip=""
> primitive slaveip ocf:heartbeat:IPaddr2 \
>         params ip=""
> ms ms_postgresql postgresql \
>         meta clone-max="2" clone-node-max="1" master-max="1" \
>         master-node-max="1" notify="true" target-role="Started"
> colocation postgres_on_primaryip inf: primaryip ms_postgresql:Master
> colocation slaveip_on_master 101: slaveip ms_postgresql:Master
> colocation slaveip_on_slave 1000: slaveip ms_postgresql:Slave
> property $id="cib-bootstrap-options" \
>         dc-version="1.0.11-1554a83db0d3c3e546cfd3aaff6af1184f79ee87" \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="2" \
>         stonith-enabled="false" \
>         no-quorum-policy="ignore" \
>         last-lrm-refresh="1314201732"
> rsc_defaults $id="rsc-options" \
>         resource-stickiness="100"
> With the configuration like this it looks pretty straightforward, but it
> actually results in the slaveip not being run on *either* node.  As far as
> I can figure out, when you have a colocation to a specific role it
> implicitly generates a -inf record for the other role.  So the :Master
> colocation generates a -inf for :Slave, and the :Slave colocation
> generates a -inf for :Master, and since slaveip has a colocation record
> for both roles, the scores get added together, resulting in a -inf score
> on both nodes.  If I weren't so new at this I would think that is a bug.
> If I key the ip off the node's up/down status instead (via something like
> 'colocation whatever -101: slaveip primaryip') then it works if I standby
> the slave node, but of course it doesn't work if the resource fails while
> the node stays up.  Can anyone shed some light on how to make this work
> properly?  Thanks!
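
If the score addition described above is what is happening, one way to sidestep it is to attach only a single role-based colocation to slaveip and express the fallback as a weak colocation with the master vip resource rather than with the Master role, so that two role-based constraints are never summed on the same resource.  This is an untested sketch (the constraint name slaveip_near_primaryip is made up for illustration), and whether it behaves correctly when the postgresql slave fails while its node stays up would need testing:

# Untested sketch: replaces the two role-based slaveip colocations.
# slaveip strongly prefers a node running a Slave instance...
colocation slaveip_on_slave 1000: slaveip ms_postgresql:Slave
# ...but falls back toward wherever primaryip (the master vip) runs
# when no Slave instance is available anywhere.
colocation slaveip_near_primaryip 100: slaveip primaryip

With resource-stickiness="100" already set, the relative sizes of these scores matter: the fallback score must be large enough to pull slaveip onto the master's node when the slave is down, and the role-based score must be large enough to pull it back once the slave recovers.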