[Pacemaker] Colocation advice sought

Dominik Klein dk at in-telegence.net
Fri Mar 20 07:34:05 UTC 2009


Hi

Actually, I built a system just like that for presentation purposes (so
it just uses Dummy resources, but that doesn't matter) to replace a
system that currently uses keepalived.

We seem to want to achieve exactly the same thing. Here's how I did it:

# m1 = mysql 1
primitive m1 ocf:heartbeat:Dummy \
       op monitor interval="10" \
       meta migration-threshold="2"
# m2 = mysql 2
primitive m2 ocf:heartbeat:Dummy \
       op monitor interval="10" \
       meta migration-threshold="2"
# m1-ip = master ip
primitive m1-ip ocf:heartbeat:Dummy \
       op monitor interval="10" \
       meta migration-threshold="2"
# m2-ip = slave ip
primitive m2-ip ocf:heartbeat:Dummy \
       op monitor interval="10" \
       meta migration-threshold="2"
# pingd for connectivity monitoring
primitive pingd ocf:pacemaker:pingd \
       params host_list="10.2.50.11 10.2.50.8 10.2.50.40" \
       op monitor interval="10"
clone pingdclone pingd

# keep m1 always on xen-03, never on xen-04
location m1-on-xen-03 m1 \
       rule $id="m1-on-xen-03-rule" inf: #uname eq xen-03 \
       rule $id="m1-on-xen-03-rule-0" -inf: #uname eq xen-04
# keep m2 always on xen-04, never on xen-03
location m2-on-xen-04 m2 \
       rule $id="m2-on-xen-04-rule" inf: #uname eq xen-04 \
       rule $id="m2-on-xen-04-rule-0" -inf: #uname eq xen-03
# m1-ip on xen-03 by default, but may also run on xen-04
location m1-ip-on-xen-03 m1-ip \
       rule $id="m1-ip-on-xen-03-rule" 100: #uname eq xen-03 \
       rule $id="m1-ip-on-xen-03-rule-0" 50: #uname eq xen-04
# m2-ip on xen-04 by default, but may also run on xen-03
location m2-ip-on-xen-04 m2-ip \
       rule $id="m2-ip-on-xen-04-rule" 100: #uname eq xen-04 \
       rule $id="m2-ip-on-xen-04-rule-0" 50: #uname eq xen-03
# m2-ip on a node with network connection
location m2-ip-connected m2-ip \
       rule $id="m2-ip-connected-rule" -inf: not_defined pingd or pingd lte 0
# m1-ip on a node with network connection
location m1-ip-connected m1-ip \
       rule $id="m1-ip-connected-rule" -inf: not_defined pingd or pingd lte 0
# Colocate each ip with each mysql with the same preference
colocation m2-ip-with-m2 75: m2-ip m2
colocation m2-ip-with-m1 75: m2-ip m1
colocation m1-ip-with-m1 75: m1-ip m1
colocation m1-ip-with-m2 75: m1-ip m2
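
In a real setup the Dummy primitives above would of course be the actual
agents. Just as a sketch of what m1-ip might look like then (the address
and netmask are placeholders, not from my demo):

# hypothetical real-world version of m1-ip; ip/cidr_netmask are made up
primitive m1-ip ocf:heartbeat:IPaddr2 \
       params ip="10.2.50.100" cidr_netmask="24" \
       op monitor interval="10" \
       meta migration-threshold="2"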

So what happens is:

By default, m1 and m1-ip run on xen-03; m2 and m2-ip run on xen-04.
The scores for the IPs are:
m1-ip on xen-03: 175 (100 node preference + 75 colocation with m1)
m1-ip on xen-04: 125 (50 node preference + 75 colocation with m2)
m2-ip on xen-03: 125 (50 node preference + 75 colocation with m1)
m2-ip on xen-04: 175 (100 node preference + 75 colocation with m2)
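
You can verify these scores yourself with ptest, the same way Neil does
below; a quick sketch (the grep pattern is just illustrative):

# dump the policy engine's allocation scores and pick out the IP resources
ptest -LVVs 2>&1 | grep -E 'native_color: m[12]-ip'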

If m2 fails twice, it won't be restarted (migration-threshold=2 and no
other eligible node is left). m2-ip, however, then has a higher score on
xen-03 than on xen-04, because the colocated m2 is no longer running
while the equally colocated m1 still is, so m2-ip moves to xen-03.

If a node fails completely (goes offline or into standby), its IP moves
as well, for the same reason.
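
If you want to try those failure cases by hand, something like this
should do it with the crm shell (node and resource names from the config
above):

crm node standby xen-04    # simulate losing the node: m2 stops, m2-ip moves to xen-03
crm node online xen-04     # bring it back; the scores move the resources home again
crm resource cleanup m2    # clear m2's fail-count once it has hit migration-threshold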

hth
Dominik

Neil Katin wrote:
> 
> Hi there.  I'm looking for advice about the "proper" way to configure
> pacemaker to support our access to a mysql cluster.
> 
> We're planning on configuring two mysql servers with master-master
> replication.  We'd like one of them to be the "master", and the other
> to be available as the "slave" (with a pacemaker managed IP address
> following those roles).
> 
> If only one mysql instance is running, we would like that node to
> own both IP addresses.
> 
> I thought this would be fairly straightforward to configure, but
> there seems to be something about colocation I'm not understanding.
> 
> Here's the configuration I'm running:
> 
> primitive mysql0 ocf:heartbeat:mysql \
>         params datadir="/var/lib/mysql" mysql_config="/etc/my.cnf" \
>                binary="/usr/bin/mysqld_safe" test_user="test" \
>                test_table="information_schema.schemata" OCF_CHECK_LEVEL="10" \
>         op monitor interval="60s" timeout="30s" \
>         op start interval="0s" timeout="300s" \
>         op stop interval="0s" timeout="300s"
> primitive mysql1 ocf:heartbeat:mysql \
>         params datadir="/var/lib/mysql" mysql_config="/etc/my.cnf" \
>                binary="/usr/bin/mysqld_safe" test_user="test" \
>                test_table="information_schema.schemata" OCF_CHECK_LEVEL="10" \
>         op monitor interval="60s" timeout="30s" \
>         op start interval="0s" timeout="300s" \
>         op stop interval="0s" timeout="300s"
> primitive ip-master ocf:heartbeat:IPaddr2 \
>         params ip="192.168.1.210" nic="eth1" cidr_netmask="24" \
>         op monitor interval="60s"
> primitive ip-slave ocf:heartbeat:IPaddr2 \
>         params ip="192.168.1.211" nic="eth1" cidr_netmask="24" \
>         op monitor interval="60s"
> location mysql0-location mysql0 \
>         rule $id="mysql0-location-rule" -inf: #uname ne gv-neil0.bunchball.net
> location mysql1-location mysql1 \
>         rule $id="mysql1-location-rule" -inf: #uname ne gv-neil2.bunchball.net
> location ip-master-location ip-master \
>         rule $id="ip-master-location-rule" -inf: #uname ne gv-neil0.bunchball.net \
>                 and #uname ne gv-neil2.bunchball.net
> location ip-slave-location ip-slave \
>         rule $id="ip-slave-location-rule" -inf: #uname ne gv-neil0.bunchball.net \
>                 and #uname ne gv-neil2.bunchball.net
> colocation ip-master-colo-slave -1: ip-slave ip-master
> colocation ip-master-colo-mysql0 10: ip-master mysql0
> property $id="cib-bootstrap-options" \
>         dc-version="1.0.2-c02b459053bfa44d509a2a0e0247b291d93662b7" \
>         last-lrm-refresh="1237492227"
> 
> I have the database nodes "pinned" to a particular machine, and the
> ip addresses can run on either of those database machines.
> 
> It works fine when all nodes are up.
> 
> When I put one of the database machines in standby, however, the IP
> address doesn't move.  Here's the output of ptest -LVVs:
> 
> Allocation scores:
> ptest[3795]: 2009/03/19_16:48:25 WARN: unpack_resources: No STONITH resources have been defined
> native_color: mysql0 allocation score on gv-neil2.bunchball.net: -1000000
> native_color: mysql0 allocation score on gv-neil0.bunchball.net: 0
> native_color: mysql0 allocation score on gv-neil3.bunchball.net: -1000000
> ptest[3795]: 2009/03/19_16:48:25 WARN: native_color: Resource mysql0 cannot run anywhere
> native_color: mysql1 allocation score on gv-neil2.bunchball.net: 0
> native_color: mysql1 allocation score on gv-neil0.bunchball.net: -1000000
> native_color: mysql1 allocation score on gv-neil3.bunchball.net: -1000000
> native_color: ip-master allocation score on gv-neil2.bunchball.net: -10
> native_color: ip-master allocation score on gv-neil0.bunchball.net: -10
> native_color: ip-master allocation score on gv-neil3.bunchball.net: -1000000
> ptest[3795]: 2009/03/19_16:48:25 WARN: native_color: Resource ip-master cannot run anywhere
> native_color: ip-slave allocation score on gv-neil2.bunchball.net: 0
> native_color: ip-slave allocation score on gv-neil0.bunchball.net: 0
> native_color: ip-slave allocation score on gv-neil3.bunchball.net: -1000000
> 
> So, here's my confusion: I thought a colocation constraint with a score
> other than +/- infinity was "advisory": the scores would be blended
> together.
> 
> I added two extra location rules to add 1000 to the allocation scores
> for ip-master and ip-slave on the mysql machines; then things started
> working "as expected".
> 
> So, finally, here are my questions:
> 
> 1. Is the actual scheduling rule that a service will not be started if
>    it has a negative allocation score?
> 
> 2. The logic I would have preferred to use for positioning the ip
>    resources was "must be colocated with either mysql0 or mysql1".
>    However, there seems to be no way to express this in a colocation
>    rule.  Am I correct?  Is there a better way?
> 
> 3. Even more high level: is there a better model for how to move
>    service addresses around for this use case?
> 
> As always, thanks much for your time.
> 
>     Neil



