[Pacemaker] Clone set members colocation

Yves Trudeau <y.trudeau@videotron.ca>
Wed Nov 9 16:29:25 UTC 2011


Hi Florian,
    the colocation rule was an attempt to separate the clone set 
members.  I tried with and without stickiness, with no luck.
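
For reference, I cleared the stickiness with something along these 
lines (reconstructed from memory, not a verbatim paste):

root@testvirtbox1:/usr/lib/ocf/resource.d/heartbeat# crm configure rsc_defaults resource-stickiness="0"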

Here is the new config:

root@testvirtbox1:/usr/lib/ocf/resource.d/heartbeat# crm configure show
node testvirtbox1 \
         attributes IP="10.2.2.160"
node testvirtbox2 \
         attributes IP="10.2.2.161" \
         attributes standby="off"
node testvirtbox3 \
         attributes IP="10.2.2.162" \
         attributes standby="off"
primitive reader_vip_1 ocf:heartbeat:IPaddr2 \
         params ip="10.2.2.171" nic="eth0"
clone reader_vips reader_vip_1 \
         meta globally-unique="true" clone-max="3" clone-node-max="3" notify="true" ordered="true" interleave="true"
location No-reader-vip-loc reader_vips \
         rule $id="No-reader-vip-rule" -inf: readerOK eq 0 \
         rule $id="Good-reader-vip-rule" 200: readerOK eq 1
property $id="cib-bootstrap-options" \
         dc-version="1.0.11-a15ead49e20f047e129882619ed075a65c1ebdfe" \
         cluster-infrastructure="openais" \
         expected-quorum-votes="3" \
         stonith-enabled="false" \
         no-quorum-policy="ignore"


The status:

root@testvirtbox1:/usr/lib/ocf/resource.d/heartbeat# crm status
============
Last updated: Wed Nov  9 11:16:48 2011
Stack: openais
Current DC: testvirtbox2 - partition with quorum
Version: 1.0.11-a15ead49e20f047e129882619ed075a65c1ebdfe
3 Nodes configured, 3 expected votes
1 Resources configured.
============

Online: [ testvirtbox1 testvirtbox3 testvirtbox2 ]

  Clone Set: reader_vips (unique)
      reader_vip_1:0     (ocf::heartbeat:IPaddr2):       Started testvirtbox1
      reader_vip_1:1     (ocf::heartbeat:IPaddr2):       Started testvirtbox1
      reader_vip_1:2     (ocf::heartbeat:IPaddr2):       Started testvirtbox1

And the attribute values:

root@testvirtbox1:/usr/lib/ocf/resource.d/heartbeat# crm_attribute -N testvirtbox1 -l reboot --name readerOK --query -q; \
crm_attribute -N testvirtbox2 -l reboot --name readerOK --query -q; \
crm_attribute -N testvirtbox3 -l reboot --name readerOK --query -q
1
1
1

Switching the readerOK attribute on testvirtbox1 to 0 causes this to happen:

root@testvirtbox1:/usr/lib/ocf/resource.d/heartbeat# crm_attribute -N testvirtbox1 -l reboot --name readerOK -v '0'
root@testvirtbox1:/usr/lib/ocf/resource.d/heartbeat# crm status
============
Last updated: Wed Nov  9 11:21:33 2011
Stack: openais
Current DC: testvirtbox2 - partition with quorum
Version: 1.0.11-a15ead49e20f047e129882619ed075a65c1ebdfe
3 Nodes configured, 3 expected votes
1 Resources configured.
============

Online: [ testvirtbox1 testvirtbox3 testvirtbox2 ]

  Clone Set: reader_vips (unique)
      reader_vip_1:0     (ocf::heartbeat:IPaddr2):       Started testvirtbox3
      reader_vip_1:1     (ocf::heartbeat:IPaddr2):       Started testvirtbox2
      reader_vip_1:2     (ocf::heartbeat:IPaddr2):       Started testvirtbox3


If there is no solution, I can always fall back to regular (non-cloned) 
IPaddr2 resources with negative colocation rules; that will work.
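
Roughly, that fallback would look like this (the two extra VIP 
addresses are placeholders, and the No-reader-vip-loc style location 
rules from above would be repeated for each primitive):

primitive reader_vip_1 ocf:heartbeat:IPaddr2 \
         params ip="10.2.2.171" nic="eth0"
primitive reader_vip_2 ocf:heartbeat:IPaddr2 \
         params ip="10.2.2.172" nic="eth0"
primitive reader_vip_3 ocf:heartbeat:IPaddr2 \
         params ip="10.2.2.173" nic="eth0"
colocation vip_1_not_with_2 -200: reader_vip_1 reader_vip_2
colocation vip_1_not_with_3 -200: reader_vip_1 reader_vip_3
colocation vip_2_not_with_3 -200: reader_vip_2 reader_vip_3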

Regards,

Yves


On 11-11-08 05:29 PM, Florian Haas wrote:
> On 2011-11-08 22:42, Yves Trudeau wrote:
>> Hi,
>>     I am currently working on a replication solution adding logic to the
>> mysql RA, and I need to be able to turn a virtual IP on/off based on a
>> node attribute (Florian's suggestion).  To achieve this, I created a
>> clone set of the IPaddr2 RA and a location rule using the node
>> attribute.  It works almost correctly, except that the clone members
>> stick a bit too much and don't fail back.  Here's my config:
>>
>> node testvirtbox1 \
>>          attributes IP="10.2.2.160"
>> node testvirtbox2 \
>>          attributes IP="10.2.2.161" \
>>          attributes standby="off"
>> node testvirtbox3 \
>>          attributes IP="10.2.2.162" \
>>          attributes standby="off"
>> primitive reader_vip_1 ocf:heartbeat:IPaddr2 \
>>          params ip="10.2.2.171" nic="eth0"
> You mean to manage an IP range, right? If so, then you're missing
> unique_clone_address="true" here.
>
> I also don't quite understand what you're trying to do with these "IP"
> attributes, but that doesn't look like it's relevant here.
>
>> clone reader_vips reader_vip_1 \
>>          meta globally-unique="true" clone-max="3" clone-node-max="3" notify="true" ordered="true" interleave="true"
>> location No-reader-vip-loc reader_vips \
>>          rule $id="No-reader-vip-rule" -inf: readerOK eq 0
> That location constraint would not trigger on nodes where the "readerOK"
> attribute is not defined at all. Is that intentional? If not, I'd put:
>
> location No-reader-vip-loc reader_vips \
>    rule $id="No-reader-vip-rule" -inf: \
>      not_defined readerOK or readerOK eq 0
>
>> colocation reader_vips_dislike_reader_vips -200: reader_vips reader_vips
> What is _that_ meant to achieve?
>
>> property $id="cib-bootstrap-options" \
>>          dc-version="1.0.11-a15ead49e20f047e129882619ed075a65c1ebdfe" \
>>          cluster-infrastructure="openais" \
>>          expected-quorum-votes="3" \
>>          stonith-enabled="false" \
>>          no-quorum-policy="ignore"
>> rsc_defaults $id="rsc-options" \
>>          resource-stickiness="100"
>>
>>
>>
>> - Upon startup, the reader_vips clone instances are well spread over all 3 nodes
>> - Running "crm_attribute -N testvirtbox1 -l reboot --name readerOK -s mysql_replication -v '0'" removes the clone set member from testvirtbox1 as expected and adds it to testvirtbox2 or testvirtbox3
>> - But... running "crm_attribute -N testvirtbox1 -l reboot --name readerOK -s mysql_replication -v '1'" does not make it come back; one of the other nodes still has 2 clone members
> That I'd consider expected. As I understand it, your resource stickiness
> keeps the clone instance where it is.
>
>> Is there a way to make it come back normally, or is there something I'm
>> not doing correctly?
> Try removing the resource stickiness score from rsc_defaults.
>
> Cheers,
> Florian
>
>
