[Pacemaker] Balancing of clone resources (globally-unique=true)

Andrew Beekhof andrew at beekhof.net
Mon Nov 15 07:37:52 UTC 2010


On Fri, Nov 12, 2010 at 7:41 AM, Chris Picton <chris at ecntelecoms.com> wrote:
> I have attached the output as requested

Normally it would get balanced, but it's being pushed to 01 because
there are so many resources on 02:

   sort_node_weight: slb-test-02.ecntelecoms.za.net (12) >
slb-test-01.ecntelecoms.za.net (2) : resources

So the cluster is trying to balance out the resources, just not at the
level you were expecting.
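If the goal is to keep the two instances on separate nodes whenever both
nodes are up, one option (a sketch only, reusing the resource names from
the original post) is to cap the number of instances per node with
clone-node-max. The trade-off is that if a node fails, its instance has
nowhere to go, so you lose that CLUSTERIP hash bucket until the node
returns:

```
# Sketch, not a tested config. Same primitive as in the original post;
# only clone-node-max changes from "2" to "1".
# clone-node-max="1" caps instances per node, so with clone-max="2"
# the two copies must land on different nodes while both are online.
clone clusterip-9-clone clusterip-9 \
        meta globally-unique="true" clone-max="2" \
        clone-node-max="1" resource-stickiness="0"
```

With clone-node-max="2" (as configured), placing both instances on one
node is a perfectly legal allocation, so the policy engine is free to do
it when the overall per-node resource counts favour that.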

> On Thu, 11 Nov 2010 11:21:51 +0100, Andrew Beekhof wrote:
>>>> what version is this?
>>>
>>>
>>> This is 1.0.9
>>
>> Odd.  I wouldn't have expected this behavior. Can you attach the
>> output
>> from cibadmin -Ql please?
>>
>>
>>>> On Tue, Nov 9, 2010 at 5:51 PM, Chris Picton
>>>> <chris-SPl7aIeqIQkIqCj4VrxJSQ at public.gmane.org> wrote:
>>>>> From a previous thread (crm_resource - migrating/halt a cloned
>>>>> resource)
>>>>>
>>>>> Andrew Beekhof wrote:
>>>>>> bottom line, you don't get to chose where specific clone instances
>>>>>> get placed.
>>>>>
>>>>> In my case, I have a clone:
>>>>> primitive clusterip-9 ocf:heartbeat:IPaddr2 \
>>>>>        params ip="192.168.0.9" cidr_netmask="24" \
>>>>>        clusterip_hash="sourceip" nic="bondE" \
>>>>>        op monitor interval="30s" \
>>>>>        meta resource-stickiness="0"
>>>>>
>>>>> clone clusterip-9-clone clusterip-9 \
>>>>>        meta globally-unique="true" clone-max="2" \
>>>>>        clone-node-max="2" resource-stickiness="0"
>>>>>
>>>>> When I start the clone, both instances start on the same node:
>>>>>
>>>>> Clone Set: clusterip-9-clone (unique)
>>>>>     clusterip-9:0      (ocf::heartbeat:IPaddr2):       Started
>>>>> slb-test-01.ecntelecoms.za.net
>>>>>     clusterip-9:1      (ocf::heartbeat:IPaddr2):       Started
>>>>> slb-test-01.ecntelecoms.za.net
>>>>>
>>>>> The second node has a colocated set of standalone IP addresses
>>>>> running, so I assume that pacemaker is pushing both clusterip
>>>>> clones to the second node to balance resources.
>>>>>
>>>>> My scores look like (0 for everything to do with this resource)
>>>>> clone_color: clusterip-9-clone allocation score on
>>>>> slb-test-01.ecntelecoms.za.net: 0
>>>>> clone_color: clusterip-9-clone allocation score on
>>>>> slb-test-02.ecntelecoms.za.net: 0
>>>>> clone_color: clusterip-9:0 allocation score on
>>>>> slb-test-01.ecntelecoms.za.net: 0
>>>>> clone_color: clusterip-9:0 allocation score on
>>>>> slb-test-02.ecntelecoms.za.net: 0
>>>>> clone_color: clusterip-9:1 allocation score on
>>>>> slb-test-01.ecntelecoms.za.net: 0
>>>>> clone_color: clusterip-9:1 allocation score on
>>>>> slb-test-02.ecntelecoms.za.net: 0
>>>>> native_color: clusterip-9:0 allocation score on
>>>>> slb-test-01.ecntelecoms.za.net: 0
>>>>> native_color: clusterip-9:0 allocation score on
>>>>> slb-test-02.ecntelecoms.za.net: 0
>>>>> native_color: clusterip-9:1 allocation score on
>>>>> slb-test-01.ecntelecoms.za.net: 0
>>>>> native_color: clusterip-9:1 allocation score on
>>>>> slb-test-02.ecntelecoms.za.net: 0
>>>>>
>>>>>
>>>>>
>>>>> Is there a way to ask pacemaker to split the clone instances
>>>>> across the available nodes where possible?
>>>>>
>>>>> Regards
>>>>>
>>>>> Chris
>>>>>
>>>>>
>
>
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
>
>



