[Pacemaker] colocation that doesn't

Andrew Beekhof andrew at beekhof.net
Sat Nov 20 04:05:48 EST 2010


On Mon, Nov 15, 2010 at 6:34 PM, Alan Jones <falancluster at gmail.com> wrote:
> primitive resX ocf:pacemaker:Dummy
> primitive resY ocf:pacemaker:Dummy
> location resX-nodeA resX -inf: nodeA.acme.com
> location resY-loc resY 1: nodeB.acme.com
> colocation resX-resY -2: resX resY
>
> Both resX and resY end up on nodeB.  I'm expecting resY to land on nodeA.

Then -2 obviously isn't big enough, is it?

Please read and understand:
   http://www.clusterlabs.org/doc/en-US/Pacemaker/1.0/html/Pacemaker_Explained/s-resource-colocation.html
for how colocation constraints actually work, instead of inventing your
own rules.

Specifically, concentrate on the descriptions of with-rsc and score.
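
For what it's worth, here is how the with-rsc direction plays out in the
config you quoted: in "colocation resX-resY -2: resX resY", resY is the
with-rsc, so it is placed first and its +1 preference puts it on nodeB;
resX is then scored -2 on nodeB versus -INFINITY on nodeA, so both end up
on nodeB. A sketch of one way to get the placement you expected, with resY
as the dependent resource (the constraint id is only an example):

   # resX is placed first; its -INFINITY rule pushes it to nodeB.
   # resY then scores nodeB at 1 + (-INFINITY) and nodeA at 0,
   # so it lands on nodeA.
   colocation resY-avoids-resX -inf: resY resX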

> Alan
>
> On Sun, Nov 14, 2010 at 11:18 PM, Andrew Beekhof <andrew at beekhof.net> wrote:
>> On Fri, Nov 5, 2010 at 4:07 AM, Vadym Chepkov <vchepkov at gmail.com> wrote:
>>>
>>> On Nov 4, 2010, at 12:53 PM, Alan Jones wrote:
>>>
>>>> If I understand you correctly, the role of the second resource in the
>>>> colocation command was defaulting to that of the first ("Master"), which
>>>> is not defined or is untested for non-ms resources.
>>>> Unfortunately, after changing that line to:
>>>>
>>>> colocation mystateful-ms-loc inf: mystateful-ms:Master myprim:Started
>>>>
>>>> ...it still doesn't work:
>>>>
>>>> myprim  (ocf::pacemaker:DummySlow):     Started node6.acme.com
>>>> Master/Slave Set: mystateful-ms
>>>>     Masters: [ node5.acme.com ]
>>>>     Slaves: [ node6.acme.com ]
>>>>
>>>> And after:
>>>> location myprim-loc myprim -inf: node5.acme.com
>>>>
>>>> myprim  (ocf::pacemaker:DummySlow):     Started node6.acme.com
>>>> Master/Slave Set: mystateful-ms
>>>>     Masters: [ node6.acme.com ]
>>>>     Slaves: [ node5.acme.com ]
>>>>
>>>> What I would like to do is enable logging for the code that calculates
>>>> the weights, etc.
>>>> It is obvious to me that the weights for mystateful-ms are calculated
>>>> differently depending on the weights used for myprim.
>>>> Can more verbose logging be enabled at runtime, or does it require a recompile?
>>>> My version is 1.0.9-89bd754939df5150de7cd76835f98fe90851b677 which is
>>>> different from Vadym's.
>>>> BTW: Is there another release planned for the stable branch?  1.0.9.1
>>>> is now 4 months old.
>>>> I understand that I could take the top of tree, but I would like to
>>>> believe that others are running the same version. ;)
>>>> Thank you!
>>>> Alan
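
As a side note on the logging question above: the computed scores can be
inspected without recompiling or raising the log level, using the same
ptest invocation Vadym shows further down in this thread; it dumps the
allocation and promotion scores from the live CIB (the grep pattern here
is only an example):

   ptest -sL | grep -e myprim -e mystateful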
>>>>
>>>> On Thu, Nov 4, 2010 at 8:22 AM, Dejan Muhamedagic <dejanmm at fastmail.fm> wrote:
>>>>> Hi,
>>>>>
>>>>> On Thu, Nov 04, 2010 at 06:51:59AM -0400, Vadym Chepkov wrote:
>>>>>> On Thu, Nov 4, 2010 at 5:37 AM, Dejan Muhamedagic <dejanmm at fastmail.fm> wrote:
>>>>>>
>>>>>>> This should be:
>>>>>>>
>>>>>>> colocation mystateful-ms-loc inf: mystateful-ms:Master myprim:Started
>>>>>>>
>>>>>>
>>>>>> Interesting, so in this case it is not necessary?
>>>>>>
>>>>>> colocation fs_on_drbd inf: WebFS WebDataClone:Master
>>>>>> (taken from Cluster_from_Scratch)
>>>>>>
>>>>>> but the other way around it is?
>>>>>
>>>>> Yes, the role of the second resource defaults to the role of the
>>>>> first. Ditto for order and actions. A bit confusing, I know.
>>>>>
>>>>> Thanks,
>>>>>
>>>>> Dejan
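
To make the defaulting rule concrete, here are both quoted constraints with
every role spelled out, so there is nothing left for the second resource to
inherit from the first:

   colocation fs_on_drbd inf: WebFS:Started WebDataClone:Master
   colocation mystateful-ms-loc inf: mystateful-ms:Master myprim:Started

Written the other way around without an explicit role, e.g.
"colocation fs_on_drbd inf: WebDataClone:Master WebFS", WebFS would
inherit the Master role, which makes no sense for a plain primitive.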
>>>>>
>>>
>>>
>>> I did it a bit differently this time and I observe the same anomaly.
>>>
>>> First I started a stateful clone:
>>>
>>> primitive s1 ocf:pacemaker:Stateful
>>> ms ms1 s1 meta master-max="1" master-node-max="1" clone-max="2" clone-node-max="1" notify="true"
>>>
>>> Then a primitive:
>>>
>>> primitive d1 ocf:pacemaker:Dummy
>>>
>>> Made sure the Master and the primitive were running on different hosts:
>>> location ld1 d1 10: xen-12
>>>
>>> and then I added the colocation constraint:
>>> colocation c1 inf: ms1:Master d1:Started
>>>
>>>  Master/Slave Set: ms1
>>>     Masters: [ xen-11 ]
>>>     Slaves: [ xen-12 ]
>>>  d1     (ocf::pacemaker:Dummy): Started xen-12
>>>
>>>
>>> It seems a colocation constraint is not enough to promote a clone. Looks like a bug.
>>>
>>> # ptest -sL|grep s1
>>> clone_color: ms1 allocation score on xen-11: 0
>>> clone_color: ms1 allocation score on xen-12: 0
>>> clone_color: s1:0 allocation score on xen-11: 11
>>> clone_color: s1:0 allocation score on xen-12: 0
>>> clone_color: s1:1 allocation score on xen-11: 0
>>> clone_color: s1:1 allocation score on xen-12: 6
>>> native_color: s1:0 allocation score on xen-11: 11
>>> native_color: s1:0 allocation score on xen-12: 0
>>> native_color: s1:1 allocation score on xen-11: -1000000
>>> native_color: s1:1 allocation score on xen-12: 6
>>> s1:0 promotion score on xen-11: 20
>>> s1:1 promotion score on xen-12: 20
>>>
>>> Vadym
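
As a possible (untested) workaround sketch while this is sorted out,
assuming the crm shell in this version accepts role-scoped location rules:
a $role=Master rule adds the missing promotion score directly. It
hard-codes the node, so it only illustrates where the extra master score
would have to come from rather than following d1 around:

   # Hypothetical constraint id; prefers promoting ms1 on xen-12.
   location ms1-master-on-xen-12 ms1 rule $role=Master 50: #uname eq xen-12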
>>
>> Could you attach the result of cibadmin -Ql when the cluster is in
>> this state, please?
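
For example (the filename is only a placeholder):

   cibadmin -Ql > cib-colocation-bug.xml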
>>
>> _______________________________________________
>> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
>> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>>
>> Project Home: http://www.clusterlabs.org
>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>> Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker
>>
>



