[Pacemaker] Re: Understanding rules: location, colocation, order. Using with Master/Slave

Adrian Chapela achapela.rexistros at gmail.com
Tue Oct 28 16:22:03 UTC 2008


Serge Dubrouski wrote:
> On Tue, Oct 28, 2008 at 6:29 AM, Adrian Chapela
> <achapela.rexistros at gmail.com> wrote:
>   
>> Serge Dubrouski wrote:
>>     
>>> On Mon, Oct 27, 2008 at 12:43 PM, Adrian Chapela
>>> <achapela.rexistros at gmail.com> wrote:
>>>
>>>> Serge Dubrouski wrote:
>>>>
>>>>> Something like this:
>>>>>
>>>>>     <rsc_order id="drbd0_before_myGroup" first="ms-drbd0"
>>>>> then="myGroup" then-action="start" first-action="promote"/>
>>>>>     <rsc_colocation id="myGroup_on_drbd0" rsc="myGroup"
>>>>> with-rsc="ms-drbd0" with-rsc-role="Master" score="INFINITY"/>
>>>>>     <rsc_location id="primNode" rsc="myGroup">
>>>>>       <rule id="prefered_primNode" score="1000">
>>>>>         <expression attribute="#uname" id="expression.id2242728"
>>>>> operation="eq" value="fc-node1"/>
>>>>>       </rule>
>>>>>     </rsc_location>
>>>>>
>>>>> See the cib.xml that I sent you a couple of days ago. The first rule
>>>>> promotes DRBD before starting the group, the second colocates the master
>>>>> and the group, and the third places the group and the master on the
>>>>> desired node.
>>>>>
>>>>>
>>>> Yes, I based my rules on yours, but I can't update my config with your
>>>> rules directly.
>>>> <rsc_order id="drbd0_before_myGroup" first="ms-drbd0" then="mail_Group"
>>>> then-action="start" first-action="promote"/>
>>>>
>>>> In my version, the rule would have to be something like:
>>>> <rsc_order id="drbd0_before_myGroup" from="mail_Group" action="start"
>>>> to="ms-drbd0" to_action="promote"/>
>>>>
>>>> What version of heartbeat are you running? My 2.99.1 heartbeat didn't
>>>> understand the rule.
>>>>
>>> It's processed by pacemaker, not heartbeat. That rule worked all right
>>> under 0.6, 0.7, 1.0:
>>>
>>> Refresh in 3s...
>>>
>>> ============
>>> Last updated: Mon Oct 27 14:52:47 2008
>>> Current DC: fc-node2 (ad6f19b7-228a-48b7-bae0-f95a838bde2a)
>>> 2 Nodes configured.
>>> 3 Resources configured.
>>> ============
>>>
>>> Node: fc-node1 (b88f98c6-50f2-463a-a6eb-51abbec645a9): online
>>> Node: fc-node2 (ad6f19b7-228a-48b7-bae0-f95a838bde2a): online
>>>
>>> Full list of resources:
>>>
>>> Clone Set: DoFencing
>>>    child_DoFencing:0   (stonith:external/xen0):        Started fc-node1
>>>    child_DoFencing:1   (stonith:external/xen0):        Started fc-node2
>>> Master/Slave Set: ms-drbd0
>>>    drbd0:0     (ocf::heartbeat:drbd):  Master fc-node1
>>>    drbd0:1     (ocf::heartbeat:drbd):  Started fc-node2
>>> Resource Group: myGroup
>>>    myIP        (ocf::heartbeat:IPaddr):        Started fc-node1
>>>    fs0 (ocf::heartbeat:Filesystem):    Started fc-node1
>>>    myPgsql     (ocf::heartbeat:pgsql): Started fc-node1
>>>
>>> [root at fc-node1 crm]# rpm -qa | grep pacemaker
>>> libpacemaker3-1.0.0-2.1
>>> pacemaker-1.0.0-2.1
>>> [root at fc-node1 crm]#
>>>
>>>
>>> What error do you get?
>>>
>> I can update the configuration now. I have Heartbeat 2.99.1 + Pacemaker 1.0.
>> But now I can't get the Master onto the node I want. Have you tried
>> fc-node2 as the master node?
>>
>> How are you using pingd? As a clone instance, or configured in ha.cf?
>>     
>
> pingd is broken in 1.0 :-( Andrew fixed it in the latest dev release.
>   

OK, but Andrew told me this morning that the bug was fixed in the latest
stable code. I have downloaded Pacemaker-1-0-79d2ba7e502f, but pingd seems
broken there as well: if I delete the pingd rule the group starts, but if I
leave the rule in place the group does not start.
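
In case it helps, the pingd clone I am testing looks roughly like this (a
sketch only: the ids, host IP, and multiplier are placeholders, not my exact
config):

<clone id="pingd_clone">
  <primitive id="pingd" class="ocf" provider="pacemaker" type="pingd">
    <instance_attributes id="pingd_attrs">
      <!-- placeholder values; the attribute name defaults to "pingd",
           which is what the -INFINITY location rules test -->
      <nvpair id="pingd_hosts" name="host_list" value="192.168.0.1"/>
      <nvpair id="pingd_mult" name="multiplier" value="100"/>
    </instance_attributes>
  </primitive>
</clone>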

Another thing is the master selection: it is wrong in my case, even using
your config, Serge. Could you try changing the master node from fc-node1
to fc-node2?
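
That is, taking the rsc_location rule from your config and changing only the
node name, something like this sketch:

<rsc_location id="primNode" rsc="myGroup">
  <rule id="prefered_primNode" score="1000">
    <!-- same rule as in your mail, only the value is changed -->
    <expression attribute="#uname" id="expression.id2242728"
operation="eq" value="fc-node2"/>
  </rule>
</rsc_location>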

Thank you!

>>>>> On Mon, Oct 27, 2008 at 12:24 PM, Adrian Chapela
>>>>> <achapela.rexistros at gmail.com> wrote:
>>>>>
>>>>>> Serge Dubrouski wrote:
>>>>>>
>>>>>>> On Mon, Oct 27, 2008 at 12:07 PM, Adrian Chapela
>>>>>>> <achapela.rexistros at gmail.com> wrote:
>>>>>>>
>>>>>>>>> Hello!
>>>>>>>>>
>>>>>>>>> I am working on a cluster with two Master/Slave instances.
>>>>>>>>>
>>>>>>>>> I have: 2 drbd Master/Slave instances, 1 pingd clone instance, and 2
>>>>>>>>> groups with Filesystem resources.
>>>>>>>>>
>>>>>>>>> ms-drbd0 is the first Master/Slave
>>>>>>>>> ms-drbd1 is the second Master/Slave
>>>>>>>>> mail_Group is the first group, it depends on ms-drbd0
>>>>>>>>> samba_Group is the second group, it depends on ms-drbd1
>>>>>>>>>
>>>>>>>>> I have the following rules:
>>>>>>>>>
>>>>>>>>> <rsc_order id="mail-drbd0_before_fs0" from="Montaxe_mail"
>>>>>>>>> action="start"
>>>>>>>>> to="ms-drbd0" to_action="promote"/>
>>>>>>>>> <rsc_order id="samba-drbd1_before_fs0" from="Montaxe_samba"
>>>>>>>>> action="start"
>>>>>>>>> to="ms-drbd1" to_action="promote"/>
>>>>>>>>> (start Montaxe_mail once ms-drbd0 has been promoted, start
>>>>>>>>> Montaxe_samba once ms-drbd1 has been promoted. These rules are OK,
>>>>>>>>> I think.)
>>>>>>>>>
>>>>>>>>> <rsc_colocation id="mail_Group_on_ms-drbd0" to="ms-drbd0"
>>>>>>>>> to_role="master"
>>>>>>>>> from="mail_Group" score="INFINITY"/>
>>>>>>>>> <rsc_colocation id="samba_Group_on_ms-drbd1" to="ms-drbd1"
>>>>>>>>> to_role="master" from="samba_Group" score="INFINITY"/>
>>>>>>>>> (Run mail_Group only on the node where ms-drbd0 is master; run
>>>>>>>>> samba_Group only on the node where ms-drbd1 is master.)
>>>>>>>>>
>>>>>>>>> <rsc_location id="mail:drbd" rsc="ms-drbd0">
>>>>>>>>> <rule id="rule:ms-drbd0" role="master" score="100">
>>>>>>>>>  <expression  attribute="#uname" operation="eq"
>>>>>>>>> value="debianquagga2"/>
>>>>>>>>> </rule>
>>>>>>>>> <rule id="mail_Group:pingd:rule" score="-INFINITY" boolean_op="or">
>>>>>>>>>  <expression id="mail_Group:pingd:expr:undefined" attribute="pingd"
>>>>>>>>> operation="not_defined"/>
>>>>>>>>>  <expression id="mail_Group:pingd:expr:zero" attribute="pingd"
>>>>>>>>> operation="lte" value="0"/>
>>>>>>>>> </rule>
>>>>>>>>> </rsc_location>
>>>>>>>>> <rsc_location id="samba:drbd" rsc="ms-drbd1">
>>>>>>>>> <rule id="rule:ms-drbd1" role="master" score="100">
>>>>>>>>>  <expression  attribute="#uname" operation="eq"
>>>>>>>>> value="debianquagga2"/>
>>>>>>>>> </rule>
>>>>>>>>> <rule id="samba_Group:pingd:rule" score="-INFINITY" boolean_op="or">
>>>>>>>>>  <expression id="samba_Group:pingd:expr:undefined" attribute="pingd"
>>>>>>>>> operation="not_defined"/>
>>>>>>>>>  <expression id="samba_Group:pingd:expr:zero" attribute="pingd"
>>>>>>>>> operation="lte" value="0"/>
>>>>>>>>> </rule>
>>>>>>>>> </rsc_location>
>>>>>>>>> (Select debianquagga2 as Master, and if the node loses its
>>>>>>>>> connectivity apply a score of -INFINITY to force a failover; this
>>>>>>>>> applies to ms-drbd0 and ms-drbd1.)
>>>>>>>>>
>>>>>>>>> With these rules everything is working very well, but the node
>>>>>>>>> selected as master isn't "debianquagga2". What could be the reason?
>>>>>>>>>
>>>>>>>>> I am using Heartbeat 2.1.4.
>>>>>>>>>
>>>>>>>> I have attached the CIB XML file. If I delete the two groups, the
>>>>>>>> Master is debianQuagga2; if not, the Master is debianQuagga1.
>>>>>>>>
>>>>>>> That probably has something to do with how scores are counted for
>>>>>>> groups. In your rsc_location rule for the masters you have a really
>>>>>>> low score for assigning the master role to debianquagga2. It's
>>>>>>> possible that the groups outscore it with their default values. I'm
>>>>>>> not sure about that; it's just my guess. You can probably check this
>>>>>>> with the show-scores scripts.
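>>>>>>>
>>>>>>> For example (writing from memory, so double-check the options: -L
>>>>>>> should read the live CIB and -s should print the allocation scores):
>>>>>>>
>>>>>>> # ptest -sL | grep ms-drbd0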
>>>>>>>
>>>>>> I will check that, thank you!
>>>>>>
>>>>>>> I'd try assigning the rsc_location rules to the groups, not to the
>>>>>>> master role. Your colocation rules will make sure the groups end up
>>>>>>> on the same nodes as the masters. Or you can try increasing your
>>>>>>> scores from 100 to something higher.
>>>>>>>
>>>>>> OK, but I need the node to become master before the group starts,
>>>>>> because the group depends on the master/slave resource. Is that
>>>>>> possible by changing the colocation? Could you open my eyes with a
>>>>>> simple example?
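>>>>>>
>>>>>> Something like this, maybe (just my guess at what you mean, using my
>>>>>> resource names; the ids are placeholders)?
>>>>>>
>>>>>> <rsc_location id="mail_Group_prefers_node2" rsc="mail_Group">
>>>>>>   <rule id="mail_Group_prefers_node2_rule" score="1000">
>>>>>>     <!-- preference on the group itself instead of the master role -->
>>>>>>     <expression id="mail_Group_prefers_node2_expr" attribute="#uname"
>>>>>> operation="eq" value="debianquagga2"/>
>>>>>>   </rule>
>>>>>> </rsc_location>
>>>>>>
>>>>>> together with my existing colocation and order rules?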