[Pacemaker] Re: Understanding rules: location, colocation, order. Using with Master/Slave

Adrian Chapela achapela.rexistros at gmail.com
Mon Oct 27 18:43:44 UTC 2008


Serge Dubrouski wrote:
> Something like this:
>
>       <rsc_order id="drbd0_before_myGroup" first="ms-drbd0"
> then="myGroup" then-action="start" first-action="promote"/>
>       <rsc_colocation id="myGroup_on_drbd0" rsc="myGroup"
> with-rsc="ms-drbd0" with-rsc-role="Master" score="INFINITY"/>
>       <rsc_location id="primNode" rsc="myGroup">
>         <rule id="prefered_primNode" score="1000">
>           <expression attribute="#uname" id="expression.id2242728"
> operation="eq" value="fc-node1"/>
>         </rule>
>       </rsc_location>
>
> See the cib.xml that I sent you a couple of days ago. The first rule will
> promote DRBD before starting the group, the second will colocate the master
> and the group, and the third one will place the group and the master on the
> desired node.
>   
Yes, I based my rules on yours, but I can't update my config with your 
rules directly.
<rsc_order id="drbd0_before_myGroup" first="ms-drbd0" then="mail_Group" 
then-action="start" first-action="promote"/>

This could be like:
<rsc_order id="drbd0_before_myGroup" from="mail_Group" action="start" 
to="ms-drbd0" to_action="promote"/>

What version of heartbeat are you using? My heartbeat 2.99.1 didn't 
understand the rule.
>
> On Mon, Oct 27, 2008 at 12:24 PM, Adrian Chapela
> <achapela.rexistros at gmail.com> wrote:
>   
>> Serge Dubrouski wrote:
>>     
>>> On Mon, Oct 27, 2008 at 12:07 PM, Adrian Chapela
>>> <achapela.rexistros at gmail.com> wrote:
>>>
>>>       
>>>>> Hello!
>>>>>
>>>>> I am working on a cluster with two Master/Slave instances.
>>>>>
>>>>> I have: 2 drbd Master/Slave instances, 1 pingd clone instance, and 1 group
>>>>> with a Filesystem resource.
>>>>>
>>>>> ms-drbd0 is the first Master/Slave
>>>>> ms-drbd1 is the second Master/Slave
>>>>> mail_Group is the first group, it depends on ms-drbd0
>>>>> samba_Group is the second group, it depends on ms-drbd1
>>>>>
>>>>> I have the following rules:
>>>>>
>>>>> <rsc_order id="mail-drbd0_before_fs0" from="Montaxe_mail" action="start"
>>>>> to="ms-drbd0" to_action="promote"/>
>>>>> <rsc_order id="samba-drbd1_before_fs0" from="Montaxe_samba"
>>>>> action="start"
>>>>> to="ms-drbd1" to_action="promote"/>
>>>>> (start Montaxe_mail when ms-drbd0 has been promoted, start Montaxe_samba
>>>>> when ms-drbd1 has been promoted. These rules are OK, I think.)
>>>>>
>>>>> <rsc_colocation id="mail_Group_on_ms-drbd0" to="ms-drbd0"
>>>>> to_role="master"
>>>>> from="mail_Group" score="INFINITY"/>
>>>>> <rsc_colocation id="samba_Group_on_ms-drbd1" to="ms-drbd1"
>>>>> to_role="master" from="samba_Group" score="INFINITY"/>
>>>>> (Run mail_Group only on the node where ms-drbd0 is master, and run
>>>>> samba_Group only on the node where ms-drbd1 is master.)
>>>>>
>>>>> <rsc_location id="mail:drbd" rsc="ms-drbd0">
>>>>> <rule id="rule:ms-drbd0" role="master" score="100">
>>>>>   <expression  attribute="#uname" operation="eq" value="debianquagga2"/>
>>>>> </rule>
>>>>> <rule id="mail_Group:pingd:rule" score="-INFINITY" boolean_op="or">
>>>>>    <expression id="mail_Group:pingd:expr:undefined" attribute="pingd"
>>>>> operation="not_defined"/>
>>>>>    <expression id="mail_Group:pingd:expr:zero" attribute="pingd"
>>>>> operation="lte" value="0"/>
>>>>> </rule>
>>>>> </rsc_location>
>>>>> <rsc_location id="samba:drbd" rsc="ms-drbd1">
>>>>> <rule id="rule:ms-drbd1" role="master" score="100">
>>>>>   <expression  attribute="#uname" operation="eq" value="debianquagga2"/>
>>>>> </rule>
>>>>> <rule id="samba_Group:pingd:rule" score="-INFINITY" boolean_op="or">
>>>>>    <expression id="samba_Group:pingd:expr:undefined" attribute="pingd"
>>>>> operation="not_defined"/>
>>>>>    <expression id="samba_Group:pingd:expr:zero" attribute="pingd"
>>>>> operation="lte" value="0"/>
>>>>> </rule>
>>>>> </rsc_location>
>>>>> (Select debianquagga2 as Master, and if the node loses its connection set
>>>>> the score to -INFINITY to trigger failover; this applies to ms-drbd0 and
>>>>> ms-drbd1.)
>>>>>
>>>>> With these rules everything is working very well, but the node selected as
>>>>> master isn't "debianquagga2". What could be the reason?
>>>>>
>>>>> I am using Heartbeat 2.1.4.
>>>>>
>>>>>
>>>>>           
>>>> I have attached the cib XML file. If I delete the two groups, the Master is
>>>> debianQuagga2; if not, the Master is debianQuagga1.
>>>>
>>>>         
>>> That probably has something to do with how scores are counted for
>>> groups. In your rsc_location rules for the masters you have a really low
>>> score for assigning the master role to debianquagga2. It's possible that
>>> the groups outscore it with their default values. I'm not sure about that;
>>> it's just my guess. You can probably check this with the show-scores scripts.
>>>
>>>       
>> I will check that, thank you!
>>     
>>> I'd try to assign the rsc_location rules to the groups, not to the master
>>> role. Your colocation rule will make sure that the groups stay on the same
>>> nodes as the masters. Or you can try to increase your scores from 100 to
>>> something higher.
>>>
>>>       
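
If I follow that suggestion, I suppose the location rule would move to the 
group and get a bigger score, maybe something like this (the ids and the 
10000 score are only example values):

<rsc_location id="mail_Group:location" rsc="mail_Group">
 <rule id="rule:mail_Group" score="10000">
   <expression attribute="#uname" operation="eq" value="debianquagga2"/>
 </rule>
</rsc_location>
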
>> OK, but I need the node to be master before the group starts, because the
>> group depends on the master/slave resource. Is that possible by changing the
>> colocation? Could you open my eyes with a simple example?
>>
>>     




