[Pacemaker] Re: Understanding rules: location, colocation, order. Using with Master/Slave

Adrian Chapela achapela.rexistros at gmail.com
Wed Oct 29 07:38:26 UTC 2008


Serge Dubrouski wrote:
> I don't see any attachments :-)
>   
Excuse me :( I attached the config file.
> On Tue, Oct 28, 2008 at 12:31 PM, Adrian Chapela
> <achapela.rexistros at gmail.com> wrote:
>   
>> Serge Dubrouski wrote:
>>     
>>> On Tue, Oct 28, 2008 at 11:09 AM, Adrian Chapela
>>> <achapela.rexistros at gmail.com> wrote:
>>>
>>>       
>>>> Serge Dubrouski wrote:
>>>>
>>>>         
>>>>> On Tue, Oct 28, 2008 at 10:22 AM, Adrian Chapela
>>>>> <achapela.rexistros at gmail.com> wrote:
>>>>>
>>>>>
>>>>>           
>>>>>> Serge Dubrouski wrote:
>>>>>>
>>>>>>
>>>>>>             
>>>>>>> On Tue, Oct 28, 2008 at 6:29 AM, Adrian Chapela
>>>>>>> <achapela.rexistros at gmail.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>               
>>>>>>>> Serge Dubrouski wrote:
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>                 
>>>>>>>>> On Mon, Oct 27, 2008 at 12:43 PM, Adrian Chapela
>>>>>>>>> <achapela.rexistros at gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>                   
>>>>>>>>>> Serge Dubrouski wrote:
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>                     
>>>>>>>>>>> Something like this:
>>>>>>>>>>>
>>>>>>>>>>>  <rsc_order id="drbd0_before_myGroup" first="ms-drbd0"
>>>>>>>>>>> then="myGroup" then-action="start" first-action="promote"/>
>>>>>>>>>>>  <rsc_colocation id="myGroup_on_drbd0" rsc="myGroup"
>>>>>>>>>>> with-rsc="ms-drbd0" with-rsc-role="Master" score="INFINITY"/>
>>>>>>>>>>>  <rsc_location id="primNode" rsc="myGroup">
>>>>>>>>>>>    <rule id="prefered_primNode" score="1000">
>>>>>>>>>>>      <expression attribute="#uname" id="expression.id2242728"
>>>>>>>>>>> operation="eq" value="fc-node1"/>
>>>>>>>>>>>    </rule>
>>>>>>>>>>>  </rsc_location>
>>>>>>>>>>>
>>>>>>>>>>> See the cib.xml that I sent you a couple of days ago. The first rule
>>>>>>>>>>> will promote DRBD before starting the group, the second will colocate
>>>>>>>>>>> the master and the group, and the third will place the group and
>>>>>>>>>>> master on the desired node.
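
For context, the ms-drbd0 referenced by these constraints is a master/slave
wrapper around the ocf::heartbeat:drbd agent (as the crm_mon output below
shows). A minimal 1.0-style sketch of how such a resource might be defined;
the ids, meta attributes and monitor interval here are illustrative, not
taken from the attached cib.xml:

  <master id="ms-drbd0">
    <meta_attributes id="ms-drbd0-meta">
      <nvpair id="ms-drbd0-master-max" name="master-max" value="1"/>
      <nvpair id="ms-drbd0-clone-max" name="clone-max" value="2"/>
      <nvpair id="ms-drbd0-notify" name="notify" value="true"/>
    </meta_attributes>
    <primitive id="drbd0" class="ocf" provider="heartbeat" type="drbd">
      <instance_attributes id="drbd0-params">
        <!-- drbd_resource must match the resource name in drbd.conf -->
        <nvpair id="drbd0-resource" name="drbd_resource" value="drbd0"/>
      </instance_attributes>
      <operations>
        <op id="drbd0-monitor" name="monitor" interval="59s" role="Master"/>
      </operations>
    </primitive>
  </master>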
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>                       
>>>>>>>>>> Yes, I based my rules on yours, but I can't update my config with
>>>>>>>>>> your rules directly.
>>>>>>>>>> <rsc_order id="drbd0_before_myGroup" first="ms-drbd0"
>>>>>>>>>> then="mail_Group"
>>>>>>>>>> then-action="start" first-action="promote"/>
>>>>>>>>>>
>>>>>>>>>> This could be like:
>>>>>>>>>> <rsc_order id="drbd0_before_myGroup" from="mail_Group"
>>>>>>>>>> action="start"
>>>>>>>>>> to="ms-drbd0" to_action="promote"/>
>>>>>>>>>>
>>>>>>>>>> What is the version of your Heartbeat? My 2.99.1 Heartbeat didn't
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>                     
>>>>>>>>> It's processed by Pacemaker, not Heartbeat. That rule worked all
>>>>>>>>> right under 0.6, 0.7 and 1.0:
>>>>>>>>>
>>>>>>>>> Refresh in 3s...
>>>>>>>>>
>>>>>>>>> ============
>>>>>>>>> Last updated: Mon Oct 27 14:52:47 2008
>>>>>>>>> Current DC: fc-node2 (ad6f19b7-228a-48b7-bae0-f95a838bde2a)
>>>>>>>>> 2 Nodes configured.
>>>>>>>>> 3 Resources configured.
>>>>>>>>> ============
>>>>>>>>>
>>>>>>>>> Node: fc-node1 (b88f98c6-50f2-463a-a6eb-51abbec645a9): online
>>>>>>>>> Node: fc-node2 (ad6f19b7-228a-48b7-bae0-f95a838bde2a): online
>>>>>>>>>
>>>>>>>>> Full list of resources:
>>>>>>>>>
>>>>>>>>> Clone Set: DoFencing
>>>>>>>>>  child_DoFencing:0   (stonith:external/xen0):        Started fc-node1
>>>>>>>>>  child_DoFencing:1   (stonith:external/xen0):        Started fc-node2
>>>>>>>>> Master/Slave Set: ms-drbd0
>>>>>>>>>  drbd0:0     (ocf::heartbeat:drbd):  Master fc-node1
>>>>>>>>>  drbd0:1     (ocf::heartbeat:drbd):  Started fc-node2
>>>>>>>>> Resource Group: myGroup
>>>>>>>>>  myIP        (ocf::heartbeat:IPaddr):        Started fc-node1
>>>>>>>>>  fs0 (ocf::heartbeat:Filesystem):    Started fc-node1
>>>>>>>>>  myPgsql     (ocf::heartbeat:pgsql): Started fc-node1
>>>>>>>>>
>>>>>>>>> [root at fc-node1 crm]# rpm -qa | grep pacemaker
>>>>>>>>> libpacemaker3-1.0.0-2.1
>>>>>>>>> pacemaker-1.0.0-2.1
>>>>>>>>> [root at fc-node1 crm]#
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> What error do you get?
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>                   
>>>>>>>> I can update the configuration now. I have Heartbeat 2.99.1 +
>>>>>>>> Pacemaker 1.0. But now I can't put the Master on the node. Have you
>>>>>>>> checked fc-node2 as a master node?
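
If the goal is to force the DRBD master role itself onto a particular node
(rather than moving it indirectly through the group colocation), one option
is a role-scoped location rule on ms-drbd0. A sketch, with illustrative ids
and score:

  <rsc_location id="ms-drbd0-master-location" rsc="ms-drbd0">
    <!-- the rule only applies to the Master role -->
    <rule id="ms-drbd0-master-rule" role="Master" score="100">
      <expression id="ms-drbd0-master-expr" attribute="#uname"
                  operation="eq" value="fc-node2"/>
    </rule>
  </rsc_location>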
>>>>>>>>
>>>>>>>> How are you using pingd? As a clone instance, or configured in ha.cf?
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>>                 
>>>>>>> pingd is broken in 1.0 :-( Andrew fixed it in the latest dev release.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>>               
>>>>>> OK, but Andrew told me this morning that the bug was fixed in the latest
>>>>>> stable code. I have downloaded Pacemaker-1-0-79d2ba7e502f, but pingd
>>>>>> seems broken as well. If I delete the pingd rule the group starts, but
>>>>>> if I don't delete the rule the group does not start.
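
For reference, the usual way to run pingd under the CRM (rather than via the
ha.cf pingd directive) is an ocf:pacemaker:pingd clone plus a -INFINITY
location rule on the group; whether it behaves depends on the pingd fix
discussed above. A minimal sketch, with the ping target, ids and interval as
placeholders:

  <clone id="pingd-clone">
    <primitive id="pingd" class="ocf" provider="pacemaker" type="pingd">
      <instance_attributes id="pingd-params">
        <nvpair id="pingd-host-list" name="host_list" value="192.168.1.1"/>
        <nvpair id="pingd-multiplier" name="multiplier" value="100"/>
      </instance_attributes>
      <operations>
        <op id="pingd-monitor" name="monitor" interval="15s"/>
      </operations>
    </primitive>
  </clone>

  <!-- keep mail_Group off any node where pingd is unset or reports no connectivity -->
  <rsc_location id="mail_Group-connected" rsc="mail_Group">
    <rule id="mail_Group-connected-rule" score="-INFINITY" boolean-op="or">
      <expression id="pingd-not-defined" attribute="pingd" operation="not_defined"/>
      <expression id="pingd-lte-0" attribute="pingd" operation="lte" value="0"/>
    </rule>
  </rsc_location>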
>>>>>>
>>>>>> Another thing is the master selection: it is wrong in my case, even
>>>>>> using your config, Serge. Could you try changing the master node from
>>>>>> fc-node1 to fc-node2?
>>>>>>
>>>>>> Thank you!
>>>>>>
>>>>>>
>>>>>>             
>>>>> I have no problems with moving the master to fc-node2. Just a simple
>>>>> change in the prefNode rsc_location rule:
>>>>>
>>>>>     <rsc_location id="primNode" rsc="myGroup">
>>>>>       <rule id="prefered_primNode" score="1000">
>>>>>         <expression attribute="#uname" id="prefNode" operation="eq"
>>>>>                     value="fc-node2"/>
>>>>>       </rule>
>>>>>     </rsc_location>
>>>>>
>>>>>
>>>>>           
>>>> OK, I understand the rule, but is it working for you? For me it is not
>>>> working.
>>>>
>>>>         
>>> Yes it does:
>>>
>>> Refresh in 13s...
>>>
>>> ============
>>> Last updated: Tue Oct 28 13:15:53 2008
>>> Current DC: fc-node2 (ad6f19b7-228a-48b7-bae0-f95a838bde2a)
>>> 2 Nodes configured.
>>> 3 Resources configured.
>>> ============
>>>
>>> Node: fc-node1 (b88f98c6-50f2-463a-a6eb-51abbec645a9): online
>>> Node: fc-node2 (ad6f19b7-228a-48b7-bae0-f95a838bde2a): online
>>>
>>> Clone Set: DoFencing
>>>    child_DoFencing:0   (stonith:external/xen0):        Started fc-node1
>>>    child_DoFencing:1   (stonith:external/xen0):        Started fc-node2
>>> Master/Slave Set: ms-drbd0
>>>    drbd0:0     (ocf::heartbeat:drbd):  Started fc-node1
>>>    drbd0:1     (ocf::heartbeat:drbd):  Master fc-node2
>>> Resource Group: myGroup
>>>    myIP        (ocf::heartbeat:IPaddr):        Started fc-node2
>>>    fs0 (ocf::heartbeat:Filesystem):    Started fc-node2
>>>    myPgsql     (ocf::heartbeat:pgsql): Started fc-node2
>>>
>>> Can you show your cib.xml once more? BTW, are you sure that your apps
>>> can start on that second node? What's in the log file?
>>>
>>>       
>> Yes, all apps are OK. pingd is not working, and the Master node isn't the
>> one I selected. I attached my config but I don't see any error... I feel
>> very silly... I don't know what the reason is...
>>
>> I compiled this:
>> http://hg.clusterlabs.org/pacemaker/stable-1.0/archive/c83061c2f931.tar.bz2
>> and http://hg.clusterlabs.org/pacemaker/stable-1.0/archive/tip.tar.bz2, with
>> the same result, and also this:
>> http://hg.linux-ha.org/dev/archive/4bbe943cf36c.tar.bz2
>>
>> What packages do I need?
>>     
>>>       
>>>>
>>>>
>>>>         
>>>
>>>
>>>       
>>
>>     
>
>
>
>   

-------------- next part --------------
A non-text attachment was scrubbed...
Name: last.xml
Type: text/xml
Size: 3810 bytes
Desc: not available
URL: <https://lists.clusterlabs.org/pipermail/pacemaker/attachments/20081029/de5c7ee8/attachment-0002.xml>

