[Pacemaker] Multi State Resources

Adrian Chapela achapela.rexistros at gmail.com
Tue Dec 30 04:31:17 EST 2008


Dejan Muhamedagic wrote:
> On Tue, Dec 30, 2008 at 09:58:18AM +0100, Adrian Chapela wrote:
>   
>> Dejan Muhamedagic wrote:
>>     
>>> Hi,
>>>
>>> On Mon, Dec 29, 2008 at 01:13:33PM +0100, Adrian Chapela wrote:
>>>
>>>> Hello,
>>>>
>>>> I have one multi-state resource and I want to allow it to start on one
>>>> node and then on the other.
>>>>
>>>> In the normal situation we have two nodes starting at the same time. With
>>>> the constraints, Heartbeat will decide the best node on which to run all
>>>> the resources.
>>>>
>>>> But in another situation only one node might start. In that case we need
>>>> to start all the resources on that node.
>>>>
>>>> How can I allow this?
>>>>
>>> What's there to permit? I suppose that in this case there will be
>>> only the master instance of the resource running. Or did I
>>> misunderstand your question?
>>>
>> No, you understood my question, but I can't achieve that with my config
>> file and I don't know why.
>>
>> Do you have any tips?
>>     
>
> Your config file looks fine to me. The location preference of 50
> for node2 is not necessary. Also, did you check that the pingd
> attribute is updated in the cib and that its value is >=1000?
> Other than that, one can't say what's going on without looking at
> the logs.
>   
I thought about that. My first idea was that the pingd clone was not
started and that this could be the problem, but it wasn't. I manually
updated the pingd value to 5000 (the normal value in my configuration)
and the master/slave resource still didn't start, so I suspect a
different problem.
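
If it helps, this is roughly what the attribute should look like in the
status section of the CIB (cibadmin -Q -o status) once attrd has pushed
it; the ids below are only illustrative:

  <node_state uname="node1" ...>
    <transient_attributes id="ea8af0a5-d8f2-41e7-a861-83236d53689f">
      <instance_attributes id="status-ea8af0a5-d8f2-41e7-a861-83236d53689f">
        <nvpair id="status-node1-pingd" name="pingd" value="5000"/>
      </instance_attributes>
    </transient_attributes>
  </node_state>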

OK, I will send the logs. Which logs do you need? An hb_report, maybe?
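
In case it is relevant: my connectivity constraint only matches when the
pingd attribute is defined and below 1000. The variant shown in the
Pacemaker documentation also covers the case where the attribute is not
defined at all (e.g. the pingd clone never started), roughly like this
(ids illustrative):

  <rsc_location id="mysql-connectivity" rsc="MySQL">
    <rule id="mysql-pingd-prefer-rule" role="Master" score="-INFINITY" boolean-op="or">
      <expression id="mysql-pingd-undefined" attribute="pingd" operation="not_defined"/>
      <expression id="mysql-pingd-prefer" attribute="pingd" operation="lt" value="1000"/>
    </rule>
  </rsc_location>
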
> Thanks,
>
> Dejan
>
>   
>>> Thanks,
>>>
>>> Dejan
>>>
>>>
>>>> I can't use globally-unique as an option of the multi-state resource.
>>>> Could "ordered" be an option?
>>>>
>>>> I have attached my config file.
>>>>
>>>> Could you have a look ?
>>>>
>>>> Thank you!
>>>>
>>>> <configuration>
>>>>     <crm_config>
>>>>       <cluster_property_set id="cib-bootstrap-options">
>>>>         <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.1-node: b2e38c67d01ed1571259f74f51ee101bdcf54226"/>
>>>>         <nvpair id="cib-bootstrap-options-default-resource-failure-stickiness" name="default-resource-failure-stickiness" value="-INFINITY"/>
>>>>         <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="INFINITY"/>
>>>>         <nvpair id="cib-bootstrap-options-stonith-action" name="stonith-action" value="poweroff"/>
>>>>         <nvpair id="cib-bootstrap-options-symmetric-cluster" name="symmetric-cluster" value="true"/>
>>>>         <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
>>>>       </cluster_property_set>
>>>>     </crm_config>
>>>>     <nodes>
>>>>       <node id="ea8af0a5-d8f2-41e7-a861-83236d53689f" uname="node1" type="normal"/>
>>>>       <node id="016a6a9d-898c-4606-aa59-00e64ecaab86" uname="node2" type="normal"/>
>>>>     </nodes>
>>>>     <resources>
>>>>       <master id="MySQL">
>>>>         <meta_attributes id="MySQL-meta">
>>>>           <nvpair id="MySQL-meta-1" name="clone_max" value="2"/>
>>>>           <nvpair id="MySQL-meta-2" name="clone_node_max" value="1"/>
>>>>           <nvpair id="MySQL-meta-3" name="master_max" value="1"/>
>>>>           <nvpair id="MySQL-meta-4" name="master_node_max" value="1"/>
>>>>           <nvpair id="MySQL-meta-6" name="globally-unique" value="false"/>
>>>>         </meta_attributes>
>>>>         <primitive id="MySQL-primitive" class="ocf" provider="heartbeat" type="mysql_slave_master">
>>>>           <operations>
>>>>             <op id="MySQL-op-1" name="start" interval="0s" timeout="300s"/>
>>>>             <op id="MySQL-op-2" name="stop" interval="0s" timeout="900s" on-fail="fence"/>
>>>>             <op id="MySQL-op-3" name="monitor" interval="59s" timeout="60s" role="Master" on-fail="fence"/>
>>>>             <op id="MySQL-op-4" name="monitor" interval="60s" timeout="60s" role="Slave" on-fail="fence"/>
>>>>           </operations>
>>>>         </primitive>
>>>>       </master>
>>>>       <group id="IP_Group">
>>>>         <primitive class="ocf" id="IPaddr-1" provider="heartbeat" type="IPaddr">
>>>>           <operations>
>>>>             <op id="IPaddr-1-op-monitor" interval="5s" name="monitor" timeout="5s"/>
>>>>             <op id="IPaddr-1-op-start" name="start" interval="0s" timeout="5s"/>
>>>>             <op id="IPaddr-1-op-stop" name="stop" interval="0s" timeout="5s"/>
>>>>           </operations>
>>>>           <instance_attributes id="IPaddr-1-ia">
>>>>             <nvpair id="IPaddr-1-IP" name="ip" value="192.168.18.24"/>
>>>>             <nvpair id="IPaddr-1-netmask" name="netmask" value="24"/>
>>>>             <nvpair id="IPaddr-1-gw" name="gw" value="192.168.18.254"/>
>>>>             <nvpair id="IPaddr-1-nic" name="nic" value="eth0"/>
>>>>           </instance_attributes>
>>>>         </primitive>
>>>>       </group>
>>>>       <clone id="pingd-clone">
>>>>         <primitive id="pingd" provider="heartbeat" class="ocf" type="pingd">
>>>>           <instance_attributes id="pingd-attrs">
>>>>             <nvpair id="pingd-dampen" name="dampen" value="5s"/>
>>>>             <nvpair id="pingd-multiplier" name="multiplier" value="1000"/>
>>>>             <nvpair id="pingd-hosts" name="host_list" value="192.168.18.210 192.168.18.254 192.168.18.253 192.168.18.200 192.168.18.201"/>
>>>>           </instance_attributes>
>>>>           <operations>
>>>>             <op id="pingd-clone-monitor" name="monitor" interval="5s" timeout="20s"/>
>>>>             <op id="pingd-clone-start" name="start" interval="0" timeout="20s"/>
>>>>           </operations>
>>>>         </primitive>
>>>>       </clone>
>>>>       <clone id="DoFencing">
>>>>         <meta_attributes id="DoFencing-meta">
>>>>           <nvpair id="DoFencing-meta-1" name="clone_max" value="2"/>
>>>>           <nvpair id="DoFencing-meta-2" name="clone_node_max" value="1"/>
>>>>         </meta_attributes>
>>>>         <primitive id="ssh-stonith" class="stonith" type="ssh">
>>>>           <instance_attributes id="ssh-stonith-attributes">
>>>>             <nvpair id="ssh-stonith-hostlist" name="hostlist" value="node1_backup node2_backup"/>
>>>>           </instance_attributes>
>>>>           <operations>
>>>>             <op id="DoFencing-monitor" name="monitor" interval="5s" timeout="20s"/>
>>>>             <op id="DoFencing-start" name="start" interval="0" timeout="20s"/>
>>>>           </operations>
>>>>         </primitive>
>>>>       </clone>
>>>>     </resources>
>>>>     <constraints>
>>>>       <rsc_order id="MySQL-IP_Group" first="MySQL" first-action="promote" then="IP_Group" then-action="start"/>
>>>>       <rsc_colocation id="IP_Group-with-MySQL" rsc="IP_Group" with-rsc="MySQL" with-rsc-role="Master" score="INFINITY"/>
>>>>       <rsc_location id="loca_MySQL_node1" rsc="MySQL">
>>>>         <rule id="rule_loc_MySQL_node1" role="Master" score="100">
>>>>           <expression id="exp_rule_MySQL_node1" attribute="#uname" operation="eq" value="node1"/>
>>>>         </rule>
>>>>       </rsc_location>
>>>>       <rsc_location id="loca_MySQL_node2" rsc="MySQL">
>>>>         <rule id="rule_loc_MySQL_node2" role="Master" score="50">
>>>>           <expression id="exp_rule_MySQL_node2" attribute="#uname" operation="eq" value="node2"/>
>>>>         </rule>
>>>>       </rsc_location>
>>>>       <rsc_location id="mysql-connectivity" rsc="MySQL">
>>>>         <rule id="mysql-pingd-prefer-rule" score="-INFINITY" role="Master">
>>>>           <expression id="mysql-pingd-prefer" attribute="pingd" operation="lt" value="1000"/>
>>>>         </rule>
>>>>       </rsc_location>
>>>>     </constraints>
>>>>   </configuration>