[Pacemaker] Multi State Resources
Dejan Muhamedagic
dejanmm at fastmail.fm
Tue Dec 30 19:22:52 UTC 2008
On Tue, Dec 30, 2008 at 04:28:49PM +0100, Adrian Chapela wrote:
> Dejan Muhamedagic wrote:
>> Hi,
>>
>> On Tue, Dec 30, 2008 at 11:56:56AM +0100, Adrian Chapela wrote:
>>
>>> Dejan Muhamedagic wrote:
>>>
>>>> On Tue, Dec 30, 2008 at 10:31:17AM +0100, Adrian Chapela wrote:
>>>>
>>>>> Dejan Muhamedagic wrote:
>>>>>
>>>>>> On Tue, Dec 30, 2008 at 09:58:18AM +0100, Adrian Chapela wrote:
>>>>>>
>>>>>>> Dejan Muhamedagic wrote:
>>>>>>>
>>>>>>>> Hi,
>>>>>>>>
>>>>>>>> On Mon, Dec 29, 2008 at 01:13:33PM +0100, Adrian Chapela wrote:
>>>>>>>>
>>>>>>>>> Hello,
>>>>>>>>>
>>>>>>>>> I have one multi-state resource and I want to allow it to start on
>>>>>>>>> one node and then on the other.
>>>>>>>>>
>>>>>>>>> In a normal situation we will have two nodes starting at the same time.
>>>>>>>>> With the constraints, Heartbeat will decide the best node to run all
>>>>>>>>> resources on.
>>>>>>>>>
>>>>>>>>> But in another situation we could have only one node starting. In
>>>>>>>>> that case we need to start all resources on this node.
>>>>>>>>>
>>>>>>>>> How can I allow this situation?
>>>>>>>>>
>>>>>>>> What's there to permit? I suppose that in this case there will be
>>>>>>>> only the master instance of the resource running. Or did I
>>>>>>>> misunderstand your question?
>>>>>>>>
>>>>>>> No, you understood my question, but I can't achieve that with my
>>>>>>> config file and I don't know why.
>>>>>>>
>>>>>>> Do you have any tips?
>>>>>>>
>>>>>> Your config file looks fine to me. The location preference of 50
>>>>>> for node2 is not necessary. Also, did you check that the pingd
>>>>>> attribute is updated in the cib and that its value is >=1000?
>>>>>> Other than that, one can't say what's going on without looking at
>>>>>> the logs.
>>>>>>
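(For reference: with the multiplier of 1000 and the five ping hosts in the
configuration quoted below, full connectivity should show up as a transient
pingd attribute of 5000 in the CIB status section, roughly like this; the
ids and exact nesting shown here are only illustrative and vary by version:

  <transient_attributes id="ea8af0a5-d8f2-41e7-a861-83236d53689f">
    <instance_attributes id="status-node1-attrs">
      <nvpair id="status-node1-pingd" name="pingd" value="5000"/>
    </instance_attributes>
  </transient_attributes>

Any value below 1000 triggers the -INFINITY rule in the mysql-connectivity
constraint and keeps the Master role off that node.)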
>>>>> I thought about that. My first thought was that the pingd clone was not
>>>>> started and that this could be the problem, but it wasn't. I manually
>>>>> updated the value to 5000 (the normal value in my configuration) and the
>>>>> master/slave resource still didn't start, so I think it's another problem.
>>>>>
>>>>> OK, I will send the logs. Which logs do you need? hb_report maybe?
>>>>>
>>>> hb_report would be the best.
>>>>
>>> OK, I have uploaded the hb_report.
>>>
>>
>> node2 wants to fence node1 since startup-fencing is by default
>> true. All other actions are waiting for that. But node1 can't be
>> fenced, because there's no stonith resource which has it in its
>> hostlist. This is wrong:
>>
>> <nvpair id="ssh-stonith-hostlist" name="hostlist" value="node1_backup node2_backup"/>
>>
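(As an aside: startup-fencing, mentioned above, is an ordinary cluster
property. If it ever needed to be set explicitly, it would go into
cib-bootstrap-options next to the other options; shown here at its default
of true, with an illustrative id:

  <nvpair id="cib-bootstrap-options-startup-fencing" name="startup-fencing" value="true"/>

The actual fix discussed here is the hostlist, not this property.)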
>> BTW, I hope you know that it is also a bad idea to use ssh stonith
>> for anything but testing.
>>
> Yes, I know that, but I wanted to try to use a "second interface", so I
> put the names of those second interfaces in the host list.
It can't work that way. The cluster looks at the hostlist to know
which nodes a stonith resource can fence, so it should contain
the node names. How a stonith agent actually reaches a node is a
different matter and can't be controlled this way.
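A minimal sketch of the corrected entry, using the node unames from the
<nodes> section of the attached configuration:

  <nvpair id="ssh-stonith-hostlist" name="hostlist" value="node1 node2"/>

How the ssh agent then reaches each node is, as noted above, a separate
matter.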
> But I am thinking of using ssh with bonding on the two servers. Could that
> be a good way to get reliable stonith?
Better than a single interface, but still not good enough for
production.
Thanks,
Dejan
>>>>>>>>> I can't use globally-unique as an option of the multi-state resource.
>>>>>>>>> Could "ordered" be an option?
>>>>>>>>>
>>>>>>>>> I have attached my config file.
>>>>>>>>>
>>>>>>>>> Could you have a look?
>>>>>>>>>
>>>>>>>>> Thank you!
>>>>>>>>>
>>>>>>>>
>>>>>>>>> <configuration>
>>>>>>>>> <crm_config>
>>>>>>>>> <cluster_property_set id="cib-bootstrap-options">
>>>>>>>>> <nvpair id="cib-bootstrap-options-dc-version" name="dc-version" value="1.0.1-node: b2e38c67d01ed1571259f74f51ee101bdcf54226"/>
>>>>>>>>> <nvpair id="cib-bootstrap-options-default-resource-failure-stickiness" name="default-resource-failure-stickiness" value="-INFINITY"/>
>>>>>>>>> <nvpair id="cib-bootstrap-options-default-resource-stickiness" name="default-resource-stickiness" value="INFINITY"/>
>>>>>>>>> <nvpair id="cib-bootstrap-options-stonith-action" name="stonith-action" value="poweroff"/>
>>>>>>>>> <nvpair id="cib-bootstrap-options-symmetric-cluster" name="symmetric-cluster" value="true"/>
>>>>>>>>> <nvpair id="cib-bootstrap-options-no-quorum-policy" name="no-quorum-policy" value="ignore"/>
>>>>>>>>> </cluster_property_set>
>>>>>>>>> </crm_config>
>>>>>>>>> <nodes>
>>>>>>>>> <node id="ea8af0a5-d8f2-41e7-a861-83236d53689f" uname="node1" type="normal"/>
>>>>>>>>> <node id="016a6a9d-898c-4606-aa59-00e64ecaab86" uname="node2" type="normal"/>
>>>>>>>>> </nodes>
>>>>>>>>> <resources>
>>>>>>>>> <master id="MySQL">
>>>>>>>>> <meta_attributes id="MySQL-meta">
>>>>>>>>> <nvpair id="MySQL-meta-1" name="clone_max" value="2"/>
>>>>>>>>> <nvpair id="MySQL-meta-2" name="clone_node_max" value="1"/>
>>>>>>>>> <nvpair id="MySQL-meta-3" name="master_max" value="1"/>
>>>>>>>>> <nvpair id="MySQL-meta-4" name="master_node_max" value="1"/>
>>>>>>>>> <nvpair id="MySQL-meta-6" name="globally-unique" value="false"/>
>>>>>>>>> </meta_attributes>
>>>>>>>>> <primitive id="MySQL-primitive" class="ocf" provider="heartbeat" type="mysql_slave_master">
>>>>>>>>> <operations>
>>>>>>>>> <op id="MySQL-op-1" name="start" interval="0s" timeout="300s"/>
>>>>>>>>> <op id="MySQL-op-2" name="stop" interval="0s" timeout="900s" on-fail="fence"/>
>>>>>>>>> <op id="MySQL-op-3" name="monitor" interval="59s" timeout="60s" role="Master" on-fail="fence"/>
>>>>>>>>> <op id="MySQL-op-4" name="monitor" interval="60s" timeout="60s" role="Slave" on-fail="fence"/>
>>>>>>>>> </operations>
>>>>>>>>> </primitive>
>>>>>>>>> </master>
>>>>>>>>> <group id="IP_Group">
>>>>>>>>> <primitive class="ocf" id="IPaddr-1" provider="heartbeat" type="IPaddr">
>>>>>>>>> <operations>
>>>>>>>>> <op id="IPaddr-1-op-monitor" interval="5s" name="monitor" timeout="5s"/>
>>>>>>>>> <op id="IPaddr-1-op-start" name="start" interval="0s" timeout="5s"/>
>>>>>>>>> <op id="IPaddr-1-op-stop" name="stop" interval="0s" timeout="5s"/>
>>>>>>>>> </operations>
>>>>>>>>> <instance_attributes id="IPaddr-1-ia">
>>>>>>>>> <nvpair id="IPaddr-1-IP" name="ip" value="192.168.18.24"/>
>>>>>>>>> <nvpair id="IPaddr-1-netmask" name="netmask" value="24"/>
>>>>>>>>> <nvpair id="IPaddr-1-gw" name="gw" value="192.168.18.254"/>
>>>>>>>>> <nvpair id="IPaddr-1-nic" name="nic" value="eth0"/>
>>>>>>>>> </instance_attributes>
>>>>>>>>> </primitive>
>>>>>>>>> </group>
>>>>>>>>> <clone id="pingd-clone">
>>>>>>>>> <primitive id="pingd" provider="heartbeat" class="ocf" type="pingd">
>>>>>>>>> <instance_attributes id="pingd-attrs">
>>>>>>>>> <nvpair id="pingd-dampen" name="dampen" value="5s"/>
>>>>>>>>> <nvpair id="pingd-multiplier" name="multiplier" value="1000"/>
>>>>>>>>> <nvpair id="pingd-hosts" name="host_list" value="192.168.18.210 192.168.18.254 192.168.18.253 192.168.18.200 192.168.18.201"/>
>>>>>>>>> </instance_attributes>
>>>>>>>>> <operations>
>>>>>>>>> <op id="pingd-clone-monitor" name="monitor" interval="5s" timeout="20s"/>
>>>>>>>>> <op id="pingd-clone-start" name="start" interval="0" timeout="20s"/>
>>>>>>>>> </operations>
>>>>>>>>> </primitive>
>>>>>>>>> </clone>
>>>>>>>>> <clone id="DoFencing">
>>>>>>>>> <meta_attributes id="DoFencing-meta">
>>>>>>>>> <nvpair id="DoFencing-meta-1" name="clone_max" value="2"/>
>>>>>>>>> <nvpair id="DoFencing-meta-2" name="clone_node_max" value="1"/>
>>>>>>>>> </meta_attributes>
>>>>>>>>> <primitive id="ssh-stonith" class="stonith" type="ssh">
>>>>>>>>> <instance_attributes id="ssh-stonith-attributes">
>>>>>>>>> <nvpair id="ssh-stonith-hostlist" name="hostlist" value="node1_backup node2_backup"/>
>>>>>>>>> </instance_attributes>
>>>>>>>>> <operations>
>>>>>>>>> <op id="DoFencing-monitor" name="monitor" interval="5s" timeout="20s"/>
>>>>>>>>> <op id="DoFencing-start" name="start" interval="0" timeout="20s"/>
>>>>>>>>> </operations>
>>>>>>>>> </primitive>
>>>>>>>>> </clone>
>>>>>>>>> </resources>
>>>>>>>>> <constraints>
>>>>>>>>> <rsc_order id="MySQL-IP_Group" first="MySQL" first-action="promote" then="IP_Group" then-action="start"/>
>>>>>>>>> <rsc_colocation id="IP_Group-with-MySQL" rsc="IP_Group" with-rsc="MySQL" with-rsc-role="Master" score="INFINITY"/>
>>>>>>>>> <rsc_location id="loca_MySQL_node1" rsc="MySQL">
>>>>>>>>> <rule id="rule_loc_MySQL_node1" role="Master" score="100">
>>>>>>>>> <expression id="exp_rule_MySQL_node1" attribute="#uname" operation="eq" value="node1"/>
>>>>>>>>> </rule>
>>>>>>>>> </rsc_location>
>>>>>>>>> <rsc_location id="loca_MySQL_node2" rsc="MySQL">
>>>>>>>>> <rule id="rule_loc_MySQL_node2" role="Master" score="50">
>>>>>>>>> <expression id="exp_rule_MySQL_node2" attribute="#uname" operation="eq" value="node2"/>
>>>>>>>>> </rule>
>>>>>>>>> </rsc_location>
>>>>>>>>> <rsc_location id="mysql-connectivity" rsc="MySQL">
>>>>>>>>> <rule id="mysql-pingd-prefer-rule" score="-INFINITY" role="Master">
>>>>>>>>> <expression id="mysql-pingd-prefer" attribute="pingd" operation="lt" value="1000"/>
>>>>>>>>> </rule>
>>>>>>>>> </rsc_location>
>>>>>>>>> </constraints>
>>>>>>>>> </configuration>