[ClusterLabs] stonith device locate on same host in active/passive cluster
Albert Weng
weng.albert at gmail.com
Wed May 3 22:04:22 EDT 2017
Hi Marek,
Thanks for your reply.
On Tue, May 2, 2017 at 5:15 PM, Marek Grac <mgrac at redhat.com> wrote:
>
>
> On Tue, May 2, 2017 at 11:02 AM, Albert Weng <weng.albert at gmail.com>
> wrote:
>
>>
>> Hi Marek,
>>
>> Thanks for your quick response.
>>
>> Following your reply, when I run "pcs status" I see the following
>> result for the fence devices:
>> ipmi-fence-node1 (stonith:fence_ipmilan): Started clusterb
>> ipmi-fence-node2 (stonith:fence_ipmilan): Started clusterb
>>
>> Does this mean both IPMI stonith devices are working correctly? (The rest
>> of the resources can fail over to the other node correctly.)
>>
>
> Yes, they are working correctly.
>
> When it becomes important to run a fence agent to kill the other node, it
> will be executed from the other node, so where the fence agent currently
> resides is not important.
>
> Does "Started <node>" mean that node is the one controlling the fence
behavior? Even if all fence agents and resources are started on the same node,
will the cluster fencing behavior still work correctly?
Thanks a lot.
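(For what it's worth, my understanding is that "Started <node>" only shows
where the stonith resource's monitor operation runs; which node a device can
fence comes from its pcmk_host_list. Assuming the device names used in this
thread, that mapping can be double-checked with:

# pcs stonith show --full

which should show pcmk_host_list="clustera" for ipmi-fence-node1 and
pcmk_host_list="clusterb" for ipmi-fence-node2, regardless of where the
resources are currently started.)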
> m,
>
>
>>
>> Should I use location constraints to keep the stonith devices from running
>> on the same node? For example:
>> # pcs constraint location ipmi-fence-node1 prefers clustera
>> # pcs constraint location ipmi-fence-node2 prefers clusterb
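(An alternative sketch, if the goal is to keep each fence device off the node
it is meant to fence rather than pinning it to that node; same resource and
node names as above:

# pcs constraint location ipmi-fence-node1 avoids clustera
# pcs constraint location ipmi-fence-node2 avoids clusterb

Either form appears optional here, since the cluster will pick a surviving
node to execute the fence action in any case.)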
>>
>> thanks a lot
>>
>> On Tue, May 2, 2017 at 4:25 PM, Marek Grac <mgrac at redhat.com> wrote:
>>
>>> Hi,
>>>
>>>
>>>
>>> On Tue, May 2, 2017 at 3:39 AM, Albert Weng <weng.albert at gmail.com>
>>> wrote:
>>>
>>>> Hi All,
>>>>
>>>> I have created an active/passive Pacemaker cluster on RHEL 7.
>>>>
>>>> Here is my environment:
>>>> clustera : 192.168.11.1
>>>> clusterb : 192.168.11.2
>>>> clustera-ilo4 : 192.168.11.10
>>>> clusterb-ilo4 : 192.168.11.11
>>>>
>>>> Both nodes are connected to SAN storage for shared storage.
>>>>
>>>> I used the following commands to create the stonith devices, one for each node:
>>>> # pcs -f stonith_cfg stonith create ipmi-fence-node1 fence_ipmilan
>>>> parms lanplus="ture" pcmk_host_list="clustera"
>>>> pcmk_host_check="static-list" action="reboot" ipaddr="192.168.11.10"
>>>> login=administrator passwd=1234322 op monitor interval=60s
>>>>
>>>> # pcs -f stonith_cfg stonith create ipmi-fence-node2 fence_ipmilan
>>>> parms lanplus="true" pcmk_host_list="clusterb"
>>>> pcmk_host_check="static-list" action="reboot" ipaddr="192.168.11.11"
>>>> login=USERID passwd=password op monitor interval=60s
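(One detail worth noting: since both devices are created with -f into the
stonith_cfg file, I assume the staged configuration still has to be pushed
into the live CIB afterwards, along the lines of:

# pcs cluster cib-push stonith_cfg

Until it is pushed, the devices would not appear in "pcs status".)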
>>>>
>>>> # pcs status
>>>> ipmi-fence-node1 clustera
>>>> ipmi-fence-node2 clusterb
>>>>
>>>> But after failing over to the passive node, I ran:
>>>> # pcs status
>>>>
>>>> ipmi-fence-node1 clusterb
>>>> ipmi-fence-node2 clusterb
>>>>
>>>> Why are both fence devices located on the same node?
>>>>
>>>
>>> When node 'clustera' is down, is there any place where ipmi-fence-node*
>>> can be executed?
>>>
>>> If you are worried that a node cannot fence itself, you are right.
>>> But once 'clustera' becomes available again, an attempt to fence clusterb
>>> will work as expected.
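(If it helps, this can be verified manually once both nodes are back online:
from the node that should survive, trigger a fence action against the other
node during a test window, e.g.

# pcs stonith fence clustera

run from clusterb should reboot clustera via ipmi-fence-node1, no matter which
node that stonith resource was "Started" on beforehand.)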
>>>
>>> m,
>>>
>>
>>
>> --
>> Kind regards,
>> Albert Weng
>>
>>
>>
>>
>
>
--
Kind regards,
Albert Weng