<div dir="ltr"><div><div><div>Hi Ken,<br><br></div>thank you for your comment.<br><br></div>i think this case can be closed, i use your suggestion of constraint and then problem resolved.<br><br></div>thanks a lot~~<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, May 4, 2017 at 10:28 PM, Ken Gaillot <span dir="ltr"><<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 05/03/2017 09:04 PM, Albert Weng wrote:<br>
> Hi Marek,
>
> Thanks for your reply.
>
> On Tue, May 2, 2017 at 5:15 PM, Marek Grac <mgrac@redhat.com> wrote:
>
>     On Tue, May 2, 2017 at 11:02 AM, Albert Weng <weng.albert@gmail.com> wrote:
>
>         Hi Marek,
>
>         Thanks for your quick response.
>
>         Following your advice, when I run "pcs status" I see the
>         following for the fence devices:
>         ipmi-fence-node1 (stonith:fence_ipmilan): Started clusterb
>         ipmi-fence-node2 (stonith:fence_ipmilan): Started clusterb
>
>         Does this mean both IPMI stonith devices are working correctly?
>         (The rest of the resources can fail over to the other node correctly.)
>
>     Yes, they are working correctly.
>
>     When it becomes important to run a fence agent to kill the other
>     node, it will be executed from the other node, so where the fence
>     agent currently resides is not important.
>
> Does "started on <node>" mean that node controls the fencing behavior?
> Even if all fence agents and resources are "started" on the same node,
> will the cluster's fencing still work correctly?
>
> Thanks a lot.
>
>     m,

Correct. Fencing is *executed* independently of where or even whether
fence devices are running. The node that is "running" a fence device
performs the recurring monitor on the device; that's the only real effect.
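
If you want to double-check that, you can ask the fencer from any node
which devices it is able to use against a given target, e.g. (using the
node names from your output):

stonith_admin --list clustera
stonith_admin --list clusterb

Either one should report the matching ipmi-fence-node* device, no matter
which node that device happens to be "running" on.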

> Should I use location constraints to avoid both stonith devices
> running on the same node? e.g.:
> # pcs constraint location ipmi-fence-node1 prefers clustera
> # pcs constraint location ipmi-fence-node2 prefers clusterb
>
> thanks a lot

It's a good idea, so that a node isn't monitoring its own fence device,
but that's the only reason -- it doesn't affect whether or how the node
can be fenced. I would configure it as an anti-location, e.g.

pcs constraint location ipmi-fence-node1 avoids node1=100

In a 2-node cluster, there's no real difference, but in a larger
cluster, it's the simplest config. I wouldn't use INFINITY (there's no
harm in a node monitoring its own fence device if it's the last node
standing), but I would use a score high enough to outweigh any stickiness.
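
For your two nodes that would be something like the following (100 is
just an example score; use anything comfortably higher than your
resource stickiness):

pcs constraint location ipmi-fence-node1 avoids clustera=100
pcs constraint location ipmi-fence-node2 avoids clusterb=100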

> On Tue, May 2, 2017 at 4:25 PM, Marek Grac <mgrac@redhat.com> wrote:
>
>     Hi,
>
>     On Tue, May 2, 2017 at 3:39 AM, Albert Weng <weng.albert@gmail.com> wrote:
>
>         Hi All,
>
>         I have created an active/passive Pacemaker cluster on RHEL 7.
>
>         Here is my environment:
>         clustera : 192.168.11.1
>         clusterb : 192.168.11.2
>         clustera-ilo4 : 192.168.11.10
>         clusterb-ilo4 : 192.168.11.11
>
>         Both nodes are connected to SAN storage for shared storage.
>
>         I used the following commands to create a stonith device for
>         each node:
>         # pcs -f stonith_cfg stonith create ipmi-fence-node1 \
>             fence_ipmilan lanplus="true" \
>             pcmk_host_list="clustera" pcmk_host_check="static-list" \
>             action="reboot" ipaddr="192.168.11.10" \
>             login=administrator passwd=1234322 op monitor interval=60s
>
>         # pcs -f stonith_cfg stonith create ipmi-fence-node2 \
>             fence_ipmilan lanplus="true" \
>             pcmk_host_list="clusterb" pcmk_host_check="static-list" \
>             action="reboot" ipaddr="192.168.11.11" login=USERID \
>             passwd=password op monitor interval=60s
>
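>         (Before checking status I pushed the staged file from -f above
>         to the live CIB, with something like:)
>         # pcs cluster cib-push stonith_cfg
>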
>         # pcs status
>         ipmi-fence-node1    clustera
>         ipmi-fence-node2    clusterb
>
>         But after I fail over to the passive node and run
>         # pcs status
>         again, I see:
>         ipmi-fence-node1    clusterb
>         ipmi-fence-node2    clusterb
>
>         Why are both fence devices located on the same node?
>
>
>     When node 'clustera' is down, is there anywhere else that
>     ipmi-fence-node* could be executed?
>
>     If you are worried that a node cannot fence itself, you are
>     right. But once 'clustera' becomes available again, an attempt
>     to fence clusterb will work as expected.
>
>     m,
>
> --
> Kind regards,
> Albert Weng
>
</span>> <<a href="https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail" rel="noreferrer" target="_blank">https://www.avast.com/sig-<wbr>email?utm_medium=email&utm_<wbr>source=link&utm_campaign=sig-<wbr>email&utm_content=webmail</a>><br>
> 不含病毒。<a href="http://www.avast.com" rel="noreferrer" target="_blank">www.avast.com</a><br>
> <<a href="https://www.avast.com/sig-email?utm_medium=email&utm_source=link&utm_campaign=sig-email&utm_content=webmail" rel="noreferrer" target="_blank">https://www.avast.com/sig-<wbr>email?utm_medium=email&utm_<wbr>source=link&utm_campaign=sig-<wbr>email&utm_content=webmail</a>><br>
<div class="HOEnZb"><div class="h5"><br>
_______________________________________________
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


--
Kind regards,
Albert Weng