<div dir="ltr"><div>Hi Marek,<br><br></div>Thanks your reply.<br><div><div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, May 2, 2017 at 5:15 PM, Marek Grac <span dir="ltr"><<a href="mailto:mgrac@redhat.com" target="_blank">mgrac@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><br><div class="gmail_extra"><br><div class="gmail_quote"><span class="gmail-">On Tue, May 2, 2017 at 11:02 AM, Albert Weng <span dir="ltr"><<a href="mailto:weng.albert@gmail.com" target="_blank">weng.albert@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div><div><br>Hi Marek,<br><br></div>thanks for your quickly responding.<br><br></div>According to you opinion, when i type "pcs status" then i saw the following result of fence : <br>ipmi-fence-node1 (stonith:fence_ipmilan): Started cluaterb<br>ipmi-fence-node2 (stonith:fence_ipmilan): Started clusterb<br><br><div class="gmail_extra">Does it means both ipmi stonith devices are working correctly? (rest of resources can failover to another node correctly)<br></div></div></blockquote><div><br></div></span><div>Yes, they are working correctly. </div><div><br></div><div>When it becomes important to run fence agents to kill the other node. It will be executed from the other node, so the fact where fence agent resides currently is not important</div><span class="gmail-HOEnZb"><font color="#888888"><div><br></div></font></span></div></div></div></blockquote><div>Does "started on node" means which node is controlling fence behavior? even all fence agents and resources "started on same node", the cluster fence behavior still work correctly?<br></div><div> <br><br></div><div>Thanks a lot.<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><div class="gmail_quote"><span class="gmail-HOEnZb"><font color="#888888"><div></div><div>m,</div></font></span><div><div class="gmail-h5"><div> </div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div class="gmail_extra"><br></div><div class="gmail_extra">should i have to use location constraint to avoid stonith device running on same node ?<br># pcs constraint location ipmi-fence-node1 prefers clustera<br># pcs constraint location ipmi-fence-node2 prefers clusterb<br></div><div class="gmail_extra"><br></div><div class="gmail_extra">thanks a lot<br></div><div class="gmail_extra"><br><div class="gmail_quote"><div><div class="gmail-m_-2295807312831151002h5">On Tue, May 2, 2017 at 4:25 PM, Marek Grac <span dir="ltr"><<a href="mailto:mgrac@redhat.com" target="_blank">mgrac@redhat.com</a>></span> wrote:<br></div></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div><div class="gmail-m_-2295807312831151002h5"><div dir="ltr">Hi,<div><br></div><div><br></div><div class="gmail_extra"><br><div class="gmail_quote"><span class="gmail-m_-2295807312831151002m_-1498931108676747190gmail-">On Tue, May 2, 2017 at 3:39 AM, Albert Weng <span dir="ltr"><<a href="mailto:weng.albert@gmail.com" target="_blank">weng.albert@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px 
Thanks a lot.

> m,
>
>> Should I use location constraints to keep the stonith devices from running on the same node? For example:
>>
>> # pcs constraint location ipmi-fence-node1 prefers clustera
>> # pcs constraint location ipmi-fence-node2 prefers clusterb
>>
>> Thanks a lot.
>>
>> On Tue, May 2, 2017 at 4:25 PM, Marek Grac <mgrac@redhat.com> wrote:
>>>
>>> Hi,
>>>
>>> On Tue, May 2, 2017 at 3:39 AM, Albert Weng <weng.albert@gmail.com> wrote:
>>>>
>>>> Hi All,
>>>>
>>>> I have created an active/passive Pacemaker cluster on RHEL 7.
>>>>
>>>> Here is my environment:
>>>> clustera : 192.168.11.1
>>>> clusterb : 192.168.11.2
>>>> clustera-ilo4 : 192.168.11.10
>>>> clusterb-ilo4 : 192.168.11.11
>>>>
>>>> Both nodes are connected to SAN storage for shared storage.
>>>>
>>>> I used the following commands to create the stonith devices, one for each node:
>>>>
>>>> # pcs -f stonith_cfg stonith create ipmi-fence-node1 fence_ipmilan parms \
>>>>     lanplus="true" pcmk_host_list="clustera" pcmk_host_check="static-list" \
>>>>     action="reboot" ipaddr="192.168.11.10" login=administrator \
>>>>     passwd=1234322 op monitor interval=60s
>>>>
>>>> # pcs -f stonith_cfg stonith create ipmi-fence-node2 fence_ipmilan parms \
>>>>     lanplus="true" pcmk_host_list="clusterb" pcmk_host_check="static-list" \
>>>>     action="reboot" ipaddr="192.168.11.11" login=USERID \
>>>>     passwd=password op monitor interval=60s
>>>>
>>>> # pcs status
>>>> ipmi-fence-node1    clustera
>>>> ipmi-fence-node2    clusterb
>>>>
>>>> But after I fail over to the passive node and run "pcs status" again, I get:
>>>>
>>>> ipmi-fence-node1    clusterb
>>>> ipmi-fence-node2    clusterb
>>>>
>>>> Why are both fence devices located on the same node?
>>>
>>> When node 'clustera' is down, is there any other place where ipmi-fence-node* could be executed?
>>>
>>> If you are worried that a node cannot fence itself, you are right. But once 'clustera' becomes available again, an attempt to fence clusterb will work as expected.
>>>
>>> m,
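(On the location constraint question above: if it does turn out that I should keep each fence device off the node it is meant to fence, I assume the negative form would express that better than "prefers", e.g.:

# pcs constraint location ipmi-fence-node1 avoids clustera
# pcs constraint location ipmi-fence-node2 avoids clusterb

Please correct me if that is not the recommended approach.)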
>>
>> --
>> Kind regards,
>> Albert Weng
>
> _______________________________________________
> Users mailing list: Users@clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
--
Kind regards,
Albert Weng