<div dir="ltr"><div><div><div>Hi Ken,<br><br></div>thank you for your comment.<br><br></div>i think this case can be closed, i use your suggestion of constraint and then problem resolved.<br><br></div>thanks a lot~~<br></div><div class="gmail_extra"><br><div class="gmail_quote">On Thu, May 4, 2017 at 10:28 PM, Ken Gaillot <span dir="ltr">&lt;<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 05/03/2017 09:04 PM, Albert Weng wrote:<br>
> Hi Marek,
>
> Thanks for your reply.
>
> On Tue, May 2, 2017 at 5:15 PM, Marek Grac <mgrac@redhat.com> wrote:
>
>     On Tue, May 2, 2017 at 11:02 AM, Albert Weng <weng.albert@gmail.com> wrote:
>
>         Hi Marek,
>
>         Thanks for your quick response.
>
>         Following your advice, when I type "pcs status" I see the
>         following result for the fence devices:
>         ipmi-fence-node1    (stonith:fence_ipmilan):    Started clusterb
>         ipmi-fence-node2    (stonith:fence_ipmilan):    Started clusterb
>
>         Does it mean both ipmi stonith devices are working correctly?
>         (The rest of the resources can fail over to the other node correctly.)
>
>     Yes, they are working correctly.
>
>     When it becomes necessary to run a fence agent to kill the other
>     node, it will be executed from the surviving node, so where the
>     fence agent currently resides is not important.
>
> Does "started on node" mean which node controls the fence behavior?
> Even if all fence agents and resources are "started on the same node",
> does the cluster's fence behavior still work correctly?
>
> Thanks a lot.
>
>     m,

Correct. Fencing is *executed* independently of where or even whether
fence devices are running. The node that is "running" a fence device
performs the recurring monitor on the device; that's the only real effect.
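
For example (a sketch using the node names from this thread, assuming the
standard Pacemaker command-line tools are installed), a fence request made
on any node is routed to a node that is able to run the device:

   # Request a reboot of clustera; any surviving node can execute this,
   # regardless of which node the fence device is "started" on:
   pcs stonith fence clustera

   # Afterwards, the fencing history shows which node actually executed it:
   stonith_admin --history clustera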
<span class=""><br>
&gt;         should i have to use location constraint to avoid stonith device<br>
&gt;         running on same node ?<br>
&gt;         # pcs constraint location ipmi-fence-node1 prefers clustera<br>
&gt;         # pcs constraint location ipmi-fence-node2 prefers clusterb<br>
&gt;<br>
&gt;         thanks a lot<br>
<br>
It's a good idea, so that a node isn't monitoring its own fence device,
but that's the only reason -- it doesn't affect whether or how the node
can be fenced. I would configure it as an anti-location, e.g.

   pcs constraint location ipmi-fence-node1 avoids node1=100

In a 2-node cluster, there's no real difference, but in a larger
cluster, it's the simplest config. I wouldn't use INFINITY (there's no
harm in a node monitoring its own fence device if it's the last node
standing), but I would use a score high enough to outweigh any stickiness.
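
Applied to the node names in this thread, that might look like the
following (a sketch; the score of 100 is only a placeholder, chosen to
outweigh any resource-stickiness you have configured):

   # Keep each fence device off the node it is responsible for killing,
   # with a finite score so it can still run there as a last resort:
   pcs constraint location ipmi-fence-node1 avoids clustera=100
   pcs constraint location ipmi-fence-node2 avoids clusterb=100

   # Verify the resulting constraints:
   pcs constraint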
<span class=""><br>
&gt;         On Tue, May 2, 2017 at 4:25 PM, Marek Grac &lt;<a href="mailto:mgrac@redhat.com">mgrac@redhat.com</a><br>
</span><span class="">&gt;         &lt;mailto:<a href="mailto:mgrac@redhat.com">mgrac@redhat.com</a>&gt;&gt; wrote:<br>
&gt;<br>
&gt;             Hi,<br>
&gt;<br>
&gt;<br>
&gt;<br>
&gt;             On Tue, May 2, 2017 at 3:39 AM, Albert Weng<br>
</span><div><div class="h5">&gt;             &lt;<a href="mailto:weng.albert@gmail.com">weng.albert@gmail.com</a> &lt;mailto:<a href="mailto:weng.albert@gmail.com">weng.albert@gmail.com</a>&gt;<wbr>&gt; wrote:<br>
&gt;<br>
>                 Hi All,
>
>                 I have created an active/passive pacemaker cluster on RHEL 7.
>
>                 Here is my environment:
>                 clustera : 192.168.11.1
>                 clusterb : 192.168.11.2
>                 clustera-ilo4 : 192.168.11.10
>                 clusterb-ilo4 : 192.168.11.11
>
>                 Both nodes are connected to SAN storage for shared storage.
>
>                 I used the following commands to create my stonith devices,
>                 one for each node:
>                 # pcs -f stonith_cfg stonith create ipmi-fence-node1
>                 fence_ipmilan lanplus="true"
>                 pcmk_host_list="clustera" pcmk_host_check="static-list"
>                 action="reboot" ipaddr="192.168.11.10"
>                 login=administrator passwd=1234322 op monitor interval=60s
>
>                 # pcs -f stonith_cfg stonith create ipmi-fence-node2
>                 fence_ipmilan lanplus="true"
>                 pcmk_host_list="clusterb" pcmk_host_check="static-list"
>                 action="reboot" ipaddr="192.168.11.11" login=USERID
>                 passwd=password op monitor interval=60s
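>
>                 (Because -f stages these changes in the file stonith_cfg,
>                 they take effect once that file is pushed to the live CIB;
>                 a minimal sketch of that step:)
>                 # pcs cluster cib-push stonith_cfg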
>
>                 # pcs status
>                 ipmi-fence-node1                     clustera
>                 ipmi-fence-node2                     clusterb
>
>                 But when I fail over to the passive node and then run
>                 # pcs status
>
>                 ipmi-fence-node1                    clusterb
>                 ipmi-fence-node2                    clusterb
>
>                 Why are both fence devices located on the same node?
>
>
>             When node 'clustera' is down, is there any place where
>             ipmi-fence-node* could be executed?
>
>             If you are worried that a node cannot fence itself, you are
>             right. But once 'clustera' becomes available again, an
>             attempt to fence clusterb will work as expected.
>
>             m,
>
>         --
>         Kind regards,
>         Albert Weng
<div class="HOEnZb"><div class="h5"><br>
______________________________<wbr>_________________<br>
Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
<a href="http://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.clusterlabs.org/<wbr>mailman/listinfo/users</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/<wbr>doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
</div></div></blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature" data-smartmail="gmail_signature">Kind regards,<br>Albert Weng</div>
</div>