[ClusterLabs] stonith device locate on same host in active/passive cluster

Albert Weng weng.albert at gmail.com
Thu May 11 14:28:23 UTC 2017


Hi Ken,

Thank you for your comment.

I think this case can be closed. I used your suggested constraint and the
problem is resolved.

Thanks a lot~~
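
(For anyone finding this thread in the archives: a quick way to double-check
the result, assuming the resource and node names used in the quoted thread
below, is

# pcs constraint
# pcs status

With the "avoids" constraints Ken suggested, each fence device should end up
started on a node other than the one it fences.)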

On Thu, May 4, 2017 at 10:28 PM, Ken Gaillot <kgaillot at redhat.com> wrote:

> On 05/03/2017 09:04 PM, Albert Weng wrote:
> > Hi Marek,
> >
> > Thanks for your reply.
> >
> > On Tue, May 2, 2017 at 5:15 PM, Marek Grac <mgrac at redhat.com> wrote:
> >
> >
> >
> >     On Tue, May 2, 2017 at 11:02 AM, Albert Weng <weng.albert at gmail.com> wrote:
> >
> >
> >         Hi Marek,
> >
> >         Thanks for your quick response.
> >
> >         Following up on your reply: when I run "pcs status", I see the
> >         following for the fence devices:
> >         ipmi-fence-node1    (stonith:fence_ipmilan):    Started clusterb
> >         ipmi-fence-node2    (stonith:fence_ipmilan):    Started clusterb
> >
> >         Does that mean both IPMI stonith devices are working correctly?
> >         (The rest of the resources can fail over to the other node correctly.)
> >
> >
> >     Yes, they are working correctly.
> >
> >     When it becomes necessary to run a fence agent to kill the other
> >     node, it will be executed from the other node, so where the fence
> >     agent currently resides is not important.
> >
> > Does "started on node" means which node is controlling fence behavior?
> > even all fence agents and resources "started on same node", the cluster
> > fence behavior still work correctly?
> >
> >
> > Thanks a lot.
> >
> >     m,
>
> Correct. Fencing is *executed* independently of where or even whether
> fence devices are running. The node that is "running" a fence device
> performs the recurring monitor on the device; that's the only real effect.
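>
> For example, if you want to see this in action, you can trigger fencing
> manually from whichever node is up, using the node names from your config
> (note this really will reboot the target):
>
>    pcs stonith fence clustera
>
> The request is executed from wherever it can be, regardless of which node
> "runs" the fence device.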
>
> >         Should I use location constraints to avoid the stonith devices
> >         running on the same node?
> >         # pcs constraint location ipmi-fence-node1 prefers clustera
> >         # pcs constraint location ipmi-fence-node2 prefers clusterb
> >
> >         thanks a lot
>
> It's a good idea, so that a node isn't monitoring its own fence device,
> but that's the only reason -- it doesn't affect whether or how the node
> can be fenced. I would configure it as an anti-location, e.g.
>
>    pcs constraint location ipmi-fence-node1 avoids node1=100
>
> In a 2-node cluster, there's no real difference, but in a larger
> cluster, it's the simplest config. I wouldn't use INFINITY (there's no
> harm in a node monitoring its own fence device if it's the last node
> standing), but I would use a score high enough to outweigh any stickiness.
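>
> With your names, that would be something like (the scores are just
> illustrative; pick them high enough to outweigh your stickiness):
>
>    pcs constraint location ipmi-fence-node1 avoids clustera=100
>    pcs constraint location ipmi-fence-node2 avoids clusterb=100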
>
> >         On Tue, May 2, 2017 at 4:25 PM, Marek Grac <mgrac at redhat.com> wrote:
> >
> >             Hi,
> >
> >
> >
> >             On Tue, May 2, 2017 at 3:39 AM, Albert Weng
> >             <weng.albert at gmail.com> wrote:
> >
> >                 Hi All,
> >
> >                 I have created an active/passive Pacemaker cluster on
> >                 RHEL 7.
> >
> >                 Here is my environment:
> >                 clustera : 192.168.11.1
> >                 clusterb : 192.168.11.2
> >                 clustera-ilo4 : 192.168.11.10
> >                 clusterb-ilo4 : 192.168.11.11
> >
> >                 Both nodes are connected to SAN storage for shared storage.
> >
> >                 I used the following commands to create my stonith
> >                 devices, one for each node:
> >                 # pcs -f stonith_cfg stonith create ipmi-fence-node1
> >                 fence_ipmilan lanplus="true"
> >                 pcmk_host_list="clustera" pcmk_host_check="static-list"
> >                 action="reboot" ipaddr="192.168.11.10"
> >                 login=administrator passwd=1234322 op monitor
> >                 interval=60s
> >
> >                 # pcs -f stonith_cfg stonith create ipmi-fence-node2
> >                 fence_ipmilan parms lanplus="true"
> >                 pcmk_host_list="clusterb" pcmk_host_check="static-list"
> >                 action="reboot" ipaddr="192.168.11.11" login=USERID
> >                 passwd=password op monitor interval=60s
> >
> >                 # pcs status
> >                 ipmi-fence-node1                     clustera
> >                 ipmi-fence-node2                     clusterb
> >
> >                 But after I failed over to the passive node, I ran:
> >                 # pcs status
> >
> >                 ipmi-fence-node1                    clusterb
> >                 ipmi-fence-node2                    clusterb
> >
> >                 Why are both fence devices located on the same node?
> >
> >
> >             When node 'clustera' is down, is there any place where
> >             ipmi-fence-node* can be executed?
> >
> >             If you are worried that a node cannot fence itself, you are
> >             right. But once 'clustera' becomes available again, an attempt
> >             to fence clusterb will work as expected.
> >
> >             m,
> >
> >
> >         --
> >         Kind regards,
> >         Albert Weng
> >
>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
>



-- 
Kind regards,
Albert Weng

