[ClusterLabs] I question whether STONITH is working.
durwin at mgtsciences.com
Mon Feb 20 15:16:23 EST 2017
Klaus Wenninger <kwenning at redhat.com> wrote on 02/16/2017 03:27:07 AM:
> From: Klaus Wenninger <kwenning at redhat.com>
> To: kgaillot at redhat.com, Cluster Labs - All topics related to open-source clustering welcomed <users at clusterlabs.org>
> Date: 02/16/2017 03:27 AM
> Subject: Re: [ClusterLabs] I question whether STONITH is working.
>
> On 02/15/2017 10:30 PM, Ken Gaillot wrote:
> > On 02/15/2017 12:17 PM, durwin at mgtsciences.com wrote:
> >> I have 2 Fedora VMs (node1 and node2) running on a Windows 10
> >> machine, using VirtualBox.
> >>
> >> I began with this:
> >> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/
> >>
> >>
> >> When it came to fencing, I referred to this:
> >> http://www.linux-ha.org/wiki/SBD_Fencing
> >>
> >> I added these lines to the file /etc/sysconfig/sbd:
> >> SBD_OPTS="-W"
> >> SBD_DEVICE="/dev/sdb1"
> >> I added 'modprobe softdog' to rc.local
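For reference, that setup gathered in one place; the modules-load.d entry
is the usual systemd alternative to the rc.local line above (a sketch, not
what I originally ran):

# /etc/sysconfig/sbd
SBD_DEVICE="/dev/sdb1"
SBD_OPTS="-W"

# Load the software watchdog module at every boot:
echo softdog > /etc/modules-load.d/softdog.conf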
> >>
> >> After getting sbd working, I resumed with Clusters from Scratch,
> >> chapter 8.3. I executed these commands *only* on node1. Am I
> >> supposed to run any of these commands on other nodes? 'Clusters
> >> from Scratch' does not specify.
> > Configuration commands only need to be run once. The cluster
> > synchronizes all changes across the cluster.
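For example, a change made on node1 should be visible from node2 as soon
as it is pushed, since every node serves the same CIB:

# run on node2 after pushing the change on node1
pcs cluster cib | grep fence_sbd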
> >
> >> pcs cluster cib stonith_cfg
> >> pcs -f stonith_cfg stonith create sbd-fence fence_sbd devices="/dev/sdb1" port="node2"
> > The above command creates a fence device configured to kill node2 --
> > but it doesn't tell the cluster which nodes the device can be used
> > to kill. Thus, even if you try to fence node1, it will use this
> > device, and node2 will be shot.
> >
> > The pcmk_host_list parameter specifies which nodes the device can
> > kill. If not specified, the device will be used to kill any node.
> > So, just add pcmk_host_list=node2 here.
> >
> > You'll need to configure a separate device to fence node1.
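Put together, that suggestion would look something like this (the device
names here are illustrative, not from my original configuration):

# one device per node, each restricted to its target via pcmk_host_list
pcs -f stonith_cfg stonith create fence-node1 fence_sbd \
    devices="/dev/sdb1" port="node1" pcmk_host_list="node1"
pcs -f stonith_cfg stonith create fence-node2 fence_sbd \
    devices="/dev/sdb1" port="node2" pcmk_host_list="node2"
pcs cluster cib-push stonith_cfg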
> >
> > I haven't used fence_sbd, so I don't know if there's a way to
> > configure it as one device that can kill both nodes.
>
> fence_sbd should return a proper dynamic-list.
> So without ports and host-list it should just work fine.
> Not even a host-map should be needed. Actually, a host-map is not
> supported, because if sbd uses different node naming than Pacemaker,
> the pacemaker watcher within sbd is going to fail.
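If I follow that, the dynamic list should be checkable straight from the
agent (assuming the standard fence-agents command-line conventions; I have
not verified this invocation):

# ask fence_sbd which nodes it can fence on the shared device
fence_sbd --devices=/dev/sdb1 --action=list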
It was said that 'port=' is not needed and that, as I understood it, the
command below would then just work. So I deleted the device with this
command:
pcs -f stonith_cfg stonith delete sbd-fence
Then I recreated it without 'port=':
pcs -f stonith_cfg stonith create sbd-fence fence_sbd devices="/dev/sdb1"
pcs cluster cib-push stonith_cfg
From node2 I executed this command:
stonith_admin --reboot node1
But node2 rebooted anyway, not node1.
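Before going further, it seems worth asking Pacemaker which devices it
thinks can fence each node, and checking the shared disk directly; a
diagnostic sketch:

# devices Pacemaker would use against each target
stonith_admin --list node1
stonith_admin --list node2

# slot states on the shared disk
sbd -d /dev/sdb1 list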
If I follow what Ken shared, I would need another 'watchdog' in addition
to another sbd device. Are multiple watchdogs possible?
I am lost at this point.
I have 2 VM nodes running Fedora 25 on a Windows 10 host. Every node
in a cluster must be fenceable (as I understand it). Using SBD, what
is the correct way to proceed?
Thank you,
Durwin
>
> >
> >> pcs -f stonith_cfg property set stonith-enabled=true
> >> pcs cluster cib-push stonith_cfg
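A sanity check right after the push would be something like this (pcs
0.9-era syntax assumed):

# confirm the property and the device are in the live CIB
pcs property show stonith-enabled
pcs stonith show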
> >>
> >> I then tried this command from node1.
> >> stonith_admin --reboot node2
> >>
> >> Node2 did not reboot or even shut down. The command 'sbd -d /dev/sdb1
> >> list' showed node2 as off, but I was still logged into it (cluster
> >> status on node2 showed not running).
> >>
> >> I rebooted node2, ran this command on it, and started the cluster:
> >> sbd -d /dev/sdb1 message node2 clear
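After that, the node2 slot should read clear rather than off:

sbd -d /dev/sdb1 list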
> >>
> >> If I ran this command on node2, node2 rebooted.
> >> stonith_admin --reboot node1
> >>
> >> What have I missed or done wrong?
> >>
> >>
> >> Thank you,
> >>
> >> Durwin F. De La Rue
> >> Management Sciences, Inc.
> >> 6022 Constitution Ave. NE
> >> Albuquerque, NM 87110
> >> Phone (505) 255-8611