<tt><font size=2>Klaus Wenninger <kwenning@redhat.com> wrote on
02/16/2017 03:27:07 AM:<br>
<br>
> From: Klaus Wenninger <kwenning@redhat.com></font></tt>
<br><tt><font size=2>> To: kgaillot@redhat.com, Cluster Labs - All topics
related to open-<br>
> source clustering welcomed <users@clusterlabs.org></font></tt>
<br><tt><font size=2>> Date: 02/16/2017 03:27 AM</font></tt>
<br><tt><font size=2>> Subject: Re: [ClusterLabs] I question whether
STONITH is working.</font></tt>
<br><tt><font size=2>> <br>
> On 02/15/2017 10:30 PM, Ken Gaillot wrote:<br>
> > On 02/15/2017 12:17 PM, durwin@mgtsciences.com wrote:<br>
> >> I have 2 Fedora VMs (node1 and node2) running on a Windows
10 machine<br>
> >> using VirtualBox.<br>
> >><br>
> >> I began with this.<br>
> >> </font></tt><a href="http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/"><tt><font size=2>http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/</font></tt></a><tt><font size=2><br>
> Clusters_from_Scratch/<br>
> >><br>
> >><br>
> >> When it came to fencing, I referred to this.<br>
> >> </font></tt><a href="http://www.linux-ha.org/wiki/SBD_Fencing"><tt><font size=2>http://www.linux-ha.org/wiki/SBD_Fencing</font></tt></a><tt><font size=2><br>
> >><br>
> >> To the file /etc/sysconfig/sbd I added these lines.<br>
> >> SBD_OPTS="-W"<br>
> >> SBD_DEVICE="/dev/sdb1"<br>
> >> I added 'modprobe softdog' to rc.local<br>
> >><br>
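<br><tt><font size=2>(The sbd setup quoted above amounts to roughly the following sketch; the device path comes from this thread, and the watchdog module choice depends on your hardware:)</font></tt>

```shell
# /etc/sysconfig/sbd -- sketch of the settings quoted in this thread.
# SBD_DEVICE names the shared disk used for poison-pill messaging.
SBD_DEVICE="/dev/sdb1"
# SBD_OPTS="-W" tells sbd to use the watchdog device.
SBD_OPTS="-W"

# With no hardware watchdog, load the software watchdog at boot
# (the thread adds this line to rc.local; a modules-load.d entry
# would also work):
# modprobe softdog
```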
> >> After getting sbd working, I resumed with Clusters from Scratch,
chapter<br>
> >> 8.3.<br>
> >> I executed these commands *only* on node1. Am I supposed
to run any of<br>
> >> these commands on other nodes? 'Clusters from Scratch' does
not specify.<br>
> > Configuration commands only need to be run once. The cluster<br>
> > synchronizes all changes across the cluster.<br>
> ><br>
> >> pcs cluster cib stonith_cfg<br>
> >> pcs -f stonith_cfg stonith create sbd-fence fence_sbd<br>
> >> devices="/dev/sdb1" port="node2"<br>
> > The above command creates a fence device configured to kill node2
-- but<br>
> > it doesn't tell the cluster which nodes the device can be used
to kill.<br>
> > Thus, even if you try to fence node1, it will use this device,
and node2<br>
> > will be shot.<br>
> ><br>
> > The pcmk_host_list parameter specifies which nodes the device
can kill.<br>
> > If not specified, the device will be used to kill any node. So,
just add<br>
> > pcmk_host_list=node2 here.<br>
> ><br>
> > You'll need to configure a separate device to fence node1.<br>
> ><br>
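<br><tt><font size=2>(Ken's per-node device approach could look like the following sketch; it is untested here, and the device names fence-node1/fence-node2 are illustrative:)</font></tt>

```shell
# One fence device per node, each restricted via pcmk_host_list so the
# cluster knows which node that device is allowed to fence.
pcs cluster cib stonith_cfg
pcs -f stonith_cfg stonith create fence-node1 fence_sbd \
    devices="/dev/sdb1" port="node1" pcmk_host_list="node1"
pcs -f stonith_cfg stonith create fence-node2 fence_sbd \
    devices="/dev/sdb1" port="node2" pcmk_host_list="node2"
pcs -f stonith_cfg property set stonith-enabled=true
pcs cluster cib-push stonith_cfg
```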
> > I haven't used fence_sbd, so I don't know if there's a way to
configure<br>
> > it as one device that can kill both nodes.<br>
> <br>
> fence_sbd should return a proper dynamic-list.<br>
> So without ports and host-list it should just work fine.<br>
> Not even a host-map should be needed. In fact a host-map is not<br>
> supported, because if sbd uses different node naming than<br>
> pacemaker, the pacemaker watcher within sbd will fail.</font></tt>
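<br><tt><font size=2>(The single-device setup Klaus describes could reduce to the following sketch; it is untested here and relies on fence_sbd reporting a usable dynamic host list:)</font></tt>

```shell
# One fence device for the whole cluster: no port= and no pcmk_host_list,
# letting fence_sbd's dynamic list determine which nodes it can fence.
pcs -f stonith_cfg stonith create sbd-fence fence_sbd devices="/dev/sdb1"
```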
<br>
<br><tt><font size=2>I am not clear on what you are conveying. With
the command</font></tt>
<br><tt><font size=2>'pcs -f stonith_cfg stonith create', do I not need
the port= option?</font></tt>
<br>
<br><tt><font size=2>Ken stated I need an sbd device for each node in the
cluster (needing fencing).</font></tt>
<br><tt><font size=2>I assume each node is a possible point of failure and
would need fencing.</font></tt>
<br><tt><font size=2>So what *is* a slot? An SBD device allocates 255
slots on each device.</font></tt>
<br><tt><font size=2>Are these slots not there to keep track of the nodes?</font></tt>
<br>
<br><tt><font size=2>Regarding fence_sbd returning a dynamic list: the
command</font></tt>
<br><tt><font size=2>'</font></tt><font size=1 face="Lucida Console">sbd
-d /dev/sdb1 list</font><tt><font size=2>' returns every node in the cluster.</font></tt>
<br><tt><font size=2>Is this the list you are referring to?</font></tt>
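<br><tt><font size=2>(For reference, querying the slots on the shared device looks like this sketch; it needs a live sbd device, and the output shape shown is only illustrative:)</font></tt>

```shell
# List the message slots sbd has allocated on the shared device.
# Each slot pairs a node name with its current message state.
sbd -d /dev/sdb1 list
# Illustrative output shape (slot number, node name, state):
# 0  node1  clear
# 1  node2  clear
```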
<br>
<br><tt><font size=2>Thank you,</font></tt>
<br>
<br><tt><font size=2>Durwin</font></tt>
<br><tt><font size=2><br>
> <br>
> ><br>
> >> pcs -f stonith_cfg property set stonith-enabled=true<br>
> >> pcs cluster cib-push stonith_cfg<br>
> >><br>
> >> I then tried this command from node1.<br>
> >> stonith_admin --reboot node2<br>
> >><br>
> >> Node2 did not reboot or even shut down. The command 'sbd -d
/dev/sdb1<br>
> >> list' showed node2 as off, but I was still logged into it
(cluster<br>
> >> status on node2 showed not running).<br>
> >><br>
> >> I rebooted and ran this command on node2 and started the cluster.<br>
> >> sbd -d /dev/sdb1 message node2 clear<br>
> >><br>
> >> If I ran this command on node2, node2 rebooted.<br>
> >> stonith_admin --reboot node1<br>
> >><br>
> >> What have I missed or done wrong?<br>
> >><br>
> >><br>
> >> Thank you,<br>
> >><br>
> >> Durwin F. De La Rue<br>
> >> Management Sciences, Inc.<br>
> >> 6022 Constitution Ave. NE<br>
> >> Albuquerque, NM 87110<br>
> >> Phone (505) 255-8611<br>
> ><br>
> > _______________________________________________<br>
> > Users mailing list: Users@clusterlabs.org<br>
> > </font></tt><a href=http://lists.clusterlabs.org/mailman/listinfo/users><tt><font size=2>http://lists.clusterlabs.org/mailman/listinfo/users</font></tt></a><tt><font size=2><br>
> ><br>
> > Project Home: </font></tt><a href=http://www.clusterlabs.org/><tt><font size=2>http://www.clusterlabs.org</font></tt></a><tt><font size=2><br>
> > Getting started: </font></tt><a href=http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf><tt><font size=2>http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</font></tt></a><tt><font size=2><br>
> > Bugs: </font></tt><a href=http://bugs.clusterlabs.org/><tt><font size=2>http://bugs.clusterlabs.org</font></tt></a><tt><font size=2><br>
</font></tt><font size=2 face="sans-serif"><br>
<br>
<br>
This email message and any attachments are for the sole use of the intended
recipient(s) and may contain proprietary and/or confidential information
which may be privileged or otherwise protected from disclosure. Any unauthorized
review, use, disclosure or distribution is prohibited. If you are not the
intended recipient(s), please contact the sender by reply email and destroy
the original message and any copies of the message as well as any attachments
to the original message.</font>