<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<div class="moz-cite-prefix">On 02/16/2017 05:42 PM,
<a class="moz-txt-link-abbreviated" href="mailto:durwin@mgtsciences.com">durwin@mgtsciences.com</a> wrote:<br>
</div>
<blockquote
cite="mid:OFEE70CD22.81F0C3FC-ON872580C9.0059EB35-872580C9.005BC310@mgtsciences.com"
type="cite"><tt><font size="2">Klaus Wenninger
<a class="moz-txt-link-rfc2396E" href="mailto:kwenning@redhat.com"><kwenning@redhat.com></a> wrote on
02/16/2017 03:27:07 AM:<br>
<br>
> From: Klaus Wenninger <a class="moz-txt-link-rfc2396E" href="mailto:kwenning@redhat.com"><kwenning@redhat.com></a></font></tt>
<br>
<tt><font size="2">> To: <a class="moz-txt-link-abbreviated" href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>, Cluster Labs -
All topics
related to open-<br>
> source clustering welcomed <a class="moz-txt-link-rfc2396E" href="mailto:users@clusterlabs.org"><users@clusterlabs.org></a></font></tt>
<br>
<tt><font size="2">> Date: 02/16/2017 03:27 AM</font></tt>
<br>
<tt><font size="2">> Subject: Re: [ClusterLabs] I question
whether
STONITH is working.</font></tt>
<br>
<tt><font size="2">> <br>
> On 02/15/2017 10:30 PM, Ken Gaillot wrote:<br>
> > On 02/15/2017 12:17 PM, <a class="moz-txt-link-abbreviated" href="mailto:durwin@mgtsciences.com">durwin@mgtsciences.com</a>
wrote:<br>
> >> I have 2 Fedora VMs (node1 and node2) running
on a Windows
10 machine<br>
> >> using VirtualBox.<br>
> >><br>
> >> I began with this.<br>
> >> </font></tt><a moz-do-not-send="true"
href="http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/"><tt><font
size="2">http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/</font></tt></a><tt><font
size="2"><br>
> Clusters_from_Scratch/<br>
> >><br>
> >><br>
> >> When it came to fencing, I referred to this.<br>
> >> </font></tt><a moz-do-not-send="true"
href="http://www.linux-ha.org/wiki/SBD_Fencing"><tt><font
size="2">http://www.linux-ha.org/wiki/SBD_Fencing</font></tt></a><tt><font
size="2"><br>
> >><br>
> >> To the file /etc/sysconfig/sbd I added these
lines.<br>
> >> SBD_OPTS="-W"<br>
> >> SBD_DEVICE="/dev/sdb1"<br>
> >> I added 'modprobe softdog' to rc.local<br>
> >><br>
> >> After getting sbd working, I resumed with
Clusters from Scratch,
chapter<br>
> >> 8.3.<br>
> >> I executed these commands *only* on node1. Am
I supposed to run any of<br>
> >> these commands on other nodes? 'Clusters from
Scratch' does not specify.<br>
> > Configuration commands only need to be run once. The
cluster<br>
> > synchronizes all changes across the cluster.<br>
> ><br>
> >> pcs cluster cib stonith_cfg<br>
> >> pcs -f stonith_cfg stonith create sbd-fence
fence_sbd<br>
> >> devices="/dev/sdb1" port="node2"<br>
> > The above command creates a fence device configured
to kill node2
-- but<br>
> > it doesn't tell the cluster which nodes the device
can be used
to kill.<br>
> > Thus, even if you try to fence node1, it will use
this device,
and node2<br>
> > will be shot.<br>
> ><br>
> > The pcmk_host_list parameter specifies which nodes
the device
can kill.<br>
> > If not specified, the device will be used to kill
any node. So,
just add<br>
> > pcmk_host_list=node2 here.<br>
> ><br>
> > You'll need to configure a separate device to fence
node1.<br>
> ><br>
> > I haven't used fence_sbd, so I don't know if there's
a way to
configure<br>
> > it as one device that can kill both nodes.<br>
> <br>
> fence_sbd should return a proper dynamic-list.<br>
> So without ports and host-list it should just work fine.<br>
> Not even a host-map should be needed. Actually a host-map is not<br>
> supported, because if sbd uses different node naming than<br>
> pacemaker, the pacemaker watcher within sbd will fail.</font></tt>
<br>
<br>
<tt><font size="2">I am not clear on what you are conveying. On
the command</font></tt>
<br>
<tt><font size="2">'pcs -f stonith_cfg stonith create' I do not
need
the port= option?</font></tt></blockquote>
<br>
<tt><font size="2">e.g. 'pcs stonith create FenceSBD fence_sbd
devices="/dev/vdb"'<br>
should do the whole trick.<br>
<br>
</font></tt>
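<tt><font size="2">(A sketch of how to verify that once the device is
configured - assuming<br>
the FenceSBD device from above and pacemaker's stonith_admin tool:)<br>
# list all registered fence devices<br>
stonith_admin --list-registered<br>
# ask which devices are able to fence node2; for fence_sbd this comes<br>
# from the dynamic list it builds off the sbd device header<br>
stonith_admin --list node2<br>
<br>
</font></tt>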
<blockquote
cite="mid:OFEE70CD22.81F0C3FC-ON872580C9.0059EB35-872580C9.005BC310@mgtsciences.com"
type="cite">
<br>
<br>
<tt><font size="2">Ken stated I need an sbd device for each node
in the
cluster (needing fencing).</font></tt>
<br>
<tt><font size="2">I assume each node is a possible failure and
would
need fencing.</font></tt>
<br>
<tt><font size="2">So what *is* a slot? SBD device allocates 255
slots in each device.</font></tt>
<br>
<tt><font size="2">These slots are not to keep track of the nodes?</font></tt></blockquote>
<br>
<tt><font size="2">There is a slot for each node - and if the
sbd-instance doesn't find one matching<br>
its own name it creates one (paints one of the 255 that is
unused with its own name).<br>
The slots are used to send messages to the sbd-instances on the
nodes.<br>
<br>
</font></tt>
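<tt><font size="2">(An illustrative sketch, reusing the /dev/sdb1 device
from your setup;<br>
the exact output format may differ:)<br>
# show the allocated slots and the message state of each slot<br>
sbd -d /dev/sdb1 list<br>
0  node1  clear<br>
1  node2  clear<br>
# write a harmless test message into node2's slot; sbd on node2 just
logs it<br>
sbd -d /dev/sdb1 message node2 test<br>
<br>
</font></tt>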
<blockquote
cite="mid:OFEE70CD22.81F0C3FC-ON872580C9.0059EB35-872580C9.005BC310@mgtsciences.com"
type="cite">
<br>
<br>
<tt><font size="2">Regarding fence_sbd returning dynamic-list.
The
command</font></tt>
<br>
<tt><font size="2">'</font></tt><font face="Lucida Console"
size="1">sbd
-d /dev/sdb1 list</font><tt><font size="2">' returns every node
in the cluster.</font></tt>
<br>
<tt><font size="2">Is this the list you are referring to?</font></tt></blockquote>
<br>
<tt><font size="2">Yes and no. fence_sbd - fence-agent is using the
same command to create that<br>
list when it is asked by pacemaker which nodes it is able to
fence.<br>
So you don't have to hardcode that, although you can of course
using a<br>
host-map if you don't want sbd-fencing to be used for certain
nodes because<br>
you might have a better fencing device (can be solved using
fencing-levels<br>
as well).<br>
<br>
</font></tt>
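<tt><font size="2">(A sketch of both alternatives; the ipmi-node2 device
and the host-map value<br>
are placeholders, not taken from your setup:)<br>
# restrict an sbd fence device to node2 only, mapping the pacemaker<br>
# node name to the name sbd knows it by<br>
pcs stonith create FenceSBD-node2 fence_sbd devices="/dev/sdb1"
pcmk_host_map="node2:node2"<br>
# or keep one sbd device for all nodes and try a (hypothetical) IPMI<br>
# device first, falling back to sbd<br>
pcs stonith level add 1 node2 ipmi-node2<br>
pcs stonith level add 2 node2 FenceSBD<br>
<br>
</font></tt>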
<blockquote
cite="mid:OFEE70CD22.81F0C3FC-ON872580C9.0059EB35-872580C9.005BC310@mgtsciences.com"
type="cite">
<br>
<br>
<tt><font size="2">Thank you,</font></tt>
<br>
<br>
<tt><font size="2">Durwin</font></tt>
<br>
<tt><font size="2"><br>
> <br>
> ><br>
> >> pcs -f stonith_cfg property set
stonith-enabled=true<br>
> >> pcs cluster cib-push stonith_cfg<br>
> >><br>
> >> I then tried this command from node1.<br>
> >> stonith_admin --reboot node2<br>
> >><br>
> >> Node2 did not reboot or even shut down. The
command 'sbd -d
/dev/sdb1<br>
> >> list' showed node2 as off, but I was still
logged into it
(cluster<br>
> >> status on node2 showed not running).<br>
> >><br>
> >> I rebooted, ran this command on node2, and
started the cluster.<br>
> >> sbd -d /dev/sdb1 message node2 clear<br>
> >><br>
> >> If I ran this command on node2, node2 rebooted.<br>
> >> stonith_admin --reboot node1<br>
> >><br>
> >> What have I missed or done wrong?<br>
> >><br>
> >><br>
> >> Thank you,<br>
> >><br>
> >> Durwin F. De La Rue<br>
> >> Management Sciences, Inc.<br>
> >> 6022 Constitution Ave. NE<br>
> >> Albuquerque, NM 87110<br>
> >> Phone (505) 255-8611<br>
> ><br>
</font></tt>
</blockquote>
<p><br>
</p>
</body>
</html>