[ClusterLabs] I question whether STONITH is working.

Klaus Wenninger kwenning at redhat.com
Thu Feb 16 05:27:07 EST 2017


On 02/15/2017 10:30 PM, Ken Gaillot wrote:
> On 02/15/2017 12:17 PM, durwin at mgtsciences.com wrote:
>> I have 2 Fedora VMs (node1, and node2) running on a Windows 10 machine
>> using Virtualbox.
>>
>> I began with this.
>> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/
>>
>>
>> When it came to fencing, I referred to this.
>> http://www.linux-ha.org/wiki/SBD_Fencing
>>
>> To the file /etc/sysconfig/sbd I added these lines.
>> SBD_OPTS="-W"
>> SBD_DEVICE="/dev/sdb1"
>> I added 'modprobe softdog' to rc.local
>>
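As a sanity check at this point, the shared device can be initialized
and inspected with the sbd tool itself. A minimal sketch, assuming
/dev/sdb1 is the device configured above and that the slot names match
the cluster node names:

    sbd -d /dev/sdb1 create   # initialize the device metadata, once
    sbd -d /dev/sdb1 list     # once sbd has started on each node, one
                              # slot per node should appear, "clear"
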
>> After getting sbd working, I resumed with Clusters from Scratch, chapter
>> 8.3.
>> I executed these commands *only* on node1.  Am I supposed to run any of
>> these commands on other nodes? 'Clusters from Scratch' does not specify.
> Configuration commands only need to be run once. The cluster
> synchronizes all changes across the cluster.
>
>> pcs cluster cib stonith_cfg
>> pcs -f stonith_cfg stonith create sbd-fence fence_sbd
>> devices="/dev/sdb1" port="node2"
> The above command creates a fence device configured to kill node2 -- but
> it doesn't tell the cluster which nodes the device can be used to kill.
> Thus, even if you try to fence node1, it will use this device, and node2
> will be shot.
>
> The pcmk_host_list parameter specifies which nodes the device can kill.
> If not specified, the device will be used to kill any node. So, just add
> pcmk_host_list=node2 here.
>
> You'll need to configure a separate device to fence node1.
>
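Concretely, that advice would look something like this (a sketch only;
the device names fence-node1 and fence-node2 are just illustrative):

    pcs -f stonith_cfg stonith create fence-node1 fence_sbd \
        devices="/dev/sdb1" port="node1" pcmk_host_list="node1"
    pcs -f stonith_cfg stonith create fence-node2 fence_sbd \
        devices="/dev/sdb1" port="node2" pcmk_host_list="node2"
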
> I haven't used fence_sbd, so I don't know if there's a way to configure
> it as one device that can kill both nodes.

fence_sbd should return a proper dynamic list, so without ports and a
host list it should just work fine. Not even a host map should be
needed. Actually, a host map is not supported, because if sbd used
different node naming than pacemaker, the pacemaker watcher within sbd
would fail.
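
So, assuming the node names match, a single device along these lines
should be enough to cover both nodes (a sketch, untested):

    pcs -f stonith_cfg stonith create sbd-fence fence_sbd devices="/dev/sdb1"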

>
>> pcs -f stonith_cfg property set stonith-enabled=true
>> pcs cluster cib-push stonith_cfg
>>
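After the push it is worth confirming that the change actually took
effect, e.g. with:

    pcs stonith show                           # lists the fence devices
    pcs property list | grep stonith-enabled   # should show "true"
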
>> I then tried this command from node1.
>> stonith_admin --reboot node2
>>
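Before pulling the trigger it can also help to ask pacemaker which
devices it would use for a given target:

    stonith_admin --list node2   # devices able to fence node2
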
>> Node2 did not reboot or even shut down. The command 'sbd -d /dev/sdb1
>> list' showed node2 as off, but I was still logged into it (cluster
>> status on node2 showed not running).
>>
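One likely explanation (an assumption based on the symptoms above): the
poison pill only works if the sbd daemon is running on the target node
and polling the device. If the cluster stack, and with it sbd, was not
running on node2, the "off" message gets written to node2's slot but
never read, so nothing reboots. Worth checking on node2 while the
cluster is up:

    systemctl is-active sbd   # should report "active"
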
>> I rebooted node2, ran this command on it, and started the cluster.
>> sbd -d /dev/sdb1 message node2 clear
>>
>> If I ran this command on node2, node2 rebooted.
>> stonith_admin --reboot node1
>>
>> What have I missed or done wrong?
>>
>>
>> Thank you,
>>
>> Durwin F. De La Rue
>> Management Sciences, Inc.
>> 6022 Constitution Ave. NE
>> Albuquerque, NM  87110
>> Phone (505) 255-8611
>
