[ClusterLabs] I question whether STONITH is working.
kwenning at redhat.com
Thu Feb 16 10:27:07 UTC 2017
On 02/15/2017 10:30 PM, Ken Gaillot wrote:
> On 02/15/2017 12:17 PM, durwin at mgtsciences.com wrote:
>> I have 2 Fedora VMs (node1, and node2) running on a Windows 10 machine
>> using Virtualbox.
>> I began with this.
>> When it came to fencing, I referred to this.
>> To the file /etc/sysconfig/sbd I added these lines.
>> I added 'modprobe softdog' to rc.local
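The exact lines added to /etc/sysconfig/sbd are not quoted above; a typical minimal configuration for a setup like this (the disk path comes from the thread, the variable values are assumptions) would look something like:

```shell
# /etc/sysconfig/sbd -- typical minimal settings (values are assumptions;
# only the disk path /dev/sdb1 is taken from the thread)
SBD_DEVICE="/dev/sdb1"             # shared disk used for sbd messaging
SBD_WATCHDOG_DEV="/dev/watchdog"   # provided by the softdog module here
SBD_DELAY_START="no"

# On a systemd-based Fedora, the softdog module can also be loaded at
# boot via modules-load.d instead of an rc.local modprobe:
echo softdog > /etc/modules-load.d/softdog.conf
```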
>> After getting sbd working, I resumed with Clusters from Scratch, chapter
>> I executed these commands *only* on node1. Am I supposed to run any of
>> these commands on other nodes? 'Clusters from Scratch' does not specify.
> Configuration commands only need to be run once, on any one node; the
> cluster synchronizes all changes to all nodes.
>> pcs cluster cib stonith_cfg
>> pcs -f stonith_cfg stonith create sbd-fence fence_sbd
>> devices="/dev/sdb1" port="node2"
> The above command creates a fence device configured to kill node2 -- but
> it doesn't tell the cluster which nodes the device can be used to kill.
> Thus, even if you try to fence node1, it will use this device, and node2
> will be shot.
> The pcmk_host_list parameter specifies which nodes the device can kill.
> If not specified, the device will be used to kill any node. So, just add
> pcmk_host_list=node2 here.
> You'll need to configure a separate device to fence node1.
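The two suggestions above (restrict each device with pcmk_host_list, and add a second device for node1) might look like this; the device names are hypothetical, while the disk path and node names come from the thread:

```shell
# One fence device per node, each restricted to the node it can kill.
# Device names (sbd-fence-node1/2) are made up for illustration.
pcs -f stonith_cfg stonith create sbd-fence-node2 fence_sbd \
    devices="/dev/sdb1" port="node2" pcmk_host_list="node2"
pcs -f stonith_cfg stonith create sbd-fence-node1 fence_sbd \
    devices="/dev/sdb1" port="node1" pcmk_host_list="node1"
```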
> I haven't used fence_sbd, so I don't know if there's a way to configure
> it as one device that can kill both nodes.
fence_sbd should return a proper dynamic list, so without ports or a
host list it should just work fine. Not even a host map should be
needed. In fact, a host map is not supported: if sbd uses different
node naming than pacemaker, the pacemaker watcher within sbd will fail.
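A single-device setup along those lines would be a simpler sketch (the device name follows the thread; the property and push commands repeat the poster's own steps):

```shell
# One fence_sbd device covering both nodes: no port= and no
# pcmk_host_list, relying on the agent's dynamic target list.
pcs cluster cib stonith_cfg
pcs -f stonith_cfg stonith create sbd-fence fence_sbd devices="/dev/sdb1"
pcs -f stonith_cfg property set stonith-enabled=true
pcs cluster cib-push stonith_cfg
```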
>> pcs -f stonith_cfg property set stonith-enabled=true
>> pcs cluster cib-push stonith_cfg
>> I then tried this command from node1.
>> stonith_admin --reboot node2
>> Node2 did not reboot or even shut down. The command 'sbd -d /dev/sdb1
>> list' showed node2 as off, but I was still logged into it (cluster
>> status on node2 showed not running).
>> I rebooted node2, ran this command on it, and started the cluster.
>> sbd -d /dev/sdb1 message node2 clear
>> If I ran this command on node2, node2 rebooted.
>> stonith_admin --reboot node1
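Once the device configuration is fixed, a quick check along these lines (commands taken from the thread; the expected behavior is what a correct setup should show, not observed output) can confirm fencing targets the right node:

```shell
# Each node's slot on the shared disk should read "clear" before a test.
sbd -d /dev/sdb1 list

# Run from node1: with a correctly targeted device, this should reboot
# node2 (and only node2), rather than shooting the wrong node.
stonith_admin --reboot node2
```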
>> What have I missed or done wrong?
>> Thank you,
>> Durwin F. De La Rue
>> Management Sciences, Inc.
>> 6022 Constitution Ave. NE
>> Albuquerque, NM 87110
>> Phone (505) 255-8611
> Users mailing list: Users at clusterlabs.org
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org