<div dir="ltr">Hi,<div><br></div><div>For fence_vbox, take a look at my older blog post: <a href="https://ox.sk/howto-fence-vbox-cdd3da374ecd">https://ox.sk/howto-fence-vbox-cdd3da374ecd</a></div><div><br></div><div>If all you need is fencing in a state where DLM works, and you promise that you will never have real data on it, there is an easy hack; it really does not matter which fence agent you use. All we care about is whether the 'monitor' action works, so add the option:</div><div><br></div><div>pcmk_monitor_action=metadata<br></div><div><br></div><div>This means that instead of the monitor action, the 'metadata' action will be used, which just prints the XML metadata and succeeds.</div><div><br></div><div>m,</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Fri, Feb 9, 2018 at 6:33 AM, 范国腾 <span dir="ltr"><<a href="mailto:fanguoteng@highgo.com" target="_blank">fanguoteng@highgo.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">Thanks Klaus,<br>
<br>
The information is very helpful. I will try to study fence_vbox and fence_sbd.<br>
<br>
In our test lab, we use IPMI for stonith. But I want to set up a simulated environment on my laptop, so I just need the stonith resource in the started state so that I can create the dlm and clvm resources; I don't need it to really work. Does anybody have any other suggestions?<br>
<br>
<br>
-----Original Message-----<br>
From: Users [mailto:<a href="mailto:users-bounces@clusterlabs.org">users-bounces@clusterlabs.org</a>] On Behalf Of Klaus Wenninger<br>
Sent: February 9, 2018 1:11<br>
To: <a href="mailto:users@clusterlabs.org">users@clusterlabs.org</a><br>
Subject: Re: [ClusterLabs] How to create the stonith resource in virtualbox<br>
<div class="HOEnZb"><div class="h5"><br>
On 02/08/2018 02:05 PM, Andrei Borzenkov wrote:<br>
> On Thu, Feb 8, 2018 at 5:51 AM, 范国腾 <<a href="mailto:fanguoteng@highgo.com">fanguoteng@highgo.com</a>> wrote:<br>
>> Hello,<br>
>><br>
>> I set up a Pacemaker cluster using VirtualBox. There are three nodes. The OS is CentOS 7, and /dev/sdb is the shared storage (all three nodes use the same disk file).<br>
>><br>
>> (1) At first, I create the stonith using this command:<br>
>> pcs stonith create scsi-stonith-device fence_scsi<br>
>> devices=/dev/mapper/fence pcmk_monitor_action=metadata<br>
>> pcmk_reboot_action=off pcmk_host_list="db7-1 db7-2 db7-3" meta<br>
>> provides=unfencing;<br>
>><br>
>> I know the VM does not have /dev/mapper/fence. But sometimes the stonith resource is able to start and sometimes not; I don't know why. It is not stable.<br>
>><br>
> It probably tries to check the resource and fails. The state of the<br>
> stonith resource is irrelevant to the actual fencing operation (this<br>
> resource is only used for periodic checks, not for fencing itself).<br>
><br>
>> (2) Then I use the following command to setup stonith using the shared disk /dev/sdb:<br>
>> pcs stonith create scsi-shooter fence_scsi<br>
>> devices=/dev/disk/by-id/ata-VBOX_HARDDISK_VBc833e6c6-af12c936 meta<br>
>> provides=unfencing<br>
>><br>
>> But the stonith always be stopped and the log show:<br>
>> Feb 7 15:45:53 db7-1 stonith-ng[8166]: warning: fence_scsi[8197]<br>
>> stderr: [ Failed: nodename or key is required ]<br>
>><br>
> Well, you need to provide what is missing - your command did not<br>
> specify any host.<br>
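Concretely, the quoted command could be extended with a host list so the agent knows which nodes (and thus which registration keys) it is responsible for; the device path and node names below are the ones from the original messages in this thread:

```shell
# Same command as quoted above, plus pcmk_host_list so fence_scsi
# receives a nodename/key to work with; provides=unfencing stays as before.
pcs stonith create scsi-shooter fence_scsi \
    devices=/dev/disk/by-id/ata-VBOX_HARDDISK_VBc833e6c6-af12c936 \
    pcmk_host_list="db7-1 db7-2 db7-3" \
    meta provides=unfencing
```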
><br>
>> Could anyone tell me the correct command to set up stonith in a VM on CentOS? Is there any document introducing this that I could study?<br>
<br>
I personally don't have any experience setting up a Pacemaker cluster in VirtualBox.<br>
<br>
Thus I'm limited to giving rather general advice.<br>
<br>
What you might have to verify when using fence_scsi is whether the SCSI emulation VirtualBox offers lives up to fence_scsi's requirements (it relies on SCSI-3 persistent reservations).<br>
I've read about trouble with this in a posting back from 2015; the poster then went for SCSI via iSCSI.<br>
<br>
Otherwise you could look for alternatives to fence_scsi.<br>
<br>
One might be fence_vbox. It doesn't ship with CentOS so far, IIRC, but the upstream repo on GitHub has it.<br>
Fencing via the hypervisor is in general not a bad idea for clusters running in VMs, if you can live with the boundary conditions, such as giving the VMs credentials that allow them to communicate with the hypervisor.<br>
There was some discussion about fence_vbox on the ClusterLabs list a couple of months ago. IIRC there had been issues with using Windows as a host for VirtualBox, but I believe those were fixed in the course of that discussion.<br>
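If you go the fence_vbox route, a configuration might look roughly like the sketch below. The parameter names follow the usual conventions of SSH-based fence agents and should be checked against `fence_vbox -o metadata`; the host address, login, key path, and host map are all made-up illustrations, not a verified recipe:

```shell
# Hypothetical sketch: fence the VMs through the VirtualBox host over SSH.
# Verify the actual agent parameters with `fence_vbox -o metadata`;
# the address, user, key path and VM names below are assumptions.
pcs stonith create vbox-fence fence_vbox \
    ipaddr=192.168.56.1 login=vboxuser \
    identity_file=/root/.ssh/id_rsa \
    pcmk_host_map="db7-1:db7-1;db7-2:db7-2;db7-3:db7-3"
```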
<br>
Another way of doing fencing via a shared disk is fence_sbd (available in CentOS), although it uses the disk quite differently from how fence_scsi does. One difference that might be helpful here is that it places fewer requirements on the emulated disk infrastructure.<br>
On the other hand, sbd in general strongly calls for a good watchdog device (one that brings down your machine, virtual or physical, in a very reliable manner). And AFAIK the only watchdog device available inside a VirtualBox VM is softdog, which doesn't meet this requirement too well, as it relies on the kernel running in the VM being at least partially functional.<br>
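A disk-based sbd setup could be sketched roughly as follows. The device path is illustrative, the sbd package and fence_sbd agent are assumed to be installed, and softdog stands in for a proper watchdog as discussed above:

```shell
# Rough sketch of shared-disk sbd; /dev/sdb is illustrative.
# Load the (soft) watchdog and initialize the sbd slots on the shared disk:
modprobe softdog
sbd -d /dev/sdb create
# On every node, point the sbd daemon at the disk and enable it:
echo 'SBD_DEVICE="/dev/sdb"' >> /etc/sysconfig/sbd
systemctl enable sbd
# After restarting the cluster so sbd is active, create the fence resource:
pcs stonith create sbd-fence fence_sbd devices=/dev/sdb
```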
<br>
Sorry for not being able to help in a more specific way, but I myself would be interested in which ways of fencing people are using for clusters based on VirtualBox VMs ;-)<br>
<br>
Regards,<br>
Klaus<br>
>><br>
>><br>
>> Thanks<br>
>><br>
>><br>
>> Here is the cluster status:<br>
>> [root@db7-1 ~]# pcs status<br>
>> Cluster name: cluster_pgsql<br>
>> Stack: corosync<br>
>> Current DC: db7-2 (version 1.1.16-12.el7_4.7-94ff4df) - partition with quorum<br>
>> Last updated: Wed Feb 7 16:27:13 2018<br>
>> Last change: Wed Feb 7 15:42:38 2018 by root via cibadmin on db7-1<br>
>><br>
>> 3 nodes configured<br>
>> 1 resource configured<br>
>><br>
>> Online: [ db7-1 db7-2 db7-3 ]<br>
>><br>
>> Full list of resources:<br>
>><br>
>> scsi-shooter (stonith:fence_scsi): Stopped<br>
>><br>
>> Daemon Status:<br>
>> corosync: active/disabled<br>
>> pacemaker: active/disabled<br>
>> pcsd: active/enabled<br>
>> ______________________________<wbr>_________________<br>
>> Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
>> <a href="http://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.clusterlabs.org/<wbr>mailman/listinfo/users</a><br>
>><br>
>> Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a> Getting started:<br>
>> <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/<wbr>doc/Cluster_from_Scratch.pdf</a><br>
>> Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
<br>
</div></div></blockquote></div><br></div>