[ClusterLabs] cluster with two ESX server

Klaus Wenninger kwenning at redhat.com
Wed Nov 29 14:39:47 EST 2017

On 11/29/2017 08:24 PM, Andrei Borzenkov wrote:
> On 29.11.2017 20:14, Klaus Wenninger wrote:
>> On 11/28/2017 07:41 PM, Andrei Borzenkov wrote:
>>> On 28.11.2017 10:45, Ramann, Björn wrote:
>>>> hi all,
>>>> in my configuration, the 1st node runs on ESX1, the second runs on ESX2. Now I'm looking for a way to configure cluster fencing/stonith across two ESX servers - is this possible?
>>> If you have shared storage, SBD may be an option.
>> True.
>> And if you feel like experimenting you can have a look at
>> https://github.com/wenningerk/sbd/tree/vmware.
>> On ESX you don't have virtual watchdog devices with
>> a kernel driver sitting on top (unlike with e.g.
>> qemu-kvm).
>> This basically is a test-implementation using
>> vSphere HA Application Monitoring as a replacement.
> This sure sounds interesting. Does it work with open-vm-tools or does it
> require VMware tools?

Unfortunately, neither of them. You need libappmonitorlib.so
from the GuestSDK, which I didn't find anywhere else.
Apart from that library you are fine with open-vm-tools.
See VMware_GuestSDK.spec in my github repo for details.
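If you want to probe for the library before building or enabling the vmware backend, something along these lines works; the library name comes from this thread, but the search paths and helper are just an illustrative sketch, not part of sbd itself.

```python
# Hedged sketch: check whether the GuestSDK application-monitoring
# library is installed. The search directories are assumptions; adjust
# them for your distribution.
import ctypes.util
import os

def has_appmonitor_lib(name="libappmonitorlib.so"):
    # ctypes.util.find_library takes the stem without "lib"/".so"
    if ctypes.util.find_library("appmonitorlib"):
        return True
    # Fall back to probing common library directories directly.
    for d in ("/usr/lib64", "/usr/lib", "/usr/local/lib"):
        if os.path.exists(os.path.join(d, name)):
            return True
    return False

print("GuestSDK app-monitoring library present:", has_appmonitor_lib())
```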

When setting up a vSphere cluster, enable Application
Monitoring and check that the following holds:

('Failure interval' = 'Minimum uptime') * 'Maximum per-VM resets' ==
'Maximum reset time window'

Otherwise your 'watchdog' will stop working after 3 resets
until the reset time window is over (maybe never).
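The relation above can be sanity-checked in a few lines. The numeric values below are made-up examples; check your own vSphere cluster settings for the real values and units.

```python
# Hedged sketch: verify the Application Monitoring relation described
# above. All numbers are hypothetical, not vSphere defaults.
failure_interval = 30        # 'Failure interval', assumed equal to 'Minimum uptime'
minimum_uptime = 30          # 'Minimum uptime'
max_per_vm_resets = 3        # 'Maximum per-VM resets'
max_reset_time_window = 90   # 'Maximum reset time window'

# 'Failure interval' must equal 'Minimum uptime' ...
assert failure_interval == minimum_uptime

# ... and their common value times the reset count must fill the window.
consistent = failure_interval * max_per_vm_resets == max_reset_time_window
print("settings consistent:", consistent)
```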


>> In comparison to using softdog, this approach doesn't rely
>> on any working code inside the VM to trigger a reboot.
>>  [root at node4 ~]# sbd query-watchdog
>>   Discovered 3 watchdog devices:
>>   [1] vmware
>>   Identity: VMware Application Monitoring (gray)
>>   Driver: <unknown>
>>   [2] /dev/watchdog
>>   Identity: Software Watchdog
>>   Driver: softdog
>>   CAUTION: Not recommended for use with sbd.
>>   [3] /dev/watchdog0
>>   Identity: Software Watchdog
>>   Driver: softdog
>>   CAUTION: Not recommended for use with sbd.
>> Have in mind that this is just a proof-of-concept
>> implementation. So expect any kind of changes and
>> be aware that in the current state it is definitely
>> not fit to go into any distribution.
>> Regarding building, you can find VMware_GuestSDK.spec
>> in the vmware branch of my sbd fork.
>> Basically this builds rpms from the VMware GuestSDK tarball -
>> both a library binary rpm for the target and a devel rpm
>> for building the vmware-enabled sbd.
>> Regards,
>> Klaus
>>>> I tried to use fence_vmware with vCenter, but then vCenter is a single point of failure, and running two vCenters is currently not possible.
>>> You can run vCenter on a vFT VM, in which case it should be pretty robust.
>>> _______________________________________________
>>> Users mailing list: Users at clusterlabs.org
>>> http://lists.clusterlabs.org/mailman/listinfo/users
>>> Project Home: http://www.clusterlabs.org
>>> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>>> Bugs: http://bugs.clusterlabs.org
