[ClusterLabs] SBD with shared block storage (and watchdog?)

Klaus Wenninger kwenning at redhat.com
Mon Feb 13 13:04:15 EST 2017

On 02/13/2017 06:34 PM, durwin at mgtsciences.com wrote:
> I am working to get an active/active cluster running.  
> I have Windows 10 running 2 Fedora 25 Virtualbox VMs.
> VMs named node1, and node2.
> I created a vdi disk and set it to shared.
> I formatted it to gfs2 with this command.
> mkfs.gfs2 -t msicluster:msigfs2 -j 2 /dev/sdb1
> After installing 'dlm' and ensuring the guest additions were
> installed, I was able to mount the gfs2 partition.
> I then followed.
> https://github.com/l-mb/sbd/blob/master/man/sbd.8.pod
> I used this command.
> sbd -d /dev/sdb1 create

To be honest, I have no experience with using the same partition for
a filesystem and sbd in parallel.
I would guess you would at least have to tell the filesystem to
reserve some space for sbd; otherwise the two will overwrite each
other's on-disk data.
For a first attempt I would use a separate partition dedicated to sbd.
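A minimal sketch of that layout, assuming the shared disk is /dev/sdb and you are free to repartition it (device names and sizes here are illustrative, not taken from the thread):

```shell
# Carve out a small dedicated partition for sbd (a few MB is plenty;
# sbd needs little more than a header sector plus one sector per slot).
# Assumes a partition table already exists on /dev/sdb.
parted -s /dev/sdb mkpart primary 1MiB 10MiB     # -> /dev/sdb1 for sbd
parted -s /dev/sdb mkpart primary 10MiB 100%     # -> /dev/sdb2 for gfs2

# Initialize sbd on its own partition ...
sbd -d /dev/sdb1 create

# ... and put the filesystem on the other one.
mkfs.gfs2 -t msicluster:msigfs2 -j 2 /dev/sdb2
```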

> Using sbd to 'list' returns nothing, but 'dump' shows this.

Did you point 'list' at /dev/sdb1 as well?
(sbd -d /dev/sdb1 list)
It may still return nothing, as none of the slots you created
has been used yet.
You can try to allocate one manually, though.
(sbd -d /dev/sdb1 allocate test)
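Put together, the check could look like this (assuming the same /dev/sdb1 device; the slot name 'test' is arbitrary):

```shell
# Allocate a slot named 'test' so that 'list' has something to show.
sbd -d /dev/sdb1 allocate test

# List the slots on the device; the freshly allocated slot
# should now show up.
sbd -d /dev/sdb1 list
```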

> fc25> sbd -d /dev/sdb1 dump
> ==Dumping header on disk /dev/sdb1
> Header version     : 2.1
> UUID               : 6094f0f4-2a07-47db-b4f7-6d478464d56a
> Number of slots    : 255
> Sector size        : 512
> Timeout (watchdog) : 5
> Timeout (allocate) : 2
> Timeout (loop)     : 1
> Timeout (msgwait)  : 10
> ==Header on disk /dev/sdb1 is dumped
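One thing worth sanity-checking in that dump is the relationship between the timeouts: the usual guidance is that msgwait should be at least twice the watchdog timeout, so that a node being fenced has written its poison-pill reply (or been reset) before the peer gives up waiting. A quick check against the values dumped above:

```shell
# Values taken from the 'sbd dump' output above.
WATCHDOG_TIMEOUT=5
MSGWAIT=10

# Rule of thumb: msgwait >= 2 * watchdog timeout.
# (Both can be set at create time, e.g.
#  'sbd -d <dev> -1 <watchdog_timeout> -4 <msgwait> create'.)
if [ "$MSGWAIT" -ge $((2 * WATCHDOG_TIMEOUT)) ]; then
    echo "timeouts look sane"
else
    echo "msgwait too small" >&2
fi
```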
> I then tried the 'watch' command, and journalctl shows the errors listed below.
> sbd -d /dev/sdb1 -W -P watch
> Feb 13 09:54:09 node1 sbd[6908]:    error: watchdog_init: Cannot open
> watchdog device '/dev/watchdog': No such file or directory (2)
> Feb 13 09:54:09 node1 sbd[6908]:  warning: cleanup_servant_by_pid:
> Servant for pcmk (pid: 6910) has terminated
> Feb 13 09:54:09 node1 sbd[6908]:  warning: cleanup_servant_by_pid:
> Servant for /dev/sdb1 (pid: 6909) has terminated

Well, that is to be expected if the kernel doesn't see a watchdog device ...

> From
> http://blog.clusterlabs.org/blog/2015/sbd-fun-and-profit
> I installed watchdog.
> My /etc/sysconfig/sbd contains:
> SBD_WATCHDOG_DEV=/dev/watchdog
> The sbd-fun-and-profit post says to use this command:
> virsh edit vmnode

You would do that on the host if you were running Linux as the
host OS and using libvirt to control the virtualization.
I haven't played with VirtualBox and watchdog devices, but it is
probably possible to add one, or one may even be there by default.
Your best bet is the graphical guest configuration: search the list
of devices that can be added.
I don't know whether it would be a virtual version of a device that
exists in the physical world (so that the Linux kernel is expected
to have a driver for it) or whether the driver comes with the guest
additions.
Otherwise you can of course fall back to softdog - but be aware that,
being a kernel module itself, it won't reset the machine if the
kernel is hanging.
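If VirtualBox turns out not to emulate a hardware watchdog, a softdog-based fallback could look roughly like this (run as root inside the guest; the /etc/sysconfig/sbd values shown are illustrative, matching the device used earlier in the thread):

```shell
# Load the software watchdog module now ...
modprobe softdog

# ... and make it load again on every boot.
echo softdog > /etc/modules-load.d/softdog.conf

# The device node should now exist:
ls -l /dev/watchdog

# /etc/sysconfig/sbd should then point at it, e.g.:
#   SBD_DEVICE=/dev/sdb1
#   SBD_WATCHDOG_DEV=/dev/watchdog
```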


> But there is no vmnode and no instructions on how to create it.
> Is anyone able to piece together the missing steps?
> Thank you.
> Durwin F. De La Rue
> Management Sciences, Inc.
> 6022 Constitution Ave. NE
> Albuquerque, NM  87110
> Phone (505) 255-8611
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
