[ClusterLabs] Recommendation Fencing
Klaus Wenninger
kwenning at redhat.com
Mon Sep 2 16:48:46 UTC 2024
On Sat, Aug 31, 2024 at 5:13 PM Angelo M Ruggiero via Users <
users at clusterlabs.org> wrote:
> Hi,
>
> Thanks for the previous replies. I am currently reading them.
>
> Can I be cheeky? I have been researching and am running into some other
> "organisational issues" around fencing.
>
> Maybe it is possible to recommend which fencing to use, and whether to use sbd
> with just the watchdog or also with shared storage. Or some pointers...
>
> Here is my setup
>
> 2 nodes, with a quorum device
>
> The nodes run on VMware with RHEL 8+; there is a VMware watchdog available (I
> even tried it out a bit)
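>
> For reference, this is roughly how I poked at it, using sbd's built-in
> watchdog commands (the device name will vary):
>
>   # list the watchdog devices sbd can find (read-only, safe to run)
>   sbd query-watchdog
>
>   # verify the watchdog really resets the VM - this WILL reboot the node
>   # sbd test-watchdog -w /dev/watchdog0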
>
> Each node is at a different site, but they are close, i.e. approx. 15 km apart
> with a good connection (I work for a big bank in Switzerland).
>
> Application/DB shared storage is provided via NFS mounts and is available on
> both sides. The shared storage can be used at both sites.
>
> We want to run SAP with HANA on the above setup using Pacemaker. There are
> some restrictions around sbd and VMware, but let's put that to one side;
> purely from a Pacemaker point of view, whatever option I choose I have to
> make sure it is SAP certified... Oh joy. 🙂
>
> I see the following main options
>
> Option 1. Just fence_vmware with no sbd at all
>
> Option 2. fence_vmware with sbd, but watchdog only
>
> Option 3. fence_vmware with sbd, watchdog and shared storage
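>
> In pcs terms I imagine the three options roughly like this (hostnames,
> devices and credentials below are placeholders, not our real setup):
>
>   # Option 1: fence_vmware_rest as the only stonith device
>   pcs stonith create vmfence fence_vmware_rest \
>       ip=vcenter.example.com username=fence-user password=secret ssl=1 \
>       pcmk_host_map="node1:vm-node1;node2:vm-node2"
>
>   # Option 2: the same stonith device, plus watchdog-only sbd
>   pcs stonith sbd enable --watchdog=/dev/watchdog
>   pcs property set stonith-watchdog-timeout=10
>
>   # Option 3: sbd with watchdog and a shared disk (poison pill)
>   pcs stonith sbd device setup --device=/dev/disk/by-id/shared-disk
>   pcs stonith sbd enable --device=/dev/disk/by-id/shared-disk
>   pcs stonith create sbdfence fence_sbd devices=/dev/disk/by-id/shared-disk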
>
> The organisational issue is that the sbd shared storage is considered
> non-standard, although we are setting up Oracle RAC, which needs a similar
> setup.
>
> From what I have read around, option 3 is what I sort of think is good, as it
> provides poison-pill and self-fencing.
>
Which benefits are you expecting from poison pill fencing that make the
hassle with the shared disk(s) worthwhile?
Poison-pill fencing is usually a no-brainer in pure 2-node setups, but as
you already have qdevice ...
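(Just for context, a 2-node cluster with qdevice is typically set up
something like this - the qnetd host name is made up:

  # on the quorum-device host at the third site
  pcs qdevice setup model net --enable --start

  # on one of the cluster nodes
  pcs quorum device add model net host=qnetd.example.com algorithm=ffsplit

ffsplit vs. lms is a choice of its own; ffsplit is what you'll commonly
see for 2 nodes.)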
What would your shared-disk setup look like? Are you going for a single disk
at one site or for 3 disks across all 3 sites?
In the case of a single disk, having the site where it resides become
isolated would prevent resource recovery - and being able to recover on an
isolated site is usually a nice option with sbd.
When going for 3 disks you may not have that limitation. I don't know exactly
about the external sbd fence script, but when using fence_sbd you would have
to use the most current upstream version (meanwhile already 2 years old);
otherwise fencing would fail if not all 3 disks are reachable.
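For reference, a 3-disk setup would roughly look like this (disk paths are
made up):

  # /etc/sysconfig/sbd - one disk per site, semicolon-separated
  SBD_DEVICE="/dev/disk/by-id/disk-site1;/dev/disk/by-id/disk-site2;/dev/disk/by-id/disk-site3"

  # fence_sbd takes the same disks, comma-separated
  pcs stonith create sbdfence fence_sbd \
      devices=/dev/disk/by-id/disk-site1,/dev/disk/by-id/disk-site2,/dev/disk/by-id/disk-site3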
When going with a single disk that is made available via some mechanism that
is invisible to sbd, you obviously have to investigate how that mechanism
works and how it is going to impact timing. It is obvious that you have to
prevent a situation where one site can happily write to the shared disk while
the other one is seeing a replicated but disconnected (or slowly updated)
version.
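The main knobs on the sbd side are the on-disk timeouts, e.g. (values purely
illustrative - they have to be derived from your storage behaviour):

  # watchdog timeout 20s, msgwait 40s (msgwait should be >= 2x watchdog timeout)
  sbd -d /dev/disk/by-id/shared-disk -1 20 -4 40 create

  # stonith-timeout in pacemaker then has to cover msgwait plus some headroom
  pcs property set stonith-timeout=60s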
Not a complete and well-structured consideration of the pros and cons for all
the options - but my 50ct for now ...
Regards,
Klaus
>
> What I do not yet have clear in my head is the pros and cons and what
> situations can be handled; I will work that out next week.
>
> I have discounted other fence agents as I am not sure they work on VMware,
> but I am happy to be told about other options.
>
> Any input gratefully received.
>
> regards
> Angelo