[ClusterLabs] Fencing Approach

Klaus Wenninger kwenning at redhat.com
Wed Oct 9 17:03:09 UTC 2024


On Wed, Oct 9, 2024 at 3:08 PM Angelo Ruggiero via Users <
users at clusterlabs.org> wrote:

> Hello,
>
> My setup....
>
>
>    - We are setting up a Pacemaker cluster to run SAP, running on RHEL on
>    VMware virtual machines.
>    - We will have two nodes for the SAP application servers and two nodes
>    for the HANA database. SAP/RHEL provide good support on how to set up
>    the cluster. 🙂
>    - SAP will need a number of floating IPs to be moved around, as well as
>    NFS file systems coming from a NetApp device to be mounted/unmounted.
>    SAP will need processes switched on and off when something happens,
>    planned or unplanned. I am not clear whether the NetApp device is
>    active and the other site is DR, but what I know is that the IP
>    addresses just get moved during a DR incident. Just to be complete,
>    the HANA data sync is done by HANA itself, most probably async with an
>    RPO of 15 minutes or so.
>    - We will also have a quorum node, hopefully on a separate network; I
>    am not sure if it will be on separate VMware infra though.
>    - I am hoping to be allowed to use the VMware watchdog, although it
>    might take some persuading, as it has been declared "non-standard" for
>    us by our infra people. I already have it in DEV to play with now.
>
> I managed to get the above working with just a floating IP and an NFS
> mount as my resources, and I can see the following. The self-fencing
> approach works fine, i.e. the servers reboot when they lose network
> connectivity and/or become inquorate, as long as they are offering
> resources.
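
For reference, the self-fencing described above is presumably sbd running
in watchdog-only (diskless) mode. A minimal sketch of that setup with pcs,
assuming the VMware-provided /dev/watchdog and a quorum-device host called
qdevice-host - the names and timeouts are placeholders, so check the
RHEL/SAP HA documentation for values suited to your environment:

    # /etc/sysconfig/sbd on every node (no SBD_DEVICE => diskless mode)
    SBD_WATCHDOG_DEV=/dev/watchdog
    SBD_WATCHDOG_TIMEOUT=5
    SBD_PACEMAKER=yes

    # enable the sbd daemon; it only takes effect after a cluster restart
    systemctl enable sbd

    # quorum device on a third host (needs corosync-qdevice installed)
    pcs quorum device add model net host=qdevice-host algorithm=ffsplit

    # tell pacemaker that watchdog self-fencing is available
    pcs property set stonith-enabled=true
    pcs property set stonith-watchdog-timeout=10

With this, a node that loses quorum or stops feeding the watchdog reboots
itself, which matches the behaviour described above.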
>
> So my questions are in relation to further fencing. I did a lot of
> reading and saw various references...
>
>
>    1. Use of sbd shared storage
>
> The question is: what does using sbd with shared storage really give me?
> I need to justify why I need this shared storage to the infra guys, but
> to be honest also to myself. I have been given this infra and will play
> with it over the next few days.
>
>
>    2. Use of fence vmware
>
> In addition there is of course the ability to fence using the
> fence_vmware agents, and again I need to justify why I need this. In
> this particular case it will be a very hard sell, because the dev/test
> and prod environments run on the same VMware infra, so using fence_vmware
> would effectively mean dev is connected to prod, i.e. the user ID for a
> dev or test box is being provided by a production environment. I do not
> have this ability at all, so I cannot play with it.
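
Just for completeness, a power-fencing stonith resource with one of the
VMware agents would look roughly like the sketch below. This assumes pcs,
the fence_vmware_rest agent, a vCenter reachable as vcenter.example.com,
and VM names vm-node1/vm-node2 - all placeholders - and the exact parameter
names differ between fence_vmware_rest and fence_vmware_soap, so check
"pcs stonith describe <agent>" first:

    # map cluster node names to the VM names known to vCenter
    pcs stonith create vmfence fence_vmware_rest \
        ip=vcenter.example.com ssl=1 \
        username=fence-user password=fence-password \
        pcmk_host_map="node1:vm-node1;node2:vm-node2"

    pcs property set stonith-enabled=true

The credential concern is real: whichever account is used here needs
power-operation rights on the cluster VMs in vCenter, which is exactly the
dev-to-prod coupling described above.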
>
>
>
> My current thought train... i.e. the typical things I think about...
>
> Perhaps someone can help me be clear on the benefits of 1 and 2 over and
> above the setup I think is doable.
>
>
>    1. Gives me the ability to use poison pill.
>
>    But in what scenarios does poison pill really help? Why would the
>    other parts of the cluster want to fence the node, if the node itself
>    has not killed itself after losing quorum (either because the quorum
>    device is gone or network connectivity failed) and resources need to
>    be switched off?
>
>               What I get is that it is very explicit, i.e. the other
> nodes tell the server to die. So it must be a case initiated by the other
> nodes.
>               I am struggling to think of a scenario where the other
> nodes would want to fence it.
>

The main scenario where poison pill shines is two-node clusters, where you
don't have usable quorum for watchdog-fencing.
Configured with pacemaker-awareness - the default - availability of the
shared disk doesn't become an issue: thanks to the fallback to availability
of the 2nd node, the disk is no SPOF (single point of failure) in these
clusters.
Other nodes, btw., can still kill a node with watchdog-fencing. If the node
isn't able to accept that wish of another node for it to die, it will have
lost quorum and thus will have stopped triggering the watchdog anyway.
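
To make that concrete, a minimal sketch of a shared-disk (poison-pill) sbd
setup with pcs - the disk path below is a placeholder for whatever small
shared LUN the infra team would provide, and the fence_sbd parameters are
best checked with "pcs stonith describe fence_sbd":

    # initialize the sbd header on the shared disk (run once, on one node)
    sbd -d /dev/disk/by-id/sbd-disk create

    # /etc/sysconfig/sbd on every node
    SBD_DEVICE=/dev/disk/by-id/sbd-disk
    SBD_WATCHDOG_DEV=/dev/watchdog
    SBD_PACEMAKER=yes

    systemctl enable sbd

    # stonith resource so that other nodes can write a poison pill
    pcs stonith create sbd-fence fence_sbd devices=/dev/disk/by-id/sbd-disk
    pcs property set stonith-enabled=true

The watchdog remains the fallback: a node that can no longer see the disk
or the rest of the cluster stops triggering its watchdog and resets itself.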

Regards,
Klaus

>
> Possible scenarios, did I miss any?
>
>    - Loss of network connection to the node. But that is covered by the
>    node self-fencing.
>    - If some monitoring said the node was not healthy or not
>    responding... Maybe this is the case it is good for, but then it must
>    be a partial failure where the node is still part of the cluster and
>    can respond, i.e. not an OS freeze, and not just a lost connection, as
>    then the watchdog or the self-fencing will kick in.
>    - HW failures: CPU, memory, disk. For virtual hardware, does that
>    actually ever fail? Sorry if that is a stupid question; I could ask
>    our infra guys but...
>    So, is virtual hardware so reliable that HW failures can be ignored?
>    - Loss of shared storage. SAP uses a lot of shared storage via NFS.
>    Not sure what happens when that fails; I need to research it a bit,
>    but each node will sort that out itself, I am presuming (see the
>    sketch after this list).
>    - Human error: but no cluster will fix that, and the human who makes a
>    change will realise it and revert. 🙂
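
On the NFS point: what happens on failure largely depends on how the
Filesystem resources are configured, and a failed mount is one of the
cases where the *other* nodes end up requesting fencing. A rough sketch,
with server, export and mount point as made-up placeholders:

    # hypothetical NFS mount managed by the cluster; on-fail=fence makes a
    # failed monitor escalate to fencing instead of a local recovery
    pcs resource create sap_nfs ocf:heartbeat:Filesystem \
        device="nfs-server:/export/sapmnt" directory="/sapmnt" fstype="nfs" \
        op monitor interval=20s on-fail=fence

Independently of that, a stop operation that fails - for example an
unmount hanging on a dead NFS server - leads to the node being fenced by
default (with stonith enabled), because that is the only way the cluster
can safely start the resource elsewhere.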
>
>        2. Fence vmware
>
>       I see this as a better poison pill, as it works at the hardware
> level. But if I do not need poison pill, then I do not need this.
>
> In general, OS freezes or even panics, if they take too long, are covered
> by the watchdog.
>
> regards
> Angelo
>
>
>
>
>
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/
>