[ClusterLabs] Users Digest, Vol 117, Issue 5

Klaus Wenninger kwenning at redhat.com
Thu Oct 10 14:52:54 UTC 2024


On Thu, Oct 10, 2024 at 3:58 PM Angelo Ruggiero via Users <
users at clusterlabs.org> wrote:

> Thanks for answering. It helps.
>
> >The main scenario where poison pill shines is 2-node clusters, where you
> >don't have usable quorum for watchdog-fencing.
>
> Not sure I understand. If there are just 2 nodes and one node fails, it
> cannot respond to the poison pill. Maybe I missed your point.
>

If, in a 2-node setup, one node loses contact with the other or sees some other
reason why it would like the partner node to be fenced, it will try to write the
poison-pill message to the shared disk. If that write goes OK, then after a
configured wait time for the other node to read the message and respond, or for
its watchdog to kick in, it will assume the other node to be fenced.
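
For illustration, here is a minimal sketch of how such a poison-pill (shared-disk)
setup is typically prepared and exercised; the device path and node name are
placeholders and the timeouts are just examples:

    # Initialize the shared LUN as an SBD device (wipes any existing SBD metadata).
    # -1 sets the watchdog timeout, -4 the msgwait timeout (roughly 2x watchdog).
    sbd -d /dev/disk/by-id/example-shared-lun -1 60 -4 120 create

    # Inspect the on-disk header and timeouts.
    sbd -d /dev/disk/by-id/example-shared-lun dump

    # Manually write a poison-pill ('reset') message into the peer's slot,
    # which is essentially what the fencing path does for you.
    sbd -d /dev/disk/by-id/example-shared-lun message node2 reset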

>
> This also begs the follow-up question: what defines "usable quorum"? Do you
> mean, for example, separate independent network hardware and power supply?
>

Quorum in 2-node clusters is a bit different, as both nodes will stay quorate
when losing the connection to each other. To prevent split-brain, if they reboot
on top of that they will only regain quorum once they have seen each other again
(search for 'wait_for_all' to read more).
This behavior is of course not usable for watchdog-fencing, and thus SBD
automatically switches to not relying on quorum in those 2-node setups.
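
For reference, a sketch of the corresponding quorum section in corosync.conf
(only the quorum section is shown, the rest of the file is omitted):

    # /etc/corosync/corosync.conf (excerpt)
    quorum {
        provider: corosync_votequorum
        # Special 2-node mode: the surviving node stays quorate when the peer is lost.
        two_node: 1
        # Implied by two_node: after a (re)start, quorum is only granted once both
        # nodes have seen each other, which prevents split-brain on a cold start.
        wait_for_all: 1
    }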


>
> >Configured with pacemaker-awareness (the default), availability of the
> >shared disk doesn't become an issue: thanks to the fallback to availability
> >of the 2nd node, the disk is no SPOF (single point of failure) in these
> >clusters.
>
> I did not get the gist of what you are trying to say here. 🙂
>
>
I was describing a scenario with 2 cluster nodes plus a single shared disk. With
a kind of 'pure' SBD (no pacemaker-awareness), a node that loses its connection
to the disk would have to self-fence, which would make that disk a so-called
single point of failure: availability of the resources in the cluster would be
reduced to the availability of this single disk.
So I tried to explain why, with pacemaker-awareness, you don't have to fear this
reduction of availability: as long as the node is still part of a healthy,
quorate cluster partition, losing the disk alone doesn't force it to self-fence.
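
As an illustration, a minimal /etc/sysconfig/sbd sketch for such a setup; the
device path and watchdog device are placeholders:

    # /etc/sysconfig/sbd (excerpt)
    # Shared disk used for poison-pill messages (placeholder path).
    SBD_DEVICE="/dev/disk/by-id/example-shared-lun"
    # Watchdog device provided by the hardware or the hypervisor.
    SBD_WATCHDOG_DEV="/dev/watchdog"
    # Pacemaker-awareness: don't self-fence on disk loss alone while the node
    # is still part of a healthy, quorate cluster partition.
    SBD_PACEMAKER="yes"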


> >Other nodes, btw., can still kill a node with watchdog-fencing.
>
> How does that work? When would the killing node tell the other node not to
> keep triggering its watchdog?
> Having written the above sentence, maybe I should go and read up on when the
> poison pill gets sent by the killing node!
>
>
It would either use cluster communication to tell the node to self-fence, or,
if that isn't available, the case quoted below kicks in.
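
As a sketch of what watchdog-fencing looks like from pacemaker's side, assuming
pcs is used; the timeout value is only an example and has to be larger than the
SBD watchdog timeout:

    # Let pacemaker assume a node is dead once this timeout has expired,
    # because the node's watchdog will have rebooted it by then.
    pcs property set stonith-enabled=true
    pcs property set stonith-watchdog-timeout=10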

Hope that makes things a bit clearer.

Regards,
Klaus


> >If the node isn't able to accept another node's wish for it to die, it will
> >have lost quorum and thus have stopped triggering the watchdog anyway.
>
> Yes, that is clear to me; the self-fencing is quite powerful.
>
> Thanks for the response.
>
> ------------------------------
> *From:* Users <users-bounces at clusterlabs.org> on behalf of
> users-request at clusterlabs.org <users-request at clusterlabs.org>
> *Sent:* 10 October 2024 2:00 PM
> *To:* users at clusterlabs.org <users at clusterlabs.org>
> *Subject:* Users Digest, Vol 117, Issue 5
>
> Message: 1
> Date: Wed, 9 Oct 2024 19:03:09 +0200
> From: Klaus Wenninger <kwenning at redhat.com>
> To: Cluster Labs - All topics related to open-source clustering
>         welcomed <users at clusterlabs.org>
> Cc: Angelo Ruggiero <angeloruggiero at yahoo.com>
> Subject: Re: [ClusterLabs] Fencing Approach
>
> On Wed, Oct 9, 2024 at 3:08 PM Angelo Ruggiero via Users <
> users at clusterlabs.org> wrote:
>
> > Hello,
> >
> > My setup....
> >
> >
> >    - We are setting up a pacemaker cluster to run SAP running on RHEL on
> >    VMware virtual machines.
> >    - We will have two nodes for the SAP application server and two nodes
> >    for the HANA database. SAP/RHEL provide good guidance on how to set up
> >    the cluster.
> >    - SAP will need a number of floating IPs to be moved around, as well as
> >    NFS file systems coming from a NetApp device to be mounted/unmounted. SAP
> >    will need processes switched on and off when something happens, planned
> >    or unplanned. I am not clear whether the NetApp device is active and the
> >    other site is DR, but what I know is that the IP addresses just get moved
> >    during a DR incident. Just to be complete, the HANA data sync is done by
> >    HANA itself, most probably async, with an RPO of 15 minutes or so.
> >    - We will have a quorum node as well, hopefully on a separate network;
> >    not sure if it will be on separate VMware infra though.
> >    - I am hoping to be allowed to use the VMware watchdog, although it
> >    might take some persuading as it has been declared "non-standard" for us
> >    by our infra people. I have it already in DEV to play with now.
> >
> > I managed to get the above working just using a floating IP and an NFS
> > mount as my resources, and I can see the following: the self-fencing
> > approach works fine, i.e. the servers reboot when they lose network
> > connectivity and/or become inquorate, as long as they are offering
> > resources.
> >
> > So my questions are in relation to further fencing... I did a lot of
> > reading and saw various references...
> >
> >
> >    1. Use of SBD shared storage
> >
> > The question is what using SBD with shared storage really gives me. I need
> > to justify this shared storage again to the infra guys, but to be honest
> > also to myself. I have been given this infra and will play with it over the
> > next few days.
> >
> >
> >    2. Use of fence_vmware
> >
> > In addition, there is of course the ability to fence using the fence_vmware
> > agents, and again I need to justify why I need this. In this particular
> > case it will be a very hard sell, because the dev/test and prod environments
> > run on the same VMware infra, so using fence_vmware would effectively mean
> > dev is connected to prod, i.e. the user ID for a dev or test box is being
> > provided by a production environment. I do not have this ability at all, so
> > I cannot play with it.
> >
> >
> >
> > My current thought train... i.e. the typical things I think about...
> >
> > Perhaps someone can help me be clear on the benefits of 1 and 2 over and
> > above the setup I think is doable.
> >
> >
> >    1. Gives me the ability to use poison pill.
> >
> >    But in what scenarios does poison pill really help? Why would the other
> >    parts of the cluster want to fence the node if the node itself has not
> >    killed itself because it lost quorum, either because the quorum device
> >    is gone or network connectivity failed and resources need to be switched
> >    off?
> >
> >               What I get is that it is very explicit, i.e. the other nodes
> > tell the other server to die. So it must be a case initiated by the other
> > nodes.
> >               I am struggling to think of a scenario where the other nodes
> > would want to fence it.
> >
>
> The main scenario where poison pill shines is 2-node clusters, where you don't
> have usable quorum for watchdog-fencing.
> Configured with pacemaker-awareness (the default), availability of the shared
> disk doesn't become an issue: thanks to the fallback to availability of the
> 2nd node, the disk is no SPOF (single point of failure) in these clusters.
> Other nodes, btw., can still kill a node with watchdog-fencing. If the node
> isn't able to accept another node's wish for it to die, it will have lost
> quorum and thus have stopped triggering the watchdog anyway.
>
> Regards,
> Klaus
>
> >
> > Possible scenarios, did I miss any?
> >
> >    - Loss of network connection to the node. But that is covered by the
> >    node self-fencing.
> >    - If some monitoring said the node was not healthy or responding...
> >    Maybe this is the case it is good for, but then it must be a partial
> >    failure where the node is still part of the cluster and can respond,
> >    i.e. not an OS freeze or a lost connection, as then the watchdog or the
> >    self-fencing will kick in.
> >    - HW failures: CPU, memory, disk. For virtual hardware, does that
> >    actually ever fail? Sorry if that is a stupid question; I could ask our
> >    infra guys but... Is virtual hardware so reliable that HW failures can
> >    be ignored?
> >    - Loss of shared storage: SAP uses a lot of shared storage via NFS. Not
> >    sure what happens when that fails, I need to research it a bit, but each
> >    node will sort that out itself, I am presuming.
> >    - Human error: but no cluster will fix that, and the human who makes a
> >    change will realise it and revert.
> >
> >        2. fence_vmware
> >
> >       I see this as a better poison pill, as it works at the hardware
> > level. But if I do not need poison pill then I do not need this.
> >
> > In general, OS freezes, or even panics if they take too long, are covered
> > by the watchdog.
> >
> > regards
> > Angelo
> >
> >
> >
> >
> >