[ClusterLabs] Prevent cluster transition when resource unavailable on both nodes

Ken Gaillot kgaillot at redhat.com
Wed Dec 6 13:25:46 EST 2023


On Wed, 2023-12-06 at 17:55 +0100, Alexander Eastwood wrote:
> Hello, 
> 
> I administer a Pacemaker cluster consisting of 2 nodes, which are
> connected to each other via ethernet cable to ensure that they are
> always able to communicate with each other. A network switch is also
> connected to each node via ethernet cable and provides external
> access.
> 
> One of the managed resources of the cluster is a virtual IP, which is
> assigned to a physical network interface card and thus depends on the
> network switch being available. The virtual IP is always hosted on
> the active node.
> 
> We had the situation where the network switch lost power or was
> rebooted; as a result, both servers reported `NIC Link is Down`. The
> recovery operation on the Virtual IP resource then failed repeatedly
> on the active node, and a transition was initiated. Since the other 

The default reaction to a start failure is to ban the resource from
that node. If it tries to recover repeatedly on the same node, I assume
you set start-failure-is-fatal to false, and/or have a very low
failure-timeout on starts?
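
For reference, a configuration that produces that retry-on-the-same-node
behavior might look like this (pcs syntax; "vip" is a placeholder for
your actual IP resource ID, and exact syntax may vary by pcs version):

    # Cluster-wide: don't ban a resource from a node after a failed start
    pcs property set start-failure-is-fatal=false

    # Per-resource: expire recorded failures after 60s, allowing retries
    pcs resource update vip meta failure-timeout=60s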

> node was also unable to start the resource, the cluster was
> oscillating between the 2 nodes until the NIC links were up again.
> 
> Is there a way to change this behaviour? I am thinking of the
> following sequence of events, but have not been able to find a way to
> configure this:
> 
>  1. active node detects NIC Link is Down, which affects a resource
> managed by the cluster (monitor operation on the resource starts to
> fail)
>  2. active node checks if the other (passive) node in the cluster
> would be able to start the resource

There's really no way to check without actually trying to start it, so
basically you're describing what Pacemaker does.

>  3. if passive node can start the resource, transition all resources
> to passive node

I think maybe the "all resources" part is key. Presumably that means
you have a bunch of other resources colocated with and/or ordered after
the IP, so they all have to stop to try to start the IP elsewhere.

If those resources really do require the IP to be active, then that's
the correct behavior. If they don't, then the constraints could be
dropped, reversed, or made optional or asymmetric.

It sounds like you might want an optional colocation, or a colocation
of the IP with the other resources (rather than vice versa).
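
For example (pcs syntax; "vip" and "app" are placeholders for your
actual resource IDs, and exact syntax may vary by pcs version):

    # Advisory colocation: prefer to keep app with vip, but a finite
    # score means app may keep running even if vip can't start anywhere
    pcs constraint colocation add app with vip 100

    # Or reverse the dependency: place vip where app is running,
    # rather than moving app around to follow vip
    pcs constraint colocation add vip with app INFINITY

    # Similarly, an advisory ordering, if app doesn't strictly need
    # vip to be up before it starts
    pcs constraint order start vip then start app kind=Optional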

>  4. if passive node is unable to start the resource, then there is
> nothing to be gained by a transition, so no action should be taken

If start-failure-is-fatal is left at its default of true, and no
failure-timeout is configured, then the cluster will try the start once
per node and then wait for manual cleanup. If the colocation is made
optional or reversed, the other resources can continue to run.
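
In that case, once the switch is back, recovery is just a matter of
clearing the recorded start failures so Pacemaker will try the resource
again, e.g. (again assuming the IP resource is named "vip"):

    # Clear the resource's failure history; Pacemaker then re-attempts it
    pcs resource cleanup vip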

> 
> Any pointers or advice will be much appreciated!
> 
> Thank you and kind regards,
> 
> Alex Eastwood
-- 
Ken Gaillot <kgaillot at redhat.com>


