[ClusterLabs] How Pacemaker reacts to fast changes of the same parameter in configuration

Klaus Wenninger kwenning at redhat.com
Tue Nov 8 11:54:10 UTC 2016


On 11/08/2016 11:40 AM, Kostiantyn Ponomarenko wrote:
> Hi,
>
> I need a way to do a manual fail-back on demand.
> To be clear, I don't want it to be ON/OFF; I want it to be more like
> "one shot".
> So far, the most reasonable way I have found is to set
> "resource-stickiness" to a different value, and then set it back to
> what it was.
> To do that I created a simple script with two lines:
>
>     crm configure rsc_defaults resource-stickiness=50
>     crm configure rsc_defaults resource-stickiness=150
>
> There is no delay before setting the original value back.
> If I call this script, I get what I want - Pacemaker moves resources
> to their preferred locations, and "resource stickiness" is set back to
> its original value. 
>
> Although it works, I still have a few concerns about this approach.
> Will I get the same behavior under heavy load, with delays on the
> systems in the cluster (which is entirely possible and a normal case
> in my environment)?
> How does Pacemaker treat a fast change of this parameter?
> I am worried that if "resource-stickiness" is set back to its original
> value too fast, then no fail-back will happen. Is that possible, or
> shouldn't I worry about it?

AFAIK the pengine is interrupted while calculating a more complicated
transition if the situation has changed, and a transition that is
currently being executed is aborted if the input from the pengine has
changed.
So I would definitely worry!
What you could do is issue 'crm_simulate -Ls' in between and grep for
an empty transition.
There might be more elegant ways, but that should be safe.
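
If you want to script that settle check, a minimal sketch could look
like the following. This is only an illustration: it assumes crmsh and
crm_simulate are on the PATH, and that action lines under "Transition
Summary:" begin with " * ", which may differ between Pacemaker
versions, so verify the output format on your installation first.

    #!/bin/sh
    # One-shot fail-back: lower stickiness, wait for the cluster to
    # settle, then restore the original value.

    crm configure rsc_defaults resource-stickiness=50

    # Poll until the policy engine computes an empty transition, i.e.
    # no pending actions remain and resources sit on their preferred
    # nodes. The grep pattern assumes actions are listed as " * ..."
    # under "Transition Summary:" in the crm_simulate output.
    while crm_simulate -Ls | sed -n '/Transition Summary:/,$p' | grep -q '^ \* '; do
        sleep 5
    done

    crm configure rsc_defaults resource-stickiness=150

Newer Pacemaker versions also ship 'crm_resource --wait', which blocks
until no further actions are pending; where available, that could
replace the polling loop above.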

> Thank you,
> Kostia
