[ClusterLabs] Using pacemaker for manual failover only?

Klaus Wenninger kwenning at redhat.com
Fri May 27 02:59:38 EDT 2016


On 05/26/2016 08:55 PM, Stephano-Shachter, Dylan wrote:
> I tried the location -INFINITY trick and it seems to work quite well.
> Thanks for the advice.
>
> It seems to me that if I am not failing over automatically, then there
> is no good reason to run a stonith resource. Is this correct or is it
> still needed for some reason?

Well, even if you have configured your cluster so that the rule-set allows
the critical resource/role to run on just one node, the cluster will still
help you enforce that it is running nowhere else.
So e.g. if you switch your -INFINITY around and the resource/role refuses
to die on the former node, fencing saves you from having to kill that node
manually.

But I'd say that is just convenience. If you are there doing the
config-change manually anyway, you can just as well observe for yourself
whether everything reacts as desired ...
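
To illustrate what I mean by switching the -INFINITY around, here is a
rough, untested sketch with pcs - resource and node names (ms_drbd, node1,
node2) are made up, and the constraint is put on the master role only so
the drbd secondary keeps replicating on the other node:

    # ban the master role from node2, so promotion can only happen on node1
    pcs constraint location ms_drbd rule role=master score=-INFINITY \
        '#uname' eq node2

    # manual failover: drop that constraint and ban node1 instead
    pcs constraint remove <constraint-id>   # id shown by 'pcs constraint --full'
    pcs constraint location ms_drbd rule role=master score=-INFINITY \
        '#uname' eq node1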

>
> On Tue, May 24, 2016 at 11:02 AM, Ken Gaillot <kgaillot at redhat.com> wrote:
>
>     On 05/24/2016 04:13 AM, Klaus Wenninger wrote:
>     > On 05/24/2016 09:50 AM, Jehan-Guillaume de Rorthais wrote:
>     >> On Tue, 24 May 2016 01:53:22 -0400,
>     >> Digimer <lists at alteeve.ca> wrote:
>     >>
>     >>> On 23/05/16 03:03 PM, Stephano-Shachter, Dylan wrote:
>     >>>> Hello,
>     >>>>
>     >>>> I am using pacemaker 1.1.14 with pcs 0.9.149. I have successfully
>     >>>> configured pacemaker for highly available nfs with drbd. Pacemaker
>     >>>> allows me to easily failover without interrupting nfs connections. I,
>     >>>> however, am only interested in failing over manually (currently I use
>     >>>> "pcs resource move <drbd_rsc> <target_node> --master"). I would like for
>     >>>> the cluster to do nothing when a node fails unexpectedly.
>     >>>>
>     >>>> Right now the solution I am going with is to run
>     >>>> "pcs property set is-managed-default=no"
>     >>>> until I need to failover, at which point I set is-managed-default=yes,
>     >>>> then failover, then set it back to no.
>     >>>>
>     >>>> While this method works for me, it can be unpredictable if people run
>     >>>> move commands at the wrong time.
>     >>>>
>     >>>> Is there a way to disable automatic failover permanently while still
>     >>>> allowing manual failover (with "pcs resource move" or with something else)?
>     >> Try to set up your cluster without the "interval" parameter on the monitor
>     >> action? The resource will be probed during the target-action (start/promote I
>     >> suppose), but then it should not get monitored anymore.
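
If someone wants to try that with pcs, dropping the recurring monitor
should look roughly like the following - the resource name and interval
are placeholders, and the interval has to match the operation that is
currently configured:

    pcs resource op remove my_drbd monitor interval=10s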
>     >
>     > Ignoring the general cluster yes/no question a simple solution would
>     > be to bind the master-role to a node-attribute that you move around
>     > manually.
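
To make that node-attribute idea a bit more concrete, an untested sketch
(attribute, resource and node names are made up); the rule only constrains
the master role, so the slaves keep running everywhere:

    # forbid promotion wherever the attribute is not set
    pcs constraint location ms_drbd rule role=master score=-INFINITY \
        not_defined manual-master

    # allow the master on node1
    crm_attribute --node node1 --name manual-master --update 1

    # manual failover: move the attribute to node2
    crm_attribute --node node1 --name manual-master --delete
    crm_attribute --node node2 --name manual-master --update 1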
>
>     This is the right track. There are a number of ways you could do it, but
>     the basic idea is to use constraints to only allow the resources to run
>     on one node. When you want to fail over, flip the constraints.
>
>     I'd colocate everything with one (most basic) resource, so then all you
>     need is one constraint for that resource to flip. It could be as simple
>     as a -INFINITY location constraint on the node you don't want to run on.
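
Again just a rough sketch with made-up names: tie the nfs stack to the drbd
master with a colocation constraint, and then the only thing left to flip
for a manual failover is the single location constraint on the master role
(as in the sketch further up):

    # nfs_group (fs, exports, VIP, ...) must run where ms_drbd is master
    pcs constraint colocation add nfs_group with master ms_drbd INFINITY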
>




