[ClusterLabs] Coming in Pacemaker 2.0.4: shutdown locks

Ken Gaillot kgaillot at redhat.com
Wed Feb 26 14:53:01 EST 2020


On Wed, 2020-02-26 at 06:52 +0200, Strahil Nikolov wrote:
> On February 26, 2020 12:30:24 AM GMT+02:00, Ken Gaillot <kgaillot at redhat.com> wrote:
> > Hi all,
> > 
> > We are a couple of months away from starting the release cycle for
> > Pacemaker 2.0.4. I'll highlight some new features between now and
> > then.
> > 
> > First we have shutdown locks. This is a narrow use case that I don't
> > expect a lot of interest in, but it helps give pacemaker feature
> > parity with proprietary HA systems, which can help users feel more
> > comfortable switching to pacemaker and open source.
> > 
> > The use case is a large organization with few cluster experts and
> > many junior system administrators who reboot hosts for OS updates
> > during planned maintenance windows, without any knowledge of what
> > the host does. The cluster runs services that have a preferred node
> > and take a very long time to start.
> > 
> > In this scenario, pacemaker's default behavior of moving the service
> > to a failover node when the node shuts down, and moving it back when
> > the node comes back up, results in needless downtime compared to
> > just leaving the service down for the few minutes needed for a
> > reboot.
> > 
> > The goal could be accomplished with existing pacemaker features.
> > Maintenance mode wouldn't work because the node is being rebooted.
> > But you could figure out what resources are active on the node, and
> > use a location constraint with a rule to ban them on all other nodes
> > before shutting down. That's a lot of work for something the cluster
> > can figure out automatically.
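> > 
> > As a rough, untested sketch of that manual workaround (the resource
> > name "myrsc" and node name "node1" are just placeholders), you could
> > load such a constraint with cibadmin, something like:
> > 
> >   # ban myrsc from every node whose name is not node1
> >   cibadmin --create --scope constraints --xml-text \
> >     '<rsc_location id="lock-myrsc" rsc="myrsc">
> >        <rule id="lock-myrsc-rule" score="-INFINITY">
> >          <expression id="lock-myrsc-expr" attribute="#uname"
> >              operation="ne" value="node1"/>
> >        </rule>
> >      </rsc_location>'
> > 
> > ... and you'd have to repeat that for every resource active on the
> > node, then remove the constraints again afterward.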
> > 
> > Pacemaker 2.0.4 will offer a new cluster property, shutdown-lock,
> > defaulting to false to keep the current behavior. If shutdown-lock
> > is set to true, any resources active on a node when it is cleanly
> > shut down will be "locked" to the node (kept down rather than
> > recovered elsewhere). Once the node comes back up and rejoins the
> > cluster, they will be "unlocked" (free to move again if
> > circumstances warrant).
> > 
> > An additional cluster property, shutdown-lock-limit, allows you to
> > set a timeout for the locks so that if the node doesn't come back
> > within that time, the resources are free to be recovered elsewhere.
> > This defaults to no limit.
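> > 
> > It takes a time duration, so for example (the 30-minute value here
> > is arbitrary), something like:
> > 
> >   # free locked resources if the node isn't back within 30 minutes
> >   crm_attribute --type crm_config --name shutdown-lock-limit --update 30min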
> > 
> > If you decide while the node is down that you need the resource to
> > be recovered, you can manually clear a lock with "crm_resource
> > --refresh", specifying both --node and --resource.
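> > 
> > For example (again with placeholder resource and node names):
> > 
> >   # clear the shutdown lock so myrsc can be recovered elsewhere
> >   crm_resource --refresh --resource myrsc --node node1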
> > 
> > There are some limitations using shutdown locks with Pacemaker
> > Remote nodes, so I'd avoid that with the upcoming release, though it
> > is possible.
> 
> Hi Ken,
> 
> Can it be 'shutdown-lock-timeout' instead of 'shutdown-lock-limit'?

I thought about that, but I wanted to be clear that this is a maximum
bound. "timeout" could be a little ambiguous as to whether it is a
maximum or how long a lock will always last. On the other hand, "limit"
doesn't make it obvious that the value is a time duration. I could see
it going either way.

> Also, I think that the default value could be something more
> reasonable - like 30min. Usually 30min is OK if you don't patch the
> firmware, and 180min is the maximum if you do patch the firmware.

The primary goal is to ease the transition from other HA software,
which doesn't even offer the equivalent of shutdown-lock-limit, so I
wanted the default to match that behavior. Also, "usually" is a
minefield :)

> The use case is odd. I have been in the same situation, and our
> solution was to train the team (internally) instead of using such a
> feature.

Right, this is designed for situations where that isn't feasible :)

Though even with trained staff, this does make it easier, since you
don't have to figure out for yourself what's active on the node.

> The interesting part will be the behaviour of the local cluster
> stack when updates happen. The risk is high that the node will be
> fenced due to unresponsiveness during the update, or because
> corosync/pacemaker use an old function that was changed in the libs.

That is a risk, but presumably one that a user transitioning from
another product would already be familiar with.

> Best Regards,
> Strahil Nikolov
-- 
Ken Gaillot <kgaillot at redhat.com>


