[ClusterLabs] Coming in Pacemaker 2.0.4: shutdown locks

Strahil Nikolov hunter86_bg at yahoo.com
Tue Feb 25 23:52:45 EST 2020


On February 26, 2020 12:30:24 AM GMT+02:00, Ken Gaillot <kgaillot at redhat.com> wrote:
>Hi all,
>
>We are a couple of months away from starting the release cycle for
>Pacemaker 2.0.4. I'll highlight some new features between now and then.
>
>First we have shutdown locks. This is a narrow use case that I don't
>expect a lot of interest in, but it helps give pacemaker feature parity
>with proprietary HA systems, which can help users feel more comfortable
>switching to pacemaker and open source.
>
>The use case is a large organization with few cluster experts and many
>junior system administrators who reboot hosts for OS updates during
>planned maintenance windows, without any knowledge of what the host
>does. The cluster runs services that have a preferred node and take a
>very long time to start.
>
>In this scenario, pacemaker's default behavior of moving the service to
>a failover node when the node shuts down, and moving it back when the
>node comes back up, results in needless downtime compared to just
>leaving the service down for the few minutes needed for a reboot.
>
>The goal could be accomplished with existing pacemaker features.
>Maintenance mode wouldn't work because the node is being rebooted. But
>you could figure out what resources are active on the node, and use a
>location constraint with a rule to ban them on all other nodes before
>shutting down. That's a lot of work for something the cluster can
>figure out automatically.
>
>Pacemaker 2.0.4 will offer a new cluster property, shutdown-lock,
>defaulting to false to keep the current behavior. If shutdown-lock is
>set to true, any resources active on a node when it is cleanly shut
>down will be "locked" to the node (kept down rather than recovered
>elsewhere). Once the node comes back up and rejoins the cluster, they
>will be "unlocked" (free to move again if circumstances warrant).
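>
>For example, turning it on would look something like this (shown here
>with crm_attribute; pcs and the crm shell have equivalent property
>commands):
>
>  # crm_attribute --type crm_config --name shutdown-lock --update true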
>
>An additional cluster property, shutdown-lock-limit, allows you to set
>a timeout for the locks so that if the node doesn't come back within
>that time, the resources are free to be recovered elsewhere. This
>defaults to no limit.
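>
>For instance, to let locks expire after half an hour you could set
>something along the lines of:
>
>  # crm_attribute --type crm_config --name shutdown-lock-limit --update 30min
>
>(the usual pacemaker time-interval formats should be accepted here).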
>
>If you decide while the node is down that you need the resource to be
>recovered, you can manually clear a lock with "crm_resource --refresh"
>specifying both --node and --resource.
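>
>In other words, something like this (resource and node names are just
>placeholders):
>
>  # crm_resource --refresh --resource big-db --node node1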
>
>There are some limitations using shutdown locks with Pacemaker Remote
>nodes, so I'd avoid that with the upcoming release, though it is
>possible.

Hi Ken,

Can it be 'shutdown-lock-timeout' instead of 'shutdown-lock-limit'?
Also, I think the default value could be something more reasonable, like 30 minutes. Usually 30 minutes is enough if you don't patch the firmware, and 180 minutes is the maximum if you do patch the firmware.

The use case is odd. I have been in the same situation, and our solution was to train the team (internally) instead of relying on such a feature.
The interesting part will be the behaviour of the local cluster stack when updates happen. The risk of the node being fenced is high, either due to unresponsiveness during the update or because corosync/pacemaker are still using an old function that was changed in the updated libraries.
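
One way to reduce that risk would be to stop the cluster stack cleanly before patching and bring it back afterwards, roughly like this (pcs shown here; stopping the pacemaker and corosync services directly via systemctl works too):

  # pcs cluster stop
      (apply the OS/firmware updates, reboot)
  # pcs cluster start

That way the node leaves the membership cleanly instead of being declared lost halfway through the update.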

Best Regards,
Strahil Nikolov

