[ClusterLabs] Re: [EXT] Coming in Pacemaker 2.1.2: new fencing configuration options

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Mon Oct 11 02:29:02 EDT 2021


>>> Ken Gaillot <kgaillot at redhat.com> wrote on 08.10.2021 at 20:16 in
message
<5d8817151b818580cc8cf1de5176d5d8ba9b5585.camel at redhat.com>:
> On Fri, 2021-10-08 at 08:18 +0200, Ulrich Windl wrote:
>> > > > Ken Gaillot <kgaillot at redhat.com> wrote on 07.10.2021 at
>> > > > 22:53 in
>> message
>> <8bec6dc04c52d4ac5c2a8055eb7bae455f5a449d.camel at redhat.com>:
>> > Hi all,
>> > 
>> > We're looking ahead to the next Pacemaker release already. Even
>> > though we had a recent release for a regression fix, I want to get
>> > back to the goal of having a release before the holidays.
>> > 
>> > The new release will have a couple of enhancements to fencing
>> > configuration options.
>> > 
>> > The existing pcmk_delay_base option lets you set a delay before a
>> > fence action will be attempted. This is often used on one device in
>> > a 2-node cluster to help avoid a "death match".
>> 
>> OK, could you summarize the situation in a 2-node cluster that leads
>> to both nodes being fenced?
> 
> The classic situation is a network break, where the nodes are both
> still up but can't see each other (but presumably can still access the
> fence device, whether through an alternate network or something like a
> serial cable).

Ah, yes, I forgot.
I was thinking of the case of using SBD: if one slot were allocated to the
cluster name, and any node trying to become DC would, mutex-like, write its
name and a timestamp there as "proof of being alive", that situation could
be avoided, I guess.

> 
>> Also, how long would such a delay be: long enough until the other
>> node is fenced, or long enough until the other node was fenced, booted
>> (assuming it boots) and is running Pacemaker?
> 
> The delay should be on the less-preferred node, long enough for that
> node to get fenced. The other node, with no delay, will fence it if it
> can. If the other node is for whatever reason unable to fence, the node
> with the delay will fence it after the delay.
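
For illustration, a sketch of the pre-2.1.2 two-resource arrangement, with
the delay on the device that fences the preferred survivor (pcs usage, the
fence_ipmilan agent, and all names, addresses and credentials here are
assumptions of mine; parameter names vary by fence-agents version):

    # Delay fencing of node1, so node2's counter-fence has to wait;
    # node1, acting without delay, normally fences node2 first.
    pcs stonith create fence-node1 fence_ipmilan ip=192.0.2.1 \
        username=admin password=secret pcmk_host_list=node1 \
        pcmk_delay_base=10s
    # No delay for fencing node2 (node2 is the less-preferred node).
    pcs stonith create fence-node2 fence_ipmilan ip=192.0.2.2 \
        username=admin password=secret pcmk_host_list=node2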

So the "fence intention" will be lost when the node is being fenced?
Otherwise the surviving node would have to clean up the "fence intention".
Or does it mean the "fence intention" does not make it to the CIB and stays
local on the node?

> 
>> Does it make a difference whether the nodes are configured to power
>> off or reset, and/or configured to automatically start Pacemaker when
>> booting?
> 
> It doesn't matter for the purposes of the delay.

OK, see my concerns above.

> 
>> It seems it makes no sense if the nodes power off on fence or do not
>> start Pacemaker when booting, but I could be wrong.
> 
> Some people like to investigate a node that had problems before
> (manually) allowing it back in the cluster. It's a trade-off between
> restoring redundancy as quickly as possible, vs not reintroducing the
> same problem if it's not recoverable by a reboot (e.g. hardware
> issues).


Thanks for your explanations.

Regards,
Ulrich


> 
>> Regards,
>> Ulrich
>> 
>> > Previously, if you wanted different delays for different nodes, you
>> > had to configure separate fencing resources, even if they used the
>> > same device.
>> > 
>> > Now, pcmk_delay_base can take a map similar to pcmk_host_map. For
>> > example, to use no delay on node1 and a 5-second delay on node2, you
>> > can configure a single fencing resource with
>> > pcmk_delay_base="node1:0s;node2:5s".
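
For comparison, the same thing as a single-resource sketch (pcs usage, the
fence_apc agent, addresses and plug numbers are assumptions of mine):

    # One shared PDU fences both nodes; fencing node2 waits 5s,
    # fencing node1 does not.
    pcs stonith create fence-pdu fence_apc ip=192.0.2.10 \
        username=apc password=secret \
        pcmk_host_map="node1:1;node2:2" \
        pcmk_delay_base="node1:0s;node2:5s"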
>> > 
>> > Separately, the pcmk_host_map option now supports backslash-escaped
>> > characters (such as spaces) in the mapped name. For example, you
>> > could set pcmk_host_map="node1:Plug\ 1;node2:Plug\ 2" if the device
>> > expects "Plug 1" and "Plug 2" as the names.
>> > -- 
>> > Ken Gaillot <kgaillot at redhat.com>
>> > 
> -- 
> Ken Gaillot <kgaillot at redhat.com>