[ClusterLabs] Stopping the last node with pcs
Digimer
lists at alteeve.ca
Wed Apr 28 11:41:50 EDT 2021
On 2021-04-28 10:10 a.m., Ken Gaillot wrote:
> On Tue, 2021-04-27 at 23:23 -0400, Digimer wrote:
>> Hi all,
>>
>> I noticed something odd.
>>
>> ====
>> [root@an-a02n01 ~]# pcs cluster status
>> Cluster Status:
>> Cluster Summary:
>> * Stack: corosync
>> * Current DC: an-a02n01 (version 2.0.4-6.el8_3.2-2deceaa3ae) -
>> partition with quorum
>> * Last updated: Tue Apr 27 23:20:45 2021
>> * Last change: Tue Apr 27 23:12:40 2021 by root via cibadmin on
>> an-a02n01
>> * 2 nodes configured
>> * 12 resource instances configured (4 DISABLED)
>> Node List:
>> * Online: [ an-a02n01 ]
>> * OFFLINE: [ an-a02n02 ]
>>
>> PCSD Status:
>> an-a02n01: Online
>> an-a02n02: Offline
>> ====
>> [root@an-a02n01 ~]# pcs cluster stop
>> Error: Stopping the node will cause a loss of the quorum, use --force
>> to override
>> ====
>>
>> Shouldn't pcs know it's the last node and shut down without
>> complaint?
>
> It knows, it's just not sure you know :)
>
> pcs's design philosophy is to hand-hold users by default and give
> expert users --force.
>
> The idea in this case is that (especially in 3-to-5-node clusters)
> someone might not realize that stopping one node could make all
> resources stop cluster-wide.
This makes total sense in a 3+ node cluster. However, when you're asking
the last node in a two-node cluster to stop, it seems odd. Perhaps this
behaviour could be overridden when 2-node is set?
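(To be clear, by "2-node is set" I mean corosync's two_node vote quorum
option. A rough way a script could check for it, assuming the stock config
path, is something like:

====
# Exits 0 if the two_node quorum option is enabled in the stock
# corosync config location; the path may differ on other setups.
grep -qE '^[[:space:]]*two_node:[[:space:]]*1' /etc/corosync/corosync.conf
====
)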
In any case, I'm calling this from a program, which means I either need to
use '--force' all the time or add some logic of my own (which I can do,
roughly along the lines of the sketch below).
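Something like this untested sketch is what I had in mind, keyed off the
"Online: [ ... ]" line that 'pcs cluster status' prints above (names and
parsing are just for illustration):

====
#!/bin/bash
# Untested sketch: only pass --force when this node looks like the last
# one online, based on the "Online: [ ... ]" line from 'pcs cluster status'.
# No error handling for the case where the cluster is already stopped.
online=$(pcs cluster status | grep -oP '(?<=Online: \[ ).*(?= \])')
count=$(wc -w <<< "$online")

if [ "$count" -le 1 ]; then
    # Last node standing; stopping it takes quorum with it anyway.
    pcs cluster stop --force
else
    # Other nodes are still up; let pcs do its normal quorum check.
    pcs cluster stop
fi
====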
Well anyway, now I know it was intentional. :)
--
Digimer
Papers and Projects: https://alteeve.com/w/
"I am, somehow, less interested in the weight and convolutions of
Einstein’s brain than in the near certainty that people of equal talent
have lived and died in cotton fields and sweatshops." - Stephen Jay Gould