[ClusterLabs] Stopping the last node with pcs

Ken Gaillot kgaillot at redhat.com
Wed Apr 28 10:10:47 EDT 2021


On Tue, 2021-04-27 at 23:23 -0400, Digimer wrote:
> Hi all,
> 
>   I noticed something odd.
> 
> ====
> [root@an-a02n01 ~]# pcs cluster status
> Cluster Status:
>  Cluster Summary:
>    * Stack: corosync
>    * Current DC: an-a02n01 (version 2.0.4-6.el8_3.2-2deceaa3ae) -
> partition with quorum
>    * Last updated: Tue Apr 27 23:20:45 2021
>    * Last change:  Tue Apr 27 23:12:40 2021 by root via cibadmin on
> an-a02n01
>    * 2 nodes configured
>    * 12 resource instances configured (4 DISABLED)
>  Node List:
>    * Online: [ an-a02n01 ]
>    * OFFLINE: [ an-a02n02 ]
> 
> PCSD Status:
>   an-a02n01: Online
>   an-a02n02: Offline
> ====
> [root@an-a02n01 ~]# pcs cluster stop
> Error: Stopping the node will cause a loss of the quorum, use --force to
> override
> ====
> 
>   Shouldn't pcs know it's the last node and shut down without
> complaint?

It knows, it's just not sure you know :)

pcs's design philosophy is to hand-hold users by default and give
expert users --force.

The idea in this case is that (especially in 3-to-5-node clusters)
someone might not realize that stopping one node could make all
resources stop cluster-wide.
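
If you do know and want the last node down anyway, the override is the
--force flag the error message points at; a minimal sketch (run on the
remaining node, exact output may vary by pcs version):

====
# acknowledge the quorum-loss warning and stop the local
# cluster services (pacemaker and corosync) on this node
pcs cluster stop --force
====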
-- 
Ken Gaillot <kgaillot at redhat.com>


