[ClusterLabs] Re: [EXT] Stopping the last node with pcs

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Wed Apr 28 06:36:09 EDT 2021


>>> Digimer <lists at alteeve.ca> wrote on 28.04.2021 at 05:23 in message
<721bab92-5686-955e-02c8-66269104c10c at alteeve.ca>:
> Hi all,
> 
>   I noticed something odd.
> 
> ====
> [root@an-a02n01 ~]# pcs cluster status
> Cluster Status:
>  Cluster Summary:
>    * Stack: corosync
>    * Current DC: an-a02n01 (version 2.0.4-6.el8_3.2-2deceaa3ae) -
> partition with quorum
>    * Last updated: Tue Apr 27 23:20:45 2021
>    * Last change:  Tue Apr 27 23:12:40 2021 by root via cibadmin on
> an-a02n01
>    * 2 nodes configured
>    * 12 resource instances configured (4 DISABLED)
>  Node List:
>    * Online: [ an-a02n01 ]
>    * OFFLINE: [ an-a02n02 ]
> 
> PCSD Status:
>   an-a02n01: Online
>   an-a02n02: Offline
> ====
> [root@an-a02n01 ~]# pcs cluster stop
> Error: Stopping the node will cause a loss of the quorum, use --force to
> override
> ====
> 
>   Shouldn't pcs know it's the last node and shut down without complaint?
> 

This is like the by-now old discussion about the lack of a true "cluster stop"
command: if (I'm not sure) "pcs cluster stop" does the same as "crm cluster stop"
(i.e. it stops the _node_, not the cluster), then perhaps "... cluster stop --all"
could serve as the confirmation that the user really wants to stop the whole
cluster, and not just the local node.
In short: I think the warning is justified.
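For reference, the distinction discussed above can be sketched with the pcs
invocations involved (a sketch based on this thread and the pcs man page;
exact wording of the quorum check may vary between pcs versions):

```shell
# Stop cluster services on the local node only. pcs refuses if this
# would cost the remaining partition its quorum -- the error Digimer saw:
pcs cluster stop

# Override the quorum check and stop the local node anyway:
pcs cluster stop --force

# Stop cluster services on all configured nodes -- the explicit
# "stop the whole cluster" intent, rather than just this node:
pcs cluster stop --all
```

With only one node left online, "--all" would express the same intent as
"--force" here, but makes it unambiguous that the whole cluster is meant.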

Regards,
Ulrich

> -- 
> Digimer
> Papers and Projects: https://alteeve.com/w/ 
> "I am, somehow, less interested in the weight and convolutions of
> Einstein’s brain than in the near certainty that people of equal talent
> have lived and died in cotton fields and sweatshops." - Stephen Jay Gould
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users 
> 
> ClusterLabs home: https://www.clusterlabs.org/ 




