[ClusterLabs] what is the "best" way to completely shutdown a two-node cluster ?

Roger Zhou zzhou at suse.com
Thu Feb 10 09:15:07 EST 2022


On 2/9/22 17:46, Lentes, Bernd wrote:
> 
> 
> ----- On Feb 7, 2022, at 4:13 PM, Jehan-Guillaume de Rorthais jgdr at dalibo.com wrote:
> 
>> On Mon, 7 Feb 2022 14:24:44 +0100 (CET)
>> "Lentes, Bernd" <bernd.lentes at helmholtz-muenchen.de> wrote:
>>
>>> Hi,
>>>
>>> I'm currently changing a bit in my cluster because I realized that my
>>> configuration for a power outage didn't work as I expected. My current
>>> idea is:
>>> - first stop about 20 VirtualDomains, which are my services. This will
>>> surely take some minutes. I'm thinking of stopping each with a time
>>> difference of about 20 seconds so as not to get too much IO load. And then ...

This part is tricky. On the one hand, it is good thinking to throttle the IO load.

On the other hand, as Jehan and Ulrich mentioned, `crm resource stop <rsc>` 
sets "target-role=Stopped" on each VirtualDomain, and you have to run `crm 
resource start <rsc>` to change it back to "target-role=Started" to start 
them again after the power outage.
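The throttled stop could be sketched roughly like this (a minimal sketch, not 
Bernd's actual script; the resource names and the 20-second gap are 
placeholders to adapt to your own configuration):

```shell
# Sketch: stop each VirtualDomain resource with a pause in between
# to throttle the IO load. Resource names below are placeholders.
stop_vms() {
    gap=$1; shift
    for rsc in "$@"; do
        crm resource stop "$rsc"   # sets target-role=Stopped on <rsc>
        sleep "$gap"
    done
}
# usage: stop_vms 20 vm_dom01 vm_dom02 vm_dom03
```

Remember that every VM stopped this way keeps target-role=Stopped until you 
explicitly start it again.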

>>> - how to stop the other resources ?
>>
>> I would set cluster option "stop-all-resources" so all remaining resources are
>> stopped gracefully by the cluster.
>>
>> Then you can stop both nodes using eg. "crm cluster stop".

Here, for SLES12SP5, `crm cluster run "crm cluster stop"` could help a little.

From crmsh-4.4.0 onward, `crm cluster stop --all` is the recommended way to 
simplify the whole cluster-wide shutdown procedure.
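Putting the pieces together, the cluster-wide part of the shutdown could look 
like the following sketch (assuming crmsh >= 4.4.0; `crm_resource --wait` 
blocks until the cluster has finished its pending actions, so pacemaker is 
only stopped after the resources are down):

```shell
# Sketch of the cluster-wide shutdown sequence discussed above.
shutdown_cluster() {
    crm configure property stop-all-resources=true   # stop remaining resources gracefully
    crm_resource --wait                              # wait until the cluster is idle
    crm cluster stop --all                           # stop the stack on all nodes (crmsh >= 4.4.0)
}
```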

BR,
Roger

>>
>> On restart, after both nodes are up and joined to the cluster, you can set
>> "stop-all-resources=false", then start your VirtualDomains.
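The restart side could be sketched like this (again placeholders, not a 
tested procedure): setting stop-all-resources=false re-enables the other 
resources, but anything stopped earlier with `crm resource stop` still has 
target-role=Stopped and must be started explicitly.

```shell
# Sketch: after both nodes are up and have rejoined the cluster.
restart_cluster_resources() {
    crm configure property stop-all-resources=false
    for rsc in "$@"; do
        crm resource start "$rsc"   # sets target-role=Started on <rsc>
    done
}
# usage: restart_cluster_resources vm_dom01 vm_dom02 vm_dom03
```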
> 
> Aren't the VirtualDomains already started by "stop-all-resources=false"?
> 
> I wrote a script for the whole procedure which is triggered by the UPS.
> As I am not a big shell-script writer, please have a look and tell me your opinion.
> You find it here: https://nc-mcd.helmholtz-muenchen.de/nextcloud/s/rEA9bFxs5Ay6fYG
> Thanks.
> 
> Bernd
> 
> 
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
> 
> ClusterLabs home: https://www.clusterlabs.org/


