[ClusterLabs] Antw: [EXT] what is the "best" way to completely shutdown a two‑node cluster ?

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Mon Feb 7 08:36:51 EST 2022


>>> "Lentes, Bernd" <bernd.lentes at helmholtz-muenchen.de> wrote on 07.02.2022
at 14:24 in message
<1490802403.168299850.1644240284175.JavaMail.zimbra at helmholtz-muenchen.de>:
> Hi,
> 
> I'm currently changing a few things in my cluster because I realized that my
> configuration for a power outage didn't work as I expected.
> My current idea is:
> - first stop about 20 VirtualDomains, which are my services. This will
> surely take some minutes. I'm thinking of stopping them with a gap of about
> 20 seconds so as not to generate too much I/O load.
> And then ...
> - how do I stop the other resources?
> - put the nodes into standby or offline?
> - do a systemctl stop pacemaker?
> - or do a crm cluster stop?
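The staggered VirtualDomain shutdown described above could be scripted roughly as follows; this is only a sketch with made-up resource IDs, written as a dry run so the plan can be reviewed before anything is actually stopped:

```shell
# stop_vms_staggered: stop each cluster resource given as an argument,
# sleeping $STAGGER seconds between stops to limit the I/O load of many
# guests shutting down at once.  With DRYRUN=1 (the default here) the
# crm command is only printed, not executed.
stop_vms_staggered() {
    for rsc in "$@"; do
        if [ "${DRYRUN:-1}" = 1 ]; then
            echo "crm resource stop $rsc"
        else
            crm resource stop "$rsc"
        fi
        sleep "${STAGGER:-20}"
    done
}

# Review the plan first (resource IDs are hypothetical examples):
DRYRUN=1 STAGGER=0 stop_vms_staggered vm_web01 vm_db01 vm_app01
```

Setting `DRYRUN=0` on a real node would issue the actual `crm resource stop` calls with the configured delay between them.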

Bernd,

What if you set the affected node to standby, or shut down the cluster
services? Or are all nodes powered by the same UPS?
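In crmsh terms (as shipped with SLES), those two alternatives could look roughly like the following; this is a dry-run sketch with an example node name, printing the commands instead of running them:

```shell
# Dry-run wrapper: print each command instead of executing it.
# On a real cluster node, change the body to: "$@"
run() { echo "$@"; }

# Alternative 1: put the node into standby first, so Pacemaker stops or
# migrates its resources, then stop the cluster services on it.
run crm node standby node1      # "node1" is a hypothetical node name
run systemctl stop pacemaker

# Alternative 2: let crmsh stop the cluster services on the local node.
run crm cluster stop
```

Whether stopping pacemaker also takes down corosync depends on the service dependencies of the distribution, so checking with `systemctl status corosync` afterwards would be prudent.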


> 
> And what if both nodes are running? Can I do that simultaneously on both
> nodes?

I guess that should work.

> My OS is SLES 12 SP5, pacemaker is 1.1.23, corosync is 2.3.6-9.13.1

Your action plan depends on what the VMs are doing: basically, every HA
resource should survive a hard restart without much damage.
So one option could be: do nothing, or do an emergency shutdown of the
node without properly migrating all the VMs elsewhere.
You cannot make an application HA by putting it in a VM; at least not in
general.

Regards,
Ulrich

> 
> Thanks for your help.
> 
> Bernd
> 
> -- 
> 
> Bernd Lentes 
> System Administrator 
> Institute for Metabolism and Cell Death (MCD) 
> Building 25 - office 122 
> HelmholtzZentrum München 
> bernd.lentes at helmholtz-muenchen.de 
> phone: +49 89 3187 1241 
> fax: +49 89 3187 2294 
> http://www.helmholtz-muenchen.de/mcd 
> 
> 
