[ClusterLabs] Re: 2node cluster question

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Thu Aug 16 02:27:11 EDT 2018


>>> "Stefan K" <Shadow_7 at gmx.net> schrieb am 15.08.2018 um 10:58 in Nachricht
<trinity-80166f37-3ed1-474e-9df3-fcdff91a302c-1534323500296 at 3c-app-gmx-bs24>:
> Hello,
> 
> what is the 'best' 2-node cluster config?
> What I want: if everything runs on nodeA and nodeA goes into standby or is
> shut down, everything must start on nodeB; when nodeA comes back, everything
> must keep running on nodeB.

Hi!

This is almost standard; you just have to make sure the default resource
stickiness is high enough to prevent "load balancing" (resources moving back)
once the second node comes online again. Your configuration below already sets
resource-stickiness=100, which works as long as no location constraint score
outweighs it.

Regards,
Ulrich

> 
> pacemaker looks like:
>         have-watchdog=false \
>         dc-version=1.1.16-94ff4df \
>         cluster-infrastructure=corosync \
>         cluster-name=zfs-vmstorage \
>         no-quorum-policy=stop \
>         stonith-enabled=true \
>         last-lrm-refresh=1528814481
> rsc_defaults rsc_defaults-options: \
>         resource-stickiness=100
> 
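To check that this gives the desired failover behavior, one can put a node in
standby and watch where the resources run (a sketch, assuming crmsh and the
node names from the nodelist below):

    # Put the first node in standby; resources should move to zfs-serv4.
    crm node standby zfs-serv3
    crm_mon -1    # verify everything now runs on zfs-serv4

    # Bring zfs-serv3 back; with sufficient stickiness, the resources
    # stay on zfs-serv4 instead of moving back.
    crm node online zfs-serv3
    crm_mon -1    # resources should still be on zfs-serv4
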
> and the corosync.conf:
> totem {
>     version: 2
>     secauth: off
>     cluster_name: zfs-vmstorage
>     transport: udpu
>     rrp_mode: passive
> }
> 
> nodelist {
>     node {
>         ring0_addr: zfs-serv3
>         ring1_addr: 192.168.251.1
>         nodeid: 1
>     }
> 
>     node {
>         ring0_addr: zfs-serv4
>         ring1_addr: 192.168.251.2
>         nodeid: 2
>     }
> }
> 
> quorum {
>     provider: corosync_votequorum
>     two_node: 1
> }
> 
> logging {
>     to_logfile: yes
>     logfile: /var/log/corosync/corosync.log
>     to_syslog: yes
> }
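
One remark on the quorum section: with two_node: 1, votequorum also enables
wait_for_all, so after a cold start the cluster only becomes quorate once both
nodes have been seen; once quorate, a surviving node keeps quorum when its
peer fails (after successful fencing). The effective flags can be checked at
runtime (a generic sketch, not specific to this cluster):

    # Show votequorum status; the Flags line should report 2Node and
    # WaitForAll while two_node: 1 is in effect.
    corosync-quorumtool -s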
> 
> thanks in advance
> and best regards
> Stefan