[ClusterLabs] Centreon HA Cluster - VIP issue

Jan Friesse jfriesse at redhat.com
Mon Sep 4 10:23:40 EDT 2023


Hi,


On 02/09/2023 17:16, Adil Bouazzaoui wrote:
>   Hello,
> 
> My name is Adil; I work for Tman company. We are testing the Centreon HA
> cluster to monitor our infrastructure for 13 companies. For now we are
> using the 100 IT licence to test the platform; once everything is working
> fine, we can purchase a licence suitable for our case.
> 
> We're stuck at *scenario 2*: setting up the Centreon HA cluster with Master &
> Slave in two different datacenters.
> *Scenario 1*, setting up the cluster with Master & Slave and the VIP
> address on the same network (VLAN), works fine.
> 
> *Scenario 1: Cluster on Same network (same DC) ==> works fine*
> Master in DC 1 VLAN 1: 172.30.15.10/24
> Slave in DC 1 VLAN 1: 172.30.15.20/24
> VIP in DC 1 VLAN 1: 172.30.15.30/24
> Quorum in DC 1 LAN: 192.168.1.10/24
> Poller in DC 1 LAN: 192.168.1.20/24
> 
> *Scenario 2: Cluster on different networks (2 separate DCs connected with
> VPN) ==> still not working*

Corosync on every node needs a direct connection to every other node. 
A VPN should work as long as routing is correctly configured. What exactly 
is "still not working"?

> Master in DC 1 VLAN 1: 172.30.15.10/24
> Slave in DC 2 VLAN 2: 172.30.50.10/24
> VIP: example 102.84.30.XXX. We used a public static IP from our internet
> service provider; we thought that using an IP from one site's network
> wouldn't work, because if that site goes down the VIP won't be reachable!
> Quorum: 192.168.1.10/24

No clue what you mean by Quorum, but placing it in DC1 doesn't feel right.
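
If "Quorum" here means a quorum device (a qdevice/qnetd arbiter), that is
normally a third host that both cluster nodes can reach, ideally placed
outside either DC so it cannot fail together with one site. A rough sketch
with pcs (hostname and package manager are assumptions):

# on the arbiter host
dnf install corosync-qnetd pcs
pcs qdevice setup model net --enable --start

# on the cluster nodes
dnf install corosync-qdevice
pcs quorum device add model net host=qdevice.example.com algorithm=ffsplit

(qdevice.example.com is a placeholder for your arbiter's address.)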

> Poller: 192.168.1.20/24
> 
> Our *goal* is to have the Master & Slave nodes on different sites, so that when
> Site A goes down, we keep monitoring with the slave.
> The problem is that we don't know how to set up the VIP address, what
> kind of VIP address will work, or how the VIP address can work in this
> scenario. Is there anything else that could replace the VIP address to
> make things work?
> Also, can we use a backup poller, so that if poller 1 on Site A goes down,
> poller 2 on Site B can take the lead?
> 
> We looked everywhere (The Watch, YouTube, Reddit, GitHub...), and we still
> couldn't find a workaround!
> 
> The guide we used to deploy the two-node cluster:
> https://docs.centreon.com/docs/installation/installation-of-centreon-ha/overview/
> 
> Attached is an architecture example for the two DCs.
> 
> We appreciate your support.
> Thank you in advance.
> 
> 
> Adil Bouazzaoui
> IT Infrastructure Engineer
> TMAN
> adil.bouazzaoui at tmandis.ma
> adilb574 at gmail.com
> +212 656 29 2020
> 
> 


