[Pacemaker] Multinode cluster question

Attila Megyeri amegyeri at minerva-soft.com
Tue Nov 8 11:02:33 UTC 2011


Hi All,

I need some help/guidance on how to make sure that certain resources (running in virtual cluster nodes) run on the same physical server.

The setup:

I have a cluster made of two physical nodes that I want to use for HA purposes (no load balancing for the time being).
I have a failover IP from the provider, which is controlled by a resource agent from one of a pair of virtual machines (web1 and web2); the IP is always assigned to one of the physical servers.
On the physical servers I use iptables PREROUTING/POSTROUTING rules to direct the traffic to the appropriate virtual node. The rules point to the web VIP and the red5 VIP.
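For illustration, a minimal sketch of what I mean by these rules, with made-up addresses and ports (failover IP 203.0.113.10, web VIP 10.0.1.10, red5 VIP 10.0.1.20, web on 80, red5/RTMP on 1935):

    # forward web traffic arriving on the failover IP to the web VIP
    iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
      -j DNAT --to-destination 10.0.1.10
    # forward RTMP traffic arriving on the failover IP to the red5 VIP
    iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 1935 \
      -j DNAT --to-destination 10.0.1.20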

On each physical server I have three virtual servers that host the specific roles of the solution, i.e. db1/db2, web1/web2, red5_1/red5_2.
The virtual servers use their own physical server as the default gateway to talk to the outside world.

My first idea was to create three independent two-node clusters: a db cluster, a web cluster and a red5 cluster.
The db cluster is a master/slave PostgreSQL setup with a virtual IP.
The web cluster is an Apache2 cluster, cloned on two virtual servers, with a failover IP RA (if web1 on phy1 fails, the failover IP is redirected to phy2, and vice versa); a rough sketch follows below.
The red5 cluster runs Red5 on two instances, with an internal virtual IP.
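As a rough sketch of what I mean for the web cluster, in crm shell syntax (resource names are made up, and ocf:heartbeat:IPaddr2 stands in here for the provider-specific failover IP agent):

    primitive p_apache ocf:heartbeat:apache \
      params configfile="/etc/apache2/apache2.conf" \
      op monitor interval="30s"
    clone cl_apache p_apache
    primitive p_web_vip ocf:heartbeat:IPaddr2 \
      params ip="203.0.113.10" cidr_netmask="32" \
      op monitor interval="10s"
    # keep the failover IP on a node that actually runs apache
    colocation col_vip_with_apache inf: p_web_vip cl_apache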

This is where it gets interesting, because of the default gateway.
The db cluster is accessed from the intranet only, so no worries there.

Red5 is different, but that needs further explanation.
Let's assume that all roles (db master, web, red5) are running on physical server 1.
Web1 fails for some reason. The web2 role becomes active, and the external failover IP from now on points to physical node 2. The iptables script still points to the same VIP address, which now simply runs on a different node. No issue here, as web2 gets its traffic properly, because it knows that it is running on node 2 now.

The issue is with Red5.
Red5 runs on node 1 and uses the default gateway on node 1; it does not know that the external failover IP no longer points to node 1.
When a request is received on the failover IP (now on physical node 2), iptables redirects it to red5's VIP. Red5, running on node 1, gets this request, but does not know that the reply should be routed through node 2!
As a result, the replies are routed through physical node 1, as that is the default gateway. This asymmetric routing is definitely not the right approach.
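To make the asymmetry concrete: from inside red5_1 one can check which gateway a reply would leave through (198.51.100.7 standing in for an external client address):

    # prints the route a reply to the client would take; with the
    # default gw still pointing at node1, it leaves via node1
    ip route get 198.51.100.7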

The actual question is:

-          Should I treat all nodes as part of the same cluster (db1, db2, web1, web2, red1, red2)? That way I could possibly detect that the failover IP has changed and "do something" with red5.

-          "Do something" could mean one of the following for me (rough sketches follow after this list):

o   If the "web" VIP is running on physical node 2 (i.e. on node "web2"), then move the "red" VIP to physical node 2 (to node "red2").

o   Alternatively, only change the default gateway for red1, so that it uses node2 as the default gateway.
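Rough sketches of what I have in mind for the two options (all names and addresses are made up). For the first one, assuming a single cluster spanning all six virtual nodes, I imagine tagging each virtual node with the physical host it runs on and colocating the two VIPs by that attribute (if I read the crm shell docs right, colocation constraints accept a node-attribute parameter):

    # tag each virtual node with the physical server it runs on
    crm_attribute --type nodes --node web1 --name physhost --update phy1
    crm_attribute --type nodes --node web2 --name physhost --update phy2
    crm_attribute --type nodes --node red1 --name physhost --update phy1
    crm_attribute --type nodes --node red2 --name physhost --update phy2
    # keep the red5 VIP on a node whose physhost matches the web VIP's
    crm configure colocation col_red_with_web inf: p_red5_vip p_web_vip \
      node-attribute=physhost

For the second one, "do something" would boil down to running something like this inside red1 whenever the failover IP moves (10.0.0.2 standing in for node2's internal address):

    # point red1's replies at physical node 2 instead of node 1
    ip route replace default via 10.0.0.2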

I hope my question is clear; I would assume the setup described here is quite common.
I am asking the experts: what is the recommended approach in this case?


Thank you in advance,

Attila



