[ClusterLabs] Working with 2 VIPs

Ken Gaillot kgaillot at redhat.com
Tue Feb 9 16:45:14 UTC 2016


On 02/08/2016 04:24 AM, Louis Chanouha wrote:
> Hello,
> I'm not sure if this mailing list is the proper place to send my request; please tell 
> me where I should send it if not :)

This is the right place :)

> I have a use case that I currently can't implement with corosync + pacemaker.
> 
> I have two nodes, two VIPs and two services (one duplicated), in order to 
> provide an active/active service (2 physical sites).

By "2 physical sites", do you mean 2 physical machines on the same LAN,
or 2 geographically separate locations?

> In the normal situation, one VIP is associated with one node via a preferred 
> location, and the service is running on the two nodes (cloned).
> 
> In a failure situation, I want the working node to take the IP of the other 
> host without migrating the service (listening on 0.0.0.0), so when:
>   - the service is down - not working
>   - the node is down (network or OS layer) - working
> 
> I can't find the proper way to express this problem with the 
> group/colocation/order notions of pacemaker. I would be happy if you could 
> give me some thoughts on appropriate options.

I believe your current configuration already does that. What problems
are you seeing?
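
(As a quick check, assuming the crm shell your configuration was written
with is available: put one node in standby and watch where its VIP goes.)

    crm node standby edison   # simulate losing edison
    crm_mon -1                # vip_edison should now run on Gollum,
                              # while the cloned ha-cups keeps running there
    crm node online edison    # vip_edison moves back (score-50 preference)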

> 
> Thank you in advance for your help.
> Sorry for my non-native English.
> 
> Louis Chanouha
> 
> My current configuration is the following; I can't translate it to XML if you need it.
> 
> node Gollum

This will likely cause a log warning that "Node names with capitals are
discouraged". It's one of those things that shouldn't matter, but better
safe than sorry ...

> node edison
> primitive cups lsb:cups \
>         op monitor interval="2s"
> primitive vip_edison ocf:heartbeat:IPaddr2 \
>         params nic="eth0" ip="10.1.9.18" cidr_netmask="24" \
>         op monitor interval="2s"
> primitive vip_gollum ocf:heartbeat:IPaddr2 \
>         params nic="eth0" ip="10.1.9.23" cidr_netmask="24" \
>         op monitor interval="2s"
> clone ha-cups cups
> location pref_edison vip_edison 50: edison
> location pref_gollum vip_gollum 50: Gollum
> property $id="cib-bootstrap-options" \
>         dc-version="1.1.7-ee0730e13d124c3d58f00016c3376a1de5323cff" \
>         cluster-infrastructure="openais" \
>         expected-quorum-votes="2" \
>         stonith-enabled="false" \

Without stonith, the cluster will be unable to recover from certain
types of failures (for example, network failures). If both nodes are up
but can't talk to each other ("split brain"), they will both bring up
both IP addresses.
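
As a rough sketch, fencing could look something like the following,
assuming the nodes have IPMI management boards; the agent choice,
addresses and credentials here are placeholders, not taken from your
setup:

    primitive st-gollum stonith:external/ipmi \
            params hostname="Gollum" ipaddr="10.1.9.201" userid="admin" \
                    passwd="secret" interface="lan" \
            op monitor interval="60s"
    primitive st-edison stonith:external/ipmi \
            params hostname="edison" ipaddr="10.1.9.202" userid="admin" \
                    passwd="secret" interface="lan" \
            op monitor interval="60s"
    location st-gollum-placement st-gollum -inf: Gollum
    location st-edison-placement st-edison -inf: edison
    property stonith-enabled="true"

The -inf location constraints keep each fence device off the node it is
supposed to fence.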

>         no-quorum-policy="ignore"

If you can use corosync 2, you can set "two_node: 1" in corosync.conf,
and then you wouldn't need this line.
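
In corosync 2 syntax that would be (two_node: 1 also turns on
wait_for_all by default):

    quorum {
        provider: corosync_votequorum
        two_node: 1
    }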

> -- 
> 
> *Louis Chanouha | Systems and Network Engineer*
> Service Numérique de l'Université de Toulouse
> *Université Fédérale Toulouse Midi-Pyrénées*
> 15 rue des Lois - BP 61321 - 31013 Toulouse Cedex 6
> Tel.: +33 (0)5 61 10 80 45 / internal ext.: 18045
> 
> louis.chanouha at univ-toulouse.fr
> Facebook <http://www.facebook.com/pages/Universit%C3%A9-de-Toulouse/189718587732582> | 
> Twitter <https://twitter.com/#%21/Univ_Toulouse> | 
> www.univ-toulouse.fr <http://www.univ-toulouse.fr/>
