[ClusterLabs] Re: Running two independent clusters

Ulrich Windl Ulrich.Windl at rz.uni-regensburg.de
Wed Mar 22 03:11:41 EDT 2017


>>> Nikhil Utane <nikhil.subscribed at gmail.com> wrote on 22.03.2017 at 07:48 in
message
<CAGNWmJV05-YG+f9VNG0Deu-2xo7Lp+kRQPOn9sWYy7Jz=0gNag at mail.gmail.com>:
> Hi All,
> 
> First of all, let me thank everyone here for providing excellent support
> from the time I started evaluating this tool about a year ago. It has
> helped me to make a timely and good quality release of our Redundancy
> solution using Pacemaker & Corosync. (Three cheers :))
> 
> Now for our next release we have a slightly different ask.
> We want to provide Redundancy to two different types of services (we can
> call them Service A and Service B) such that all cluster communication for
> Service A happens on one network/interface (say VLAN A) and for service B
> happens on a different network/interface (say VLAN B). Moreover we do not
> want the details of Service A (resource attributes etc) to be seen by
> Service B and vice-versa.
> 
> So essentially we want to be able to run two independent clusters. From
> what I gathered, we cannot run multiple instances of Pacemaker and Corosync
> on the same node. I was thinking if we can use Containers and run two isolated

You conclude from two services that should not see each other that you need two instances of Pacemaker on one node. Why?
If you want true separation, drop the VLANs, set up real networks, and run two independent clusters (see the corosync.conf sketch further below).
Even if two Pacemaker instances on one node would work, you would have the problem of fencing: fencing kills the whole node, so at least one Pacemaker instance will always be badly surprised when the other cluster fences it. I cannot imagine you want that!

> instances of Pacemaker + Corosync on the same node.
> As per https://github.com/davidvossel/pacemaker_docker it looks do-able.
> I wanted to get an opinion on this forum before I can commit that it can be
> done.
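
For clarity, and assuming I understand the proposal: per host that would amount to roughly the following, where the subnets, container names and the image name are placeholders I made up, not anything an existing project ships:

  # two isolated container networks, one per service
  docker network create --subnet 10.10.1.0/24 net_service_a
  docker network create --subnet 10.10.2.0/24 net_service_b

  # one corosync/pacemaker container per service on this host
  # ("pcmk-image" is a placeholder image name)
  docker run -d --name node_a --net net_service_a --privileged pcmk-image
  docker run -d --name node_b --net net_service_b --privileged pcmk-image

And that is before working out how the containers on different hosts reach each other, and how fencing is supposed to work.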

Why are you designing it to be more complicated than necessary?
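
With two really independent clusters there is nothing exotic to configure: each cluster has its own nodes and its own corosync.conf bound to its own network. A minimal sketch for cluster A (corosync 2.x unicast syntax; names and addresses are invented for illustration):

  # /etc/corosync/corosync.conf on the nodes of cluster A
  totem {
      version: 2
      cluster_name: cluster_a
      # unicast UDP, so no multicast configuration is needed
      transport: udpu
  }

  nodelist {
      # both nodes sit on the network dedicated to Service A
      node {
          ring0_addr: 192.168.10.11
          nodeid: 1
      }
      node {
          ring0_addr: 192.168.10.12
          nodeid: 2
      }
  }

  quorum {
      provider: corosync_votequorum
      # only for a two-node cluster
      two_node: 1
  }

  logging {
      to_syslog: yes
  }

Cluster B gets the same file with its own cluster_name and its own node list on the other network. Since the two clusters share neither nodes nor a CIB, Service A and Service B cannot see each other's resources or attributes at all.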

> 
> Please share your views if you have already done this and if there are any
> known challenges that I should be familiar with.
> 
> -Thanks
> Nikhil