[ClusterLabs] Antw: Running two independent clusters

Ken Gaillot kgaillot at redhat.com
Thu Mar 23 14:05:34 UTC 2017


On 03/22/2017 11:08 PM, Nikhil Utane wrote:
> I simplified when I called it a service. Essentially it is a complete
> system.
> It is an LTE eNB solution. It provides LTE service (service A) and now
> we need to provide redundancy for another different but related service
> (service B). The catch being, the LTE redundancy solution will be tied
> to one operator whereas the other service can span across multiple
> operators. Therefore ideally we want two completely independent clusters
> since different set of nodes will form the two clusters.
> Now what I am thinking is, to run additional instance of Pacemaker +
> Corosync in a container which can then notify the service B on the host
> machine to start or stop its service. That way my CIB file will be
> independent and I can run corosync on different interfaces.
> 
> Workable right?
> 
> -Regards
> Nikhil

It's not well-tested, but in theory it should work, as long as the
container is privileged.
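For illustration, a rough sketch of launching such a container (the image name and config path are hypothetical; the pacemaker_docker project mentioned later in the thread shows a real build):

```shell
# Hypothetical image and paths -- adapt to your own build.
# --privileged gives the containerized Pacemaker/Corosync the host access
# it needs (e.g. for fencing); --net=host lets this Corosync instance bind
# a different host interface (VLAN B) than the host's own cluster uses.
docker run -d --name pcmk-serviceB \
    --privileged \
    --net=host \
    -v /etc/corosync-serviceB:/etc/corosync \
    example/pacemaker-corosync:latest
```

The container's own corosync.conf would set its totem interface to the VLAN B address, keeping the two clusters' rings and CIBs fully separate.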

I still think virtualizing the services would be more resilient. It
makes sense to have a single determination of quorum and fencing for the
same real hosts. I'd think of it like a cloud provider -- the cloud
instances are segregated by customer, but the underlying hosts are the same.

You could configure your cluster as asymmetric, and enable each VM only
on the nodes it's allowed on, so you get the two separate "clusters"
that way. You could set up the VMs as guest nodes if you want to monitor
and manage multiple services within them. If your services require
hardware access that's not easily passed to a VM, containerizing the
services might be a better option.
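A rough sketch of that configuration with pcs (node names, VM names, and file paths are made up for illustration):

```shell
# Opt-in (asymmetric) cluster: resources run only where explicitly enabled.
pcs property set symmetric-cluster=false

# One VM per "virtual cluster". The remote-node meta attribute also makes
# each VM a Pacemaker guest node, so the services inside it can be
# monitored and managed as cluster resources.
pcs resource create vm-serviceA ocf:heartbeat:VirtualDomain \
    config=/etc/libvirt/qemu/vm-serviceA.xml \
    meta remote-node=guestA
pcs resource create vm-serviceB ocf:heartbeat:VirtualDomain \
    config=/etc/libvirt/qemu/vm-serviceB.xml \
    meta remote-node=guestB

# Enable each VM only on the hosts it is allowed to run on.
pcs constraint location vm-serviceA prefers node1 node2
pcs constraint location vm-serviceB prefers node2 node3
```

With symmetric-cluster=false, each VM runs only where a location constraint enables it, so the two services' environments stay separate while a single Pacemaker instance keeps one consistent view of quorum and fencing for the real hosts.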

> On Wed, Mar 22, 2017 at 8:06 PM, Ken Gaillot <kgaillot at redhat.com> wrote:
> 
>     On 03/22/2017 05:23 AM, Nikhil Utane wrote:
>     > Hi Ulrich,
>     >
>     > It's not an option unfortunately.
>     > Our product runs on a specialized hardware and provides both the
>     > services (A & B) that I am referring to. Hence I cannot have service A
>     > running on some nodes as cluster A and service B running on other nodes
>     > as cluster B.
>     > The two services HAVE to run on same node. The catch being service A and
>     > service B have to be independent of each other.
>     >
>     > Hence looking at Container option since we are using that for some other
>     > product (but not for Pacemaker/Corosync).
>     >
>     > -Regards
>     > Nikhil
> 
>     Instead of containerizing pacemaker, why don't you containerize or
>     virtualize the services, and have pacemaker manage the containers/VMs?
> 
>     Coincidentally, I am about to announce enhanced container support in
>     pacemaker. I should have a post with more details later today or
>     tomorrow.
> 
>     >
>     > On Wed, Mar 22, 2017 at 12:41 PM, Ulrich Windl
>     > <Ulrich.Windl at rz.uni-regensburg.de> wrote:
>     >
>     >     >>> Nikhil Utane <nikhil.subscribed at gmail.com> wrote on
>     >     22.03.2017 at 07:48 in message
>     >     <CAGNWmJV05-YG+f9VNG0Deu-2xo7Lp+kRQPOn9sWYy7Jz=0gNag at mail.gmail.com>:
>     >     > Hi All,
>     >     >
>     >     > First of all, let me thank everyone here for providing
>     excellent support
>     >     > from the time I started evaluating this tool about a year
>     ago. It has
>     >     > helped me to make a timely and good quality release of our
>     Redundancy
>     >     > solution using Pacemaker & Corosync. (Three cheers :))
>     >     >
>     >     > Now for our next release we have a slightly different ask.
>     >     > We want to provide Redundancy to two different types of
>     services (we can
>     >     > call them Service A and Service B) such that all cluster
>     communication for
>     >     > Service A happens on one network/interface (say VLAN A) and
>     for service B
>     >     > happens on a different network/interface (say VLAN B).
>     Moreover we do not
>     >     > want the details of Service A (resource attributes etc) to
>     be seen by
>     >     > Service B and vice-versa.
>     >     >
>     >     > So essentially we want to be able to run two independent
>     clusters. From
>     >     > what I gathered, we cannot run multiple instances of
>     Pacemaker and Corosync
>     >     > on same node. I was thinking if we can use Containers and
>     run two isolated
>     >
>     >     You conclude from two services that should not see each other that
>     >     you need two instances of Pacemaker on one node. Why?
>     >     If you want true separation, drop the VLANs, make real
>     networks and
>     >     two independent clusters.
>     >     Even if two Pacemakers on one node would work, you have the
>     >     problem of fencing, where at least one Pacemaker instance will
>     >     always be badly surprised when fencing takes place. I cannot
>     >     imagine you want that!
>     >
>     >     > instances of Pacemaker + Corosync on same node.
>     >     > As per https://github.com/davidvossel/pacemaker_docker it looks do-able.
>     >     > I wanted to get an opinion on this forum before I can commit
>     that it can be
>     >     > done.
>     >
>     >     Why are you designing it more complicated than necessary?
>     >
>     >     >
>     >     > Please share your views if you have already done this and if
>     there are any
>     >     > known challenges that I should be familiar with.
>     >     >
>     >     > -Thanks
>     >     > Nikhil



