[Pacemaker] Design: 8 vs 4x2 nodes Cluster

Andrew Beekhof andrew at beekhof.net
Thu Mar 18 10:45:21 UTC 2010


On Thu, Mar 18, 2010 at 11:32 AM,  <martin.braun at icw.de> wrote:
> Hi there,
> I want to realize a rather complex setup, so I have a couple of questions:
>
>
> The cluster (as a shared nothing variant) should provide:
>
> * 4 services (= servers) depending on each other.
> * 3 of them can only be realized as active/passive failover, synced with
> DRBD (M/S).
> * The servers running the applications will be virtual machines, so I will
> end up with three master/slave pairs, each providing a VIP with a shared
> DRBD device in a master/slave setup.
> Most resources can only run on one of two distinct server nodes
> (active/passive). In sum I will have eight nodes, i.e. VMs.
> Would you recommend administering all nodes with a single
> corosync/pacemaker cluster?

Personally, I'd say yes.
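
For what it's worth, each of the three pairs then boils down to a DRBD
master/slave resource plus a VIP that follows the master. A rough sketch
in crm shell (resource names, DRBD resource and IP address are invented):

   # one of the three DRBD-backed services
   primitive drbd_svc1 ocf:linbit:drbd \
      params drbd_resource="svc1" \
      op monitor interval="29s" role="Master" \
      op monitor interval="31s" role="Slave"
   ms ms_drbd_svc1 drbd_svc1 \
      meta master-max="1" master-node-max="1" \
           clone-max="2" clone-node-max="1" notify="true"
   primitive vip_svc1 ocf:heartbeat:IPaddr2 \
      params ip="192.168.10.11" cidr_netmask="24" \
      op monitor interval="10s"
   # the VIP runs where DRBD is Master, and only after the promotion
   colocation vip_svc1_on_master inf: vip_svc1 ms_drbd_svc1:Master
   order vip_svc1_after_drbd inf: ms_drbd_svc1:promote vip_svc1:start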

> I am a bit afraid of having too many location and colocation constraints
> for all these resources. Is there a way to define subclusters?

Not yet, that's coming in 1.1.

> How would
> one bind a resource group to specific nodes - as a constraint on
> hostnames?

You can do it in one of two ways. Start reading here:
   http://www.clusterlabs.org/doc/en-US/Pacemaker/1.0/html/Pacemaker_Explained/s-resource-location.html#id1908403
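
A rough sketch of both styles in crm shell (hostnames and resource names
are invented). Opt-out, the default: leave the cluster symmetric and ban
the group from everything except its two hosts:

   location svc1_on_its_pair grp_svc1 \
      rule -inf: #uname ne vm-svc1-a and #uname ne vm-svc1-b

Opt-in: make the cluster asymmetric and explicitly allow the nodes you
want:

   property symmetric-cluster="false"
   location svc1_prefers_a grp_svc1 200: vm-svc1-a
   location svc1_can_run_b grp_svc1 100: vm-svc1-b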

> Or would it be better to have four two-node clusters communicating on
> disjoint subnets, with the advantage of a less complex CRM configuration?
>
> Is there a neat method to administer four separate clusters from a console
> or workstation, without introducing a new SPOF?

You can connect to the cluster from non-cluster nodes:
   http://www.clusterlabs.org/doc/en-US/Pacemaker/1.0/html/Pacemaker_Explained/ch-advanced-options.html#s-remote-connection
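
Roughly, and assuming the defaults described there (the port, user and
password below are only examples): on a cluster node you allow remote
connections to the CIB, and on the workstation you point the CIB-based
tools at it via environment variables:

   # on one of the cluster nodes: accept remote (TLS) CIB connections
   crm configure property remote-tls-port="9898"

   # on the admin workstation
   export CIB_server=cluster-node-1
   export CIB_port=9898
   export CIB_user=hacluster
   export CIB_passwd=secret
   export CIB_encrypted=true
   cibadmin -Q    # query the remote CIB; other CIB-based tools should work the same way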



