[ClusterLabs] Setting n+1 cluster steps

Sayed Mujtaba mujtaba at riversilica.com
Thu Jun 18 10:33:14 EDT 2015


Hi Gaillot,

Thank you for the information. One query:

1. Set up master/slave resources for each database, where the slaves use the database software's native replication to keep the data in sync.
If the master fails, Pacemaker will promote one of the slaves to be the new master.

  Is there any information available in the documentation on how to set this up? I want to avoid DRBD or shared storage for the time being.



-----Original Message-----
From: Ken Gaillot [mailto:kgaillot at redhat.com] 
Sent: Wednesday, June 17, 2015 8:40 PM
To: Cluster Labs - All topics related to open-source clustering welcomed
Subject: Re: [ClusterLabs] Setting n+1 cluster steps

----- Original Message -----
> Hi,
> 
> Thank you very much for the information.
> 
> In each of my nodes the database copy will be different (used by some
> application). In case of failover of one node, I want it to fail over
> to another node with the same database copy from the failed node.
> Is it possible to achieve this with Pacemaker?

Yes, but every node has to keep a synchronized copy of every database.
Two common alternatives:

1. Set up master/slave resources for each database, where the slaves use the database software's native replication to keep the data in sync.
If the master fails, Pacemaker will promote one of the slaves to be the new master (a sketch follows after option 2).

2. Set up shared storage, and run each database server on only one node.
This can be done via DRBD, a SAN or NAS, clustered filesystems, etc.
A variation of this would have the database server in a container or virtual server running on the shared storage, and then have Pacemaker manage the container or virtual server rather than the database server directly.
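
For option 1, a minimal sketch with pcs, assuming PostgreSQL and the ocf:heartbeat:pgsql agent (resource names, paths and options here are hypothetical; the agent's replication parameters, such as rep_mode, node_list and master_ip, are omitted and depend on your setup):

   # Create the database resource (parameters are illustrative)
   pcs resource create pgsql ocf:heartbeat:pgsql \
       pgctl="/usr/bin/pg_ctl" pgdata="/var/lib/pgsql/data" \
       op monitor interval=15s

   # Wrap it in a master/slave set so Pacemaker can promote a slave
   pcs resource master ms_pgsql pgsql \
       master-max=1 master-node-max=1 notify=true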
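
For option 2, a similar sketch assuming an ext4 filesystem on shared storage plus MySQL (the device path and names are hypothetical):

   # Mount the shared volume; the database runs wherever it is mounted
   pcs resource create db_fs ocf:heartbeat:Filesystem \
       device="/dev/sdb1" directory="/var/lib/mysql" fstype="ext4"
   pcs resource create db_mysql ocf:heartbeat:mysql \
       op monitor interval=30s

   # Group them so they start together on one node, filesystem first
   pcs resource group add db_group db_fs db_mysql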

Fencing must be configured in any case, to avoid the possibility of data corruption in a split-brain situation.
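
For example, IPMI-based fencing might look like this (the fence agent, address and credentials are hypothetical; use whichever agent matches your hardware):

   pcs stonith create fence_node1 fence_ipmilan \
       pcmk_host_list="node1" ipaddr="10.0.0.101" \
       login="admin" passwd="secret"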

> Thanks
> 
> 
> -----Original Message-----
> From: Ken Gaillot [mailto:kgaillot at redhat.com]
> Sent: Wednesday, June 17, 2015 7:41 PM
> To: Cluster Labs - All topics related to open-source clustering 
> welcomed
> Subject: Re: [ClusterLabs] Setting n+1 cluster steps
> 
> ----- Original Message -----
> > Hi,
> > 
> > I am referring to the Clusters From Scratch document from Pacemaker,
> > which only covers configuring a 1+1 cluster.
> > I need to set up N+1 and N+M clusters. Can someone please point me to
> > the information that describes the steps for these configurations?
> 
> Every cluster is unique, so Clusters From Scratch just gives you an 
> example to show the basic workings. You're probably using a high-level 
> configuration tool such as crm or pcs; refer to its documentation for how to add nodes.
> 
> Usually it's as simple as
> 
>    pcs cluster node add $NODENAME --start
> 
> to add a node to an existing cluster. Every situation is different,
> however, so you'll have to research and experiment. If you configured
> two_node in corosync.conf, or no-quorum-policy=ignore in Pacemaker,
> you'll want to remove that when going above 2 nodes.
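> 
> For example (a sketch; the exact steps depend on your pcs and
> corosync versions):
> 
>    # Restore the default quorum policy in Pacemaker
>    pcs property set no-quorum-policy=stop
> 
>    # Then remove "two_node: 1" from the quorum section of
>    # /etc/corosync/corosync.conf on every node and restart corosync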
> 
> For N+1/N+M you just add all your nodes to the cluster, and Pacemaker 
> will host your resources on whichever nodes are available, 
> automatically moving them if one goes down or comes up. There's no 
> need to specify one or more nodes as "backup", but if you really want 
> to (perhaps because one node is slower than the rest), you can use 
> location constraints to prefer any node more or less than the others.
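> 
> For instance, to make one slower node the last choice (the resource
> and node names are hypothetical):
> 
>    pcs constraint location my_db prefers node1=100 node2=100
>    pcs constraint location my_db prefers backup_node=10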
> 
> Again, refer to your configuration tool's documentation for how to 
> configure constraints etc., but if you want a lower-level view of how 
> it all works, see Pacemaker Explained:
> 
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html

_______________________________________________
Users mailing list: Users at clusterlabs.org
http://clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org



