[Pacemaker] running same resource on both nodes through clone

Andrew Beekhof andrew at beekhof.net
Tue Jun 11 22:11:45 UTC 2013


On 08/06/2013, at 1:17 AM, ESWAR RAO <eswar7028 at gmail.com> wrote:

> Hi Dejan,
> 
> Thanks for the response.
> 
> In our setup, we want the resources to run on both nodes (active/active) so that downtime is minimal.
> 
> All clients connect to the VIP. If the resource on one node goes down, I expect the VIP to move to the other node, and since the resource is already running there the downtime would be minimal.
> 

Sounds like the colocation constraint should be an ordering one.
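
Untested, but something along these lines instead of the colocation set
(the constraint IDs are just examples):

   # crm configure order d1-before-vip inf: oc_d1_clone ha_vip
   # crm configure order d2-before-vip inf: oc_d2_clone ha_vip

That starts the VIP after the clones without pinning the clone
instances to whichever node currently holds the VIP.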

> I thought of configuring them with is-managed=false so that Pacemaker wouldn't restart the resources on the failed node.
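> (I believe "crm resource unmanage oc_d1" would set that on a resource, though I haven't tried it on our version.)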
> 
> Thanks
> Eswar
> 
> 
> On Fri, Jun 7, 2013 at 7:32 PM, Dejan Muhamedagic <dejanmm at fastmail.fm> wrote:
> Hi,
> 
> On Fri, Jun 07, 2013 at 12:49:49PM +0530, ESWAR RAO wrote:
> > Hi All,
> >
> > I am trying to run the same RA on both nodes using a clone.
> > My setup is a 2-node cluster with Heartbeat + Pacemaker.
> >
> > The RAs aren't started automatically;
> > they are started through Pacemaker only.
> >
> > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> > #crm configure primitive ha_vip ocf:heartbeat:IPaddr2 params ip=192.168.101.205
> > cidr_netmask=32 nic=eth1 op monitor interval=30s
> >
> > #crm configure primitive oc_d1 lsb:testd1 meta allow-migrate="true"
> > migration-threshold="1" failure-timeout="30s" op monitor interval="3s"
> > #crm configure clone oc_d1_clone oc_d1 meta clone-max="2"
> > clone-node-max="1" globally-unique="false" interleave="true"
> >
> > #crm configure primitive oc_d2 lsb:testd2 meta allow-migrate="true"
> > migration-threshold="3" failure-timeout="30s" op monitor interval="5s"
> > #crm configure clone oc_d2_clone oc_d2 meta clone-max="2"
> > clone-node-max="1" globally-unique="false" interleave="true"
> >
> > # crm configure colocation oc-ha_vip inf: ha_vip oc_d1_clone oc_d2_clone
> > +++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >
> > I observe that the RAs are not getting started on the other node.
> >
> >  ha_vip    (ocf::heartbeat:IPaddr2):    Started ubuntu190
> >  Clone Set: oc_d1_clone [oc_d1]
> >      Started: [ ubuntu190 ]
> >      Stopped: [ oc_d1:1 ]
> >  Clone Set: oc_d2_clone [oc_d2]
> >      Started: [ ubuntu190 ]
> >      Stopped: [ oc_d2:1 ]
> >
> >
> > But if I remove the colocation constraint, the RAs do start on both
> > nodes. Without the colocation, however, if an RA fails the VIP will
> > not migrate, which is bad.
> 
> Can you explain why you need the oc_* resources running on both
> nodes while, at the same time, they depend on the IP address, which is
> not cloned? That looks like a condition which is simply impossible to
> meet.
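> 
> If the intent is only that the VIP follows the daemons, the dependency
> should point the other way, i.e. the VIP depends on the clones.
> Untested, but something like (the IDs are just examples):
> 
> # crm configure colocation vip-with-d1 inf: ha_vip oc_d1_clone
> # crm configure colocation vip-with-d2 inf: ha_vip oc_d2_clone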
> 
> Thanks,
> 
> Dejan
> 
> > Can someone help me out with this issue?
> >
> >
> > Thanks
> > Eswar
> 
> _______________________________________________
> Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
> http://oss.clusterlabs.org/mailman/listinfo/pacemaker
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
