[Pacemaker] Reg. clone and order attributes

Vladislav Bogdanov bubble at hoster-ok.com
Sun Dec 8 17:22:10 UTC 2013


07.12.2013 14:30, ESWAR RAO wrote:
> 
> Hi Vladislav,
> 
> Thanks for the response.
> I will follow your suggestions.
> 
> I have configured my required configuration in a different manner:
> (1)  crm configure : all the 3 primitives
> (2)  crm configure colocation : for all the 3 primitives
> (3)  crm configure order : for the primitives
> 
> (4) Now I did clone for the 3 primitives 

Once you have a clone, you should no longer refer to the primitive it is
built on; use the clone name in all constraints instead.
Thus, you need to repeat your steps in the order 1, 4, 2, 3, replacing
"primitives" with "clones" in the latter two.

> 
> But I couldn't understand why pacemaker is giving errors with this type
> of configuration.
> 
> Thanks
> Eswar
> 
> 
> On Sat, Dec 7, 2013 at 2:47 PM, Vladislav Bogdanov <bubble at hoster-ok.com> wrote:
> 
>     06.12.2013 14:07, ESWAR RAO wrote:
>     > Hi Vladislav,
>     >
>     > I used the below advisory colocation but it's not working.
> 
>     To be frank, I'm not sure it is even possible to achieve such "exotic"
>     behavior with just Pacemaker in a non-fragile way; that was just a
>     suggestion.
>     But you may also play with CIB editing from within resource agent code,
>     e.g. remove some node attributes your resources depend on (via location
>     constraints) when the threshold is reached, or something similar.
> 
> 
>     > On 3 node setup:
>     >
>     > I have configured all 3 resources in clone mode to start only on node1
>     > and node2 with a fail-count of only 1.
>     > +++++++++++++++++++++++++++++++++++++++
>     > + crm configure primitive res_dummy_1 lsb::dummy_1 meta
>     > allow-migrate=false migration-threshold=1 op monitor interval=5s
>     > + crm configure clone dummy_1_clone res_dummy_1 meta clone-max=2
>     > globally-unique=false
>     > + crm configure location dummy_1_clone_prefer_node dummy_1_clone -inf:
>     > node-3
>     > +++++++++++++++++++++++++++++++++++++++
>     > advisory ordering:
>     > + crm configure order 1-BEFORE-2 0: dummy_1_clone dummy_2_clone
>     > + crm configure order 2-BEFORE-3 0: dummy_2_clone dummy_3_clone
>     >
>     > +++++++++++++++++++++++++++++++++++++++
>     > advisory colocation:
>     > #  crm configure colocation node-with-apps inf: dummy_1_clone
>     > dummy_2_clone dummy_3_clone
>     > +++++++++++++++++++++++++++++++++++++++
>     >
>     > After I killed dummy_1 on node1, I expected Pacemaker to kill
>     > dummy_2 and dummy_3 on node1 and not disturb the apps on node2.
>     >
>     > But with above colocation rule, it stopped the apps on node1 but it
>     > restarted dummy_2 and dummy_3 on node2.
>     >
>     > With a score of 0: it didn't stop dummy_2 and dummy_3 on node1.
>     > With a score of 500: it stopped only dummy_2 and restarted dummy_2
>     > on node2.
>     >
>     >
>     > Thanks
>     > Eswar
>     >
>     >
>     >
>     > On Fri, Dec 6, 2013 at 12:20 PM, ESWAR RAO <eswar7028 at gmail.com> wrote:
>     >
>     >
>     >     Thanks Vladislav.
>     >     I will work on that.
>     >
>     >     Thanks
>     >     Eswar
>     >
>     >     On Fri, Dec 6, 2013 at 11:05 AM, Vladislav Bogdanov
>     >     <bubble at hoster-ok.com> wrote:
>     >
>     >         06.12.2013 07:58, ESWAR RAO wrote:
>     >         > Hi All,
>     >         >
>     >         > Can someone help me with below configuration??
>     >         >
>     >         > I have a 3 node HB setup (node1, node2, node3) which runs
>     >         > HB+Pacemaker.
>     >         > I have 3 apps dummy1, dummy2, dummy3 which need to run on
>     >         > only 2 of the 3 nodes.
>     >         >
>     >         > By using the below configuration, I was able to run the 3
>     >         > resources on 2 nodes.
>     >         >
>     >         > # crm configure primitive res_dummy1 lsb::dummy1 meta
>     >         > allow-migrate="false" migration-threshold=3 failure-timeout="30s"
>     >         > op monitor interval="5s"
>     >         > # crm configure location app_prefer_node res_dummy1 -inf: node3
>     >
>     >         The first thing that comes to mind is that you should move the
>     >         above line below the next one and refer to app_clone instead of
>     >         res_dummy1.
>     >
>     >         > # crm configure clone app_clone res_dummy1 meta clone-max="2"
>     >         > globally-unique="false"
>     >         >
>     >         >
>     >         > I have a dependency order like dummy2 should start after
>     >         dummy1 and
>     >         > dummy3 should start only after dummy2.
>     >         >
>     >         > For now I am keeping a sleep in the script and starting the
>     >         resources by
>     >         > using crm.
>     >         >
>     >         > Is there any clean way to have the dependency on the
>     >         > resources so that ordering is maintained while the clone
>     >         > runs on both nodes?
>     >         >
>     >         > I have tried with below config but couldn't succeed.
>     >         > # crm configure order dum1-BEFORE-dum2 0: res_dummy1 res_dummy2
>     >
>     >         The same applies here.
>     >
>     >         > # crm configure order dum2-BEFORE-dum3 0: res_dummy2 res_dummy3
>     >
>     >         So, your example should look like:
>     >         # crm configure primitive res_dummy1 lsb::dummy1 meta
>     >         allow-migrate="false" migration-threshold=3 failure-timeout="30s"
>     >         op monitor interval="5s"
>     >         # crm configure clone app_clone res_dummy1 meta clone-max="2"
>     >         globally-unique="false"
>     >         # crm configure location app_prefer_node app_clone -inf: node3
>     >         # crm configure order dum1-BEFORE-dum2 0: app_clone res_dummy2
>     >         # crm configure order dum2-BEFORE-dum3 0: res_dummy2 res_dummy3
>     >
>     >         > Instead of a group I used order, so that even if 1 app gets
>     >         > restarted the others will not be affected.
>     >
>     >         Yep, advisory ordering is fine for that.
>     >
>     >         >
>     >         > Also is there any way so that if 1 app fails more than
>     >         > migration-threshold times, we can stop all 3 resources on
>     >         > that node?
>     >
>     >         Maybe advisory colocations can do something similar (I'm not
>     >         sure)?
>     >         http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Pacemaker_Explained/_advisory_placement.html
>     >
>     >         You should find the correct value for its score (positive or
>     >         negative) though. crm_simulate is your friend for that.
>     >
>     >         >
>     >         > Thanks
>     >         > Eswar
>     >         >
>     >         >
>     >         > _______________________________________________
>     >         > Pacemaker mailing list: Pacemaker at oss.clusterlabs.org
>     >         > http://oss.clusterlabs.org/mailman/listinfo/pacemaker
>     >         >
>     >         > Project Home: http://www.clusterlabs.org
>     >         > Getting started:
>     >         > http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>     >         > Bugs: http://bugs.clusterlabs.org
>     >         >
>     >
>     >
>     >
>     >
>     >
>     >
>     >
> 
> 
> 
> 
> 
> 
> 




