[ClusterLabs] Using "mandatory" startup order but avoiding depending clones from restart after member of parent clone fails

Alejandro Comisario alejandro at nubeliu.com
Thu Feb 9 09:50:27 EST 2017


Ken, thanks for your reply.

Since our setup uses an active/active mysql clone, I think an order
constraint is the only way to ensure what I want.
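(If we were running galera as a master/slave resource, my understanding is
that the constraint you suggest would look roughly like the sketch below;
the names p_mysql-master and promote-mysql-then-keystone are made up. Since
ours is a plain active/active clone, there is nothing to promote.)

# sketch only: hypothetical master/slave wrapper around the existing primitive
ms p_mysql-master p_mysql \
meta master-max=3 interleave=true

# "promote mysql, then start keystone"
order promote-mysql-then-keystone INF: p_mysql-master:promote p_keystone-clone:start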
So, a simple question: if I make the order "Advisory", then "maybe"
keystone starts before mysql and fails because of the database
connection.
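That is, the existing constraint would become something like this
(sketch; a score of 0 instead of INF is what makes the order advisory):

order p_clone-mysql-before-p_keystone 0: p_mysql-clone p_keystone-clone:start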

If I then set on-fail="restart" for the start and monitor actions on the
keystone clone (and all the dependent clones), and of course set the cib
option start-failure-is-fatal=false, keystone should keep restarting
after a failed start until everything is ok.
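Roughly like this (untested sketch, based on my existing keystone
primitive; the op lines match what I already have, the cluster property
is the addition):

primitive p_keystone apache \
params configfile="/etc/apache2/apache2.conf" \
op start on-fail=restart interval=0 \
op monitor on-fail=restart interval=60s timeout=60s

# keep retrying a failed start instead of treating it as fatal on that node
property start-failure-is-fatal=false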

Would that make sense as a "workaround"?

best.

On Thu, Feb 9, 2017 at 12:18 AM, Ken Gaillot <kgaillot at redhat.com> wrote:

> On 02/06/2017 05:25 PM, Alejandro Comisario wrote:
> > guys, really happy to post my first question.
> >
> > I'm having a "conceptual" issue that's causing me lots of trouble:
> > I need to ensure that the startup order of resources is mandatory,
> > but that creates a huge problem. If just one member of a clone goes
> > down and comes back up (but not all members), all resources depending
> > on it are restarted (which is bad). My workaround is to set the order
> > as advisory, but that doesn't guarantee strict startup order.
> >
> > e.g. clone_B runs on servers_B and depends on clone_A, which runs on
> > servers_A.
> >
> > I'll put an example of how I have everything defined between these
> > two clones.
> >
> > ### clone_A running on servers A (location rule)
> > primitive p_mysql mysql-wss \
> > op monitor timeout=55 interval=60 enabled=true on-fail=restart \
> > op start timeout=475 interval=0 on-fail=restart \
> > op stop timeout=175 interval=0 \
> > params socket="/var/run/mysqld/mysqld.sock" \
> > pid="/var/run/mysqld/mysqld.pid" test_passwd="XXX" test_user=root \
> > meta is-managed=true
> >
> > clone p_mysql-clone p_mysql \
> > meta target-role=Started interleave=false globally-unique=false
> >
> > location mysql_location p_mysql-clone resource-discovery=never \
> > rule -inf: galera ne 1
> >
> > ### clone_B running on servers B (location rule)
> > primitive p_keystone apache \
> > params configfile="/etc/apache2/apache2.conf" \
> > op monitor on-fail=restart interval=60s timeout=60s \
> > op start on-fail=restart interval=0 \
> > meta target-role=Started migration-threshold=2 failure-timeout=60s \
> > resource-stickiness=300
> >
> > clone p_keystone-clone p_keystone \
> > meta target-role=Started interleave=false globally-unique=false
> >
> > location keystone_location p_keystone-clone resource-discovery=never \
> > rule -inf: keystone ne 1
> >
> > order p_clone-mysql-before-p_keystone INF: p_mysql-clone p_keystone-clone:start
> >
> > Again, just to make my point: if p_mysql-clone loses even one member
> > of the clone, then ONLY when that member gets back, all members of
> > p_keystone-clone get restarted, and that's NOT what I need. If I
> > change the order from mandatory to advisory, I get the behaviour I
> > want when instances of the clone come and go, but I lose the
> > strictness of the startup order, which is critical for me.
> >
> > How can I fix this problem?
> > ...can I?
>
> I don't think pacemaker can model your desired situation currently.
>
> In OpenStack configs that I'm familiar with, the mysql server (usually
> galera) is a master-slave clone, and the constraint used is "promote
> mysql then start keystone". That way, if a slave goes away and comes
> back, it has no effect.
>



-- 
*Alejandro Comisario*
*CTO | NUBELIU*
E-mail: alejandro at nubeliu.com | Cell: +54 9 11 3770 1857
www.nubeliu.com

