[ClusterLabs] Does anyone use clone instance constraints from pacemaker-next schema?
Jehan-Guillaume de Rorthais
jgdr at dalibo.com
Thu Jan 11 15:08:44 EST 2018
On Thu, 11 Jan 2018 12:00:25 -0600
Ken Gaillot <kgaillot at redhat.com> wrote:
> On Thu, 2018-01-11 at 20:11 +0300, Andrei Borzenkov wrote:
> > 11.01.2018 19:21, Ken Gaillot пишет:
> > > On Thu, 2018-01-11 at 01:16 +0100, Jehan-Guillaume de Rorthais
> > > wrote:
> > > > On Wed, 10 Jan 2018 12:23:59 -0600
> > > > Ken Gaillot <kgaillot at redhat.com> wrote:
> > > > ...
> > > > > My question is: has anyone used or tested this, or is anyone
> > > > > interested in this? We won't promote it to the default schema
> > > > > unless it is tested.
> > > > >
> > > > > My feeling is that it is more likely to be confusing than helpful,
> > > > > and there are probably ways to achieve any reasonable use case
> > > > > with existing syntax.
> > > >
> > > > For what it's worth, I tried to implement such a solution to
> > > > dispatch multiple IP addresses to slaves in a 1-master, 2-slave
> > > > cluster. It is quite time-consuming to wrap one's head around the
> > > > side effects of colocation, scores and stickiness. My various
> > > > tests show that everything seems to behave correctly now, but I
> > > > don't feel really 100% confident about my setup.
> > > >
> > > > I agree that there are ways to achieve such a use case with the
> > > > existing syntax, but that is quite confusing as well. For
> > > > instance, I experienced a master relocation while messing with a
> > > > slave to make sure its IP would move to the other slave node... I
> > > > don't remember exactly what my error was, but I could easily dig
> > > > it up if needed.
> > > >
> > > > I feel like this fits into the same area as Pacemaker's
> > > > usability: making it easier to understand. See the recent
> > > > discussion around the GoCardless war story.
> > > >
> > > > My tests were mostly for lab, demo and tutorial purposes. I don't
> > > > have a specific field use case. But if at some point this feature
> > > > is officially promoted as a preview, I'll give it some testing
> > > > and report here (leaving aside the fact that I'm now aware some
> > > > feedback is requested ;)).
> > >
> > > It's ready to be tested now -- just do this:
> > >
> > > cibadmin --upgrade
> > > cibadmin --modify --xml-text '<cib validate-with="pacemaker-next"/>'
> > >
> > > Then use constraints like:
> > >
> > > <rsc_colocation id="id0" score="100" rsc="rsc1"
> > >                 with-rsc="clone1" with-rsc-instance="1" />
> > >
> > > <rsc_colocation id="id1" score="100" rsc="rsc2"
> > >                 with-rsc="clone1" with-rsc-instance="2" />
> > >
> > > to colocate rsc1 and rsc2 with separate instances of clone1. There
> > > is no way to know *which* instance of clone1 will be 1, 2, etc.;
> > > this just allows you to ensure the colocations are separate.
> > >
> >
> > Is it possible to designate master/slave as well?
>
> If you mean constraining one resource to the master and a bunch of
> other resources to the slaves, then no, this new syntax doesn't support
> that. But it should be possible with the existing syntax, by
> constraining with role=master or role=slave, then anti-colocating the
> resources with each other.
>
Oh, wait, this is a deal breaker then... This was exactly my use case:
* give a specific IP address to the master
* provide various IP addresses to the slaves
I suppose I'm stuck with the existing syntax then.
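For the record, here is roughly what I had been trying with the
existing syntax (just a sketch: the resource names master-ip,
slave-ip1, slave-ip2 and the pgsql-ha clone are placeholders from my
lab, and the scores would surely need tuning):

  <!-- pin the "master" IP to wherever the master role runs -->
  <rsc_colocation id="ip-master-with-master" score="INFINITY"
                  rsc="master-ip"
                  with-rsc="pgsql-ha" with-rsc-role="Master"/>

  <!-- keep each "slave" IP with a slave instance -->
  <rsc_colocation id="ip1-with-slave" score="100"
                  rsc="slave-ip1"
                  with-rsc="pgsql-ha" with-rsc-role="Slave"/>
  <rsc_colocation id="ip2-with-slave" score="100"
                  rsc="slave-ip2"
                  with-rsc="pgsql-ha" with-rsc-role="Slave"/>

  <!-- anti-colocate the slave IPs so they land on different slaves -->
  <rsc_colocation id="ip1-apart-from-ip2" score="-INFINITY"
                  rsc="slave-ip1" with-rsc="slave-ip2"/>

Whether a combination like this avoids the master relocation I
mentioned above is exactly the part I'm not confident about.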