[ClusterLabs] Colocation constraint for grouping all master-mode stateful resources with important stateless resources
Ken Gaillot
kgaillot at redhat.com
Mon Mar 26 15:33:12 EDT 2018
On Mon, 2018-03-26 at 15:42 +0000, Sam Gardner wrote:
> Thanks, Andrei and Alberto.
>
> Alberto, I will look into the node-constraint parameters, though I
> suspect Andrei is correct - my "base" resource is DRBDFS in this
> case, and the issue I'm seeing is that a failure in my secondary
> resources does not cause the other secondary resources nor the "base"
> resource to move to the other node.
>
> Andrei, I have no restrictions on the particulars of the rules that
> I'm putting in place - I can completely discard the rules that I have
> implemented already.
>
> Here's a simple diagram:
> https://imgur.com/a/5LTmJ
>
> These are my restrictions:
> 1) If any of DRBD-Master, DRBDFS, INIF-Master, or OUTIF-Master moves
> to D2, all other resources should move to D2.
> 2) If DRBDFS or DRBD-Master cannot run on either D1 or D2, all other
> resources should be stopped.
> 3) If INIF-Master or OUTIF-Master cannot run on either D1 or D2, no
> other resources should be stopped.
4) Keep INIF-Master with a working IN interface and OUTIF-Master with a
working OUT interface.
One problem is that 4) conflicts with 1). It's possible for INIF to be
working on only one node and OUTIF to be working only on the other
node.
I'm thinking you want something like this:
* Don't constrain DRBD-Master
* Colocate DRBDFS with DRBD-Master, +INFINITY
* Colocate INIF-Master and OUTIF-Master with DRBDFS, +INFINITY (using
separate individual constraints)
* Keep your INIF/OUTIF rules, but use a finite negative score. That
way, they *must* stay with DRBDFS and DRBD-Master (due to the previous
constraints), but will *prefer* a node where their interface is up. If
one of them is more important than the other, give it a stronger score,
to break the tie when one interface is up on each node.
* Order DRBDFS start after DRBD-Master promote
No order is necessary on INIF/OUTIF, since an IP can work regardless of
the file system.
That should meet your intentions.
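As an untested sketch in pcs syntax, using the resource names from
your diagram (the ethmonitor attribute names and the finite scores
are placeholders; substitute whatever your configuration actually
uses):

  # Mandatory: the filesystem follows the DRBD master
  pcs constraint colocation add DRBDFS with master DRBD-Master INFINITY

  # Mandatory: each VIP master follows the filesystem (two separate
  # constraints, not a set)
  pcs constraint colocation add master INIF-Master with DRBDFS INFINITY
  pcs constraint colocation add master OUTIF-Master with DRBDFS INFINITY

  # Promote DRBD before the filesystem starts
  pcs constraint order promote DRBD-Master then start DRBDFS

  # Finite negative scores: prefer, but do not require, a node whose
  # interface is up (assumes ethmonitor publishes 1 when the link is
  # up; the unequal scores are one way to break the tie)
  pcs constraint location INIF-Master rule score=-5000 ethmonitor-in ne 1
  pcs constraint location OUTIF-Master rule score=-10000 ethmonitor-out ne 1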
> This sounds like a particular constraint that may not be possible to
> do per our discussions in this thread.
>
> I can get pretty close with a workaround - I'm using ethmonitor on
> the Master/Slave resources as you can see in the config, so if I
> create new "heartbeat:Dummy" active resources with the same
> ethmonitor location constraint, unplugging the interface will move
> everything over.
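For reference, that workaround might look roughly like this in pcs
syntax (the probe resource name and the ethmonitor attribute name are
hypothetical):

  # Hypothetical sketch: an always-on probe pinned away from nodes
  # whose IN interface is down; DRBDFS, and everything colocated
  # with it, then follows the probe
  pcs resource create in-probe ocf:heartbeat:Dummy
  pcs constraint location in-probe rule score=-INFINITY ethmonitor-in ne 1
  pcs constraint colocation add DRBDFS with in-probe INFINITY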
>
> However, a failure of a different type on the master/slave VIPs that
> would not also be apparent on the dummy base resource would not cause
> a failover of the entire group, which isn't ideal (though admittedly
> unlikely in this particular use case).
>
> Thanks much for all of the help,
> --
> Sam Gardner
> Trustwave | SMART SECURITY ON DEMAND
>
> On 3/25/18, 6:06 AM, "Users on behalf of Andrei Borzenkov" <users-bounces at clusterlabs.org on behalf of arvidjaar at gmail.com> wrote:
>
> > 25.03.2018 10:21, Alberto Mijares wrote:
> > > On Sat, Mar 24, 2018 at 2:16 PM, Andrei Borzenkov <arvidjaar at gmail.com> wrote:
> > > > 23.03.2018 20:42, Sam Gardner wrote:
> > > > > Thanks, Ken.
> > > > >
> > > > > I just want all master-mode resources to be running wherever
> > > > > DRBDFS is running (essentially). If the cluster detects that
> > > > > any of the master-mode resources can't run on the current
> > > > > node (but can run on the other per ethmon), all other master-
> > > > > mode resources as well as DRBDFS should move over to the
> > > > > other node.
> > > > >
> > > > > The current set of constraints I have will let DRBDFS move to
> > > > > the standby node and "take" the Master mode resources with
> > > > > it, but the Master mode resources failing over to the other
> > > > > node won't take the other Master resources or DRBDFS.
> > > > >
> > > >
> > > > I do not think it is possible. There is no way to express a
> > > > symmetrical colocation rule like "always run A and B together".
> > > > You start with A and place B relative to A; but then A is not
> > > > affected by B's state. Attempting to also place A relative to B
> > > > would create a loop and is ignored. See also old discussion:
> > > >
> > >
> > >
> > > It is possible. Check this thread
> > > https://lists.clusterlabs.org/pipermail/users/2017-November/006788.html
> > >
> >
> > I do not see how it answers the question. It explains how to use
> > criteria other than node name for colocating resources, but it does
> > not change the basic fact that colocation is asymmetrical. Actually,
> > that thread explicitly suggests "Pick one resource as your base
> > resource that everything else should go along with".
> >
> > If you actually have a configuration that somehow implements
> > symmetrical colocation between resources, I would appreciate it if
> > you could post your configuration.
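To illustrate the asymmetry Andrei describes with a minimal example
(resource names are illustrative):

  # "B runs where A runs": B follows A, but A's placement ignores
  # B's health; the mirror-image constraint (A with B) would form a
  # loop and is ignored
  pcs constraint colocation add B with A INFINITY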
> >
> > Regarding the original problem, the root cause is slightly
> > different though.
> >
> > @Sam, the behavior you describe is correct for the constraints you
> > show. When colocating with a resource set, all resources in the set
> > must be active on the same node. It means that in your case of
> >
> > <rsc_colocation
> >     id="pcs_rsc_colocation_set_drbdfs_set_drbd.master_inside-interface-sameip.master_outside-interface-sameip.master"
> >     score="INFINITY">
> >   <resource_set id="pcs_rsc_set_drbdfs" sequential="false">
> >     <resource_ref id="drbdfs"/>
> >   </resource_set>
> >   <resource_set
> >       id="pcs_rsc_set_drbd.master_inside-interface-sameip.master_outside-interface-sameip.master"
> >       role="Master" sequential="false">
> >     <resource_ref id="drbd.master"/>
> >     <resource_ref id="inside-interface-sameip.master"/>
> >     <resource_ref id="outside-interface-sameip.master"/>
> >   </resource_set>
> > </rsc_colocation>
> >
> > if one IP resource (master) is moved to another node, the dependent
> > resource (drbdfs) simply cannot run anywhere.
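An informal pairwise reading of the set constraint above (not
generated output) shows why drbdfs depends on all three masters at
once:

  # drbdfs must be colocated with every master in the set; if any
  # one master is forced to the other node, no node satisfies all
  # three, so drbdfs stops
  pcs constraint colocation add drbdfs with master drbd.master INFINITY
  pcs constraint colocation add drbdfs with master inside-interface-sameip.master INFINITY
  pcs constraint colocation add drbdfs with master outside-interface-sameip.master INFINITY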
> >
> > Before discussing low-level Pacemaker implementation details, you
> > really need a high-level model of the resource relationships. On one
> > hand you apparently intend to always run everything on the same
> > node; on the other hand you have two rules that independently decide
> > where to place two resources. That does not fit together.
--
Ken Gaillot <kgaillot at redhat.com>