[ClusterLabs] Colocation constraint for grouping all master-mode stateful resources with important stateless resources

Sam Gardner SGardner at trustwave.com
Mon Mar 26 11:42:54 EDT 2018


Thanks, Andrei and Alberto.

Alberto, I will look into the node-constraint parameters, though I suspect Andrei is correct: my "base" resource is DRBDFS in this case, and the issue I'm seeing is that a failure of one of my secondary resources does not cause the other secondary resources or the "base" resource to move to the other node.

Andrei, I have no restrictions on the particulars of the rules that I'm putting in place - I can completely discard the rules that I have implemented already.

Here's a simple diagram:
https://imgur.com/a/5LTmJ

These are my restrictions:
1) If any of DRBD-Master, DRBDFS, INIF-Master, or OUTIF-Master moves to D2, all other resources should move to D2.
2) If DRBDFS or DRBD-Master cannot run on either D1 or D2, all other resources should be stopped.
3) If INIF-Master or OUTIF-Master cannot run on either D1 or D2, no other resources should be stopped.
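
The "everything moves together" part of rule 1 is what my existing set constraint (quoted further down in Andrei's reply) tries to express. In pcs form it would look roughly like this - just a sketch, and the exact syntax may vary with the pcs version:

    pcs constraint colocation set drbdfs sequential=false \
        set drbd.master inside-interface-sameip.master outside-interface-sameip.master \
            role=Master sequential=false \
        setoptions score=INFINITY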


This sounds like the kind of constraint that may not be possible to express, per our discussion in this thread.

I can get pretty close with a workaround. I'm already using ethmonitor on the Master/Slave resources, as you can see in the config, so if I create new "heartbeat:Dummy" active resources with the same ethmonitor location constraint, unplugging the interface will move everything over.
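
Roughly, that workaround would be (sketch only; the dummy resource names and eth1/eth2 are placeholders, and the location rules assume ethmonitor's default ethmonitor-<interface> node attribute):

    # Placeholder names; adjust to the real interfaces and resource IDs.
    pcs resource create dummy-inif ocf:heartbeat:Dummy
    pcs resource create dummy-outif ocf:heartbeat:Dummy
    # Same ethmonitor-driven location rules the VIP masters already use:
    pcs constraint location dummy-inif rule score=-INFINITY ethmonitor-eth1 ne 1
    pcs constraint location dummy-outif rule score=-INFINITY ethmonitor-eth2 ne 1
    # Tie the "base" resource to the dummies so a dead link pulls DRBDFS
    # (and everything colocated with it) to the other node.
    pcs constraint colocation add drbdfs with dummy-inif INFINITY
    pcs constraint colocation add drbdfs with dummy-outif INFINITY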

However, a failure of a different type on the master/slave VIPs, one that would not also show up on the dummy resources, would not cause a failover of the entire group, which isn't ideal (though admittedly unlikely in this particular use case).

Thanks much for all of the help,
-- 
Sam Gardner
Trustwave | SMART SECURITY ON DEMAND

On 3/25/18, 6:06 AM, "Users on behalf of Andrei Borzenkov" <users-bounces at clusterlabs.org on behalf of arvidjaar at gmail.com> wrote:

>On 25.03.2018 10:21, Alberto Mijares wrote:
>> On Sat, Mar 24, 2018 at 2:16 PM, Andrei Borzenkov <arvidjaar at gmail.com> wrote:
>>> On 23.03.2018 20:42, Sam Gardner wrote:
>>>> Thanks, Ken.
>>>>
>>>> I just want all master-mode resources to be running wherever DRBDFS is running (essentially). If the cluster detects that any of the master-mode resources can't run on the current node (but can run on the other per ethmon), all other master-mode resources as well as DRBDFS should move over to the other node.
>>>>
>>>> The current set of constraints I have will let DRBDFS move to the standby node and "take" the Master mode resources with it, but the Master mode resources failing over to the other node won't take the other Master resources or DRBDFS.
>>>>
>>>
>>> I do not think it is possible. There is no way to express a symmetrical
>>> colocation rule like "always run A and B together". You start with A and
>>> place B relative to A; but then A is not affected by B's state.
>>> Attempting to also place A relative to B would create a loop and is
>>> ignored. See also this old discussion:
>>>
>> 
>> 
>> It is possible. Check this thread
>> https://lists.clusterlabs.org/pipermail/users/2017-November/006788.html
>> 
>
>I do not see how it answers the question. It explains how to use criteria
>other than node name for colocating resources, but it does not change the
>basic fact that colocation is asymmetric. Actually, that thread explicitly
>suggests "Pick one resource as your base resource that everything else
>should go along with".
>
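
For illustration, that base-resource pattern is the plain one-way colocation; A and B here are placeholders:

    # B is placed wherever A runs.
    pcs constraint colocation add B with A INFINITY
    # If B fails or is banned from a node, A's placement is not
    # re-evaluated; the dependency only goes one way.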
>If you actually have a configuration that somehow implements symmetrical
>colocation between resources, I would appreciate it if you could post
>your configuration.
>
>Regarding the original problem, the root cause is slightly different though.
>
>@Sam, the behavior you describe is correct for the constraints you show.
>When colocating with a resource set, all resources in the set must be
>active on the same node. It means that, in your case of
><rsc_colocation id="pcs_rsc_colocation_set_drbdfs_set_drbd.master_inside-interface-sameip.master_outside-interface-sameip.master"
>                score="INFINITY">
>  <resource_set id="pcs_rsc_set_drbdfs" sequential="false">
>    <resource_ref id="drbdfs"/>
>  </resource_set>
>  <resource_set id="pcs_rsc_set_drbd.master_inside-interface-sameip.master_outside-interface-sameip.master"
>                role="Master" sequential="false">
>    <resource_ref id="drbd.master"/>
>    <resource_ref id="inside-interface-sameip.master"/>
>    <resource_ref id="outside-interface-sameip.master"/>
>  </resource_set>
></rsc_colocation>
>
>if one IP resource (master) is moved to another node, the dependent
>resource (drbdfs) simply cannot run anywhere.
>
>Before discussing low-level Pacemaker implementation details, you really
>need a high-level model of the resource relationships. On one hand you
>apparently intend to always run everything on the same node; on the other
>hand you have two rules that independently decide where to place two
>resources. Those do not fit together.
>_______________________________________________
>Users mailing list: Users at clusterlabs.org
>https://lists.clusterlabs.org/mailman/listinfo/users
>
>Project Home: http://www.clusterlabs.org
>Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
>Bugs: http://bugs.clusterlabs.org


More information about the Users mailing list