[ClusterLabs] ClusterIP won't return to recovered node
Klaus Wenninger
kwenning at redhat.com
Wed Jun 28 03:41:25 EDT 2017
On 06/27/2017 09:22 PM, Dan Ragle wrote:
>
>
> On 6/19/2017 5:32 AM, Klaus Wenninger wrote:
>> On 06/16/2017 09:08 PM, Ken Gaillot wrote:
>>> On 06/16/2017 01:18 PM, Dan Ragle wrote:
>>>>
>>>> On 6/12/2017 10:30 AM, Ken Gaillot wrote:
>>>>> On 06/12/2017 09:23 AM, Klaus Wenninger wrote:
>>>>>> On 06/12/2017 04:02 PM, Ken Gaillot wrote:
>>>>>>> On 06/10/2017 10:53 AM, Dan Ragle wrote:
>>>>>>>> So I guess my bottom line question is: How does one tell Pacemaker
>>>>>>>> that the individual legs of globally unique clones should *always*
>>>>>>>> be spread across the available nodes whenever possible, regardless
>>>>>>>> of the number of processes on any one of the nodes? For kicks I did try:
>>>>>>>>
>>>>>>>> pcs constraint location ClusterIP:0 prefers node1-pcs=INFINITY
>>>>>>>>
>>>>>>>> but it responded with an error about an invalid character (:).
>>>>>>> There isn't a way currently. It will try to do that when initially
>>>>>>> placing them, but once they've moved together, there's no simple
>>>>>>> way to tell them to move. I suppose a workaround might be to create
>>>>>>> a dummy resource that you constrain to the node holding both
>>>>>>> instances, so that it looks like the other node is less busy.
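An untested sketch of that workaround, assuming both clone instances have
ended up on node2-pcs and using a hypothetical dummy called Filler:

# pcs resource create Filler ocf:pacemaker:Dummy
# pcs constraint location Filler prefers node2-pcs=INFINITY

The extra resource pinned to node2-pcs should make node1-pcs look less
busy the next time the clone instances are placed.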
>>>>>> Another ugly dummy-resource idea - maybe less fragile -
>>>>>> and not tried out:
>>>>>> One could have 2 dummy resources that would each prefer
>>>>>> to live on a different node - no issue with primitives - and
>>>>>> that are colocated with (i.e. depend on) ClusterIP.
>>>>>> Wouldn't that pull them apart once possible?
>>>>> Sounds like a good idea
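For reference, an untested sketch of that idea, with hypothetical dummies
DummyA/DummyB and the node names used later in this thread; note that the
dummies depend on ClusterIP-clone, not the other way round:

# pcs resource create DummyA ocf:pacemaker:Dummy
# pcs resource create DummyB ocf:pacemaker:Dummy
# pcs constraint location DummyA prefers node1-pcs=INFINITY
# pcs constraint location DummyB prefers node2-pcs=INFINITY
# pcs constraint colocation add DummyA with ClusterIP-clone INFINITY
# pcs constraint colocation add DummyB with ClusterIP-clone INFINITY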
>>>> Hmmmm... still no luck with this.
>>>>
>>>> Based on your suggestion, I thought this would work (leaving out all
>>>> the status displays this time):
>>>>
>>>> # pcs resource create Test1 systemd:test1
>>>> # pcs resource create Test2 systemd:test2
>>>> # pcs constraint location Test1 prefers node1-pcs=INFINITY
>>>> # pcs constraint location Test2 prefers node1-pcs=INFINITY
>>>> # pcs resource create Test3 systemd:test3
>>>> # pcs resource create Test4 systemd:test4
>>>> # pcs constraint location Test3 prefers node1-pcs=INFINITY
>>>> # pcs constraint location Test4 prefers node2-pcs=INFINITY
>>>> # pcs resource create ClusterIP ocf:heartbeat:IPaddr2 ip=162.220.75.138 nic=bond0 cidr_netmask=24
>>>> # pcs resource meta ClusterIP resource-stickiness=0
>>>> # pcs resource clone ClusterIP clone-max=2 clone-node-max=2 globally-unique=true
>>>> # pcs constraint colocation add ClusterIP-clone with Test3 INFINITY
>>>> # pcs constraint colocation add ClusterIP-clone with Test4 INFINITY
>>
>> What I had meant was the other way round: so that, in trying to have
>> both Test3 and Test4 running, Pacemaker would have to have instances
>> of ClusterIP running on both nodes, but ClusterIP wouldn't depend on
>> Test3 and Test4.
>>
>
> Klaus, so did you mean:
>
> # pcs constraint colocation add Test3 with ClusterIP-clone INFINITY
> # pcs constraint colocation add Test4 with ClusterIP-clone INFINITY
>
> ? I actually did try that (with the rest of the recipe the same) and
> ended up with the same problem I started with. Immediately after setup
> both clone instances were on node2. After standby/unstandby of node2
> they (the clones) did in fact split; but if I then followed that with
> a standby/unstandby of node1, they both remained on node2.
As said, I haven't tried it.
You could play with the priority of Test3 & Test4 (raise it above the clone's).
And instead of 'prefers' you could use 'avoids'.
Are both Test3 & Test4 then running on node2? If yes, the -INFINITY
location constraint might do the trick.
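Untested, but something along these lines, assuming ClusterIP-clone keeps
the default priority of 0 and after removing the existing 'prefers'
constraints on Test3/Test4:

# pcs resource meta Test3 priority=10
# pcs resource meta Test4 priority=10
# pcs constraint location Test3 avoids node2-pcs=INFINITY
# pcs constraint location Test4 avoids node1-pcs=INFINITY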
>
> Dan
>
>>>>
>>>> But that simply refuses to run ClusterIP at all ("Resource
>>>> ClusterIP:0/1 cannot run anywhere"). And if I change the last two
>>>> colocation constraints to a numeric score then it runs, but with the
>>>> same problem I had before (both ClusterIP instances on one node).
>>>>
>>>> I also tried it reversing the colocation definition (add Test3 with
>>>> ClusterIP-clone) and trying differing combinations of scores between
>>>> the location and colocation constraints, still with no luck.
>>>>
>>>> Thanks,
>>>>
>>>> Dan
>>> Ah, of course - colocating the clone with both Test3 and Test4 means
>>> they would all have to run on the same node, which is impossible.
>>>
>>> FYI, you can create dummy resources with ocf:pacemaker:Dummy so you
>>> don't have to write your own agents.
>>>
>>> OK, this is getting even hackier, but I'm thinking you can use
>>> utilization for this:
>>>
>>> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm139683960632560
>>>
>>>
>>> * Create two dummy resources, each with a -INFINITY location preference
>>> for one of the nodes, so each is allowed to run on only one node.
>>>
>>> * Set the priority meta-attribute to a positive number on all your real
>>> resources, and leave the dummies at 0 (so if the cluster can't run all
>>> of them, it will stop the dummies first).
>>>
>>> * Set placement-strategy=utilization.
>>>
>>> * Define a utilization attribute, with values for each node and
>>> resource like this:
>>> ** Set a utilization of 1 on all resources except the dummies and the
>>> clone, so that their total utilization is N.
>>> ** Set a utilization of 100 on the dummies and the clone.
>>> ** Set a utilization capacity of 200 + N on each node.
>>>
>>> (I'm assuming you never expect to have more than 99 other resources. If
>>> that's not the case, just raise the 100 usage accordingly.)
>>>
>>> With those values, if only one node is up, that node can host all the
>>> real resources (including both clone instances), with the dummies
>>> stopped. If both nodes are up, the only way the cluster can run all
>>> resources (including the clone instances and dummies) is to spread the
>>> clone instances out.
>>>
>>> Again, it's hacky, and I haven't tested it, but I think it would work.
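An untested sketch of that scheme, using hypothetical names (a utilization
attribute 'cap', dummies Dummy1/Dummy2) and assuming N=10 other resources,
so each node gets a capacity of 210:

# pcs property set placement-strategy=utilization
# pcs resource create Dummy1 ocf:pacemaker:Dummy
# pcs resource create Dummy2 ocf:pacemaker:Dummy
# pcs constraint location Dummy1 avoids node2-pcs=INFINITY
# pcs constraint location Dummy2 avoids node1-pcs=INFINITY
# pcs resource utilization Dummy1 cap=100
# pcs resource utilization Dummy2 cap=100
# pcs resource utilization ClusterIP cap=100
# pcs resource meta ClusterIP-clone priority=10
# pcs node utilization node1-pcs cap=210
# pcs node utilization node2-pcs cap=210

plus 'pcs resource utilization <rsc> cap=1' and 'pcs resource meta <rsc>
priority=10' for each of the other real resources. The utilization is set
on the ClusterIP primitive, so each clone instance should consume 100.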
>>>
>>
>
> _______________________________________________
> Users mailing list: Users at clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
>
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org