<div dir="ltr">Yes, Ulrich. Somehow I missed following up on that.<div>I will do both: configure stickiness to INFINITY and use utilization attributes.</div><div>This should probably take care of it. </div><div><br></div><div>Thanks</div><div>Nikhil</div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Oct 18, 2016 at 11:45 AM, Ulrich Windl <span dir="ltr"><<a href="mailto:Ulrich.Windl@rz.uni-regensburg.de" target="_blank">Ulrich.Windl@rz.uni-regensburg.de</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">>>> Nikhil Utane <<a href="mailto:nikhil.subscribed@gmail.com">nikhil.subscribed@gmail.com</a>> schrieb am 17.10.2016 um 16:46 in<br>
Nachricht<br>
<CAGNWmJUfVS1bcSfSG4=<a href="mailto:Rmu5u9ckC4HyUgE3psakrnWQsbi1O2w@mail.gmail.com">Rmu5u9ckC4HyUgE3psakrnWQsbi1O2w@mail.gmail.com</a>>:<br>
<span class="">> This is driving me insane.<br>
<br>
</span>Why don't you try the utilization approach?<br>
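As a rough sketch of what that utilization approach could look like with pcs (an illustration, not configuration from the thread: the attribute name `token` and the `placement-strategy` value are assumptions; node and resource names are taken from the messages below):

```shell
# Illustrative sketch only: give each node capacity for one "token" and
# make each resource consume one token, so no node can host two
# resources.  The attribute name "token" is arbitrary.

# Utilization is only enforced when placement-strategy is not "default".
pcs property set placement-strategy=utilization

# One unit of capacity per node.
pcs node utilization Redund_CU1_WB30 token=1
pcs node utilization Redund_CU2_WB30 token=1
pcs node utilization Redund_CU3_WB30 token=1
pcs node utilization Redun_CU4_Wb30 token=1
pcs node utilization Redund_CU5_WB30 token=1

# Each resource requires one unit.
pcs resource utilization cu_2 token=1
pcs resource utilization cu_3 token=1
pcs resource utilization cu_4 token=1
pcs resource utilization cu_5 token=1
```

Note Vladislav's caveat later in the thread: utilization limits where a resource may be placed, but (depending on the strategy) is not necessarily used to balance resources across nodes.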
<div class="HOEnZb"><div class="h5"><br>
><br>
> This is how the resources were started. Redund_CU1_WB30 was the DC which I<br>
> rebooted.<br>
> cu_4 (ocf::redundancy:RedundancyRA): Started Redund_CU1_WB30<br>
> cu_2 (ocf::redundancy:RedundancyRA): Started Redund_CU5_WB30<br>
> cu_3 (ocf::redundancy:RedundancyRA): Started Redun_CU4_Wb30<br>
><br>
> Since the standby node was not up, I was expecting resource cu_4 to be<br>
> waiting to be scheduled.<br>
> But then it re-arranged everything as below.<br>
> cu_4 (ocf::redundancy:RedundancyRA): Started Redun_CU4_Wb30<br>
> cu_2 (ocf::redundancy:RedundancyRA)<wbr>: Stopped<br>
> cu_3 (ocf::redundancy:RedundancyRA)<wbr>: Started Redund_CU5_WB30<br>
><br>
> There is not much information available in the logs on the new DC. It just<br>
> shows what it has decided to do but nothing to suggest why it did it that<br>
> way.<br>
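One way to get more insight than the logs give (a suggestion, not something from the thread) is to ask the policy engine to show its allocation scores with crm_simulate:

```shell
# Show the allocation scores behind the current placement decisions:
# -L uses the live cluster state, -s prints the score each node gets
# for each resource.
crm_simulate -L -s
```

The score listing usually makes clear which constraint or stickiness value tipped a placement one way or the other.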
><br>
> notice: Start cu_4 (Redun_CU4_Wb30)<br>
> notice: Stop cu_2 (Redund_CU5_WB30)<br>
> notice: Move cu_3 (Started Redun_CU4_Wb30 -> Redund_CU5_WB30)<br>
><br>
> I have default stickiness set to 100, which is higher than any score that I<br>
> have configured.<br>
> I have migration_threshold set to 1. Should I bump that up instead?<br>
><br>
> -Thanks<br>
> Nikhil<br>
><br>
> On Sat, Oct 15, 2016 at 12:36 AM, Ken Gaillot <<a href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>> wrote:<br>
><br>
>> On 10/14/2016 06:56 AM, Nikhil Utane wrote:<br>
>> > Hi,<br>
>> ><br>
>> > Thank you for the responses so far.<br>
>> > I added reverse colocation as well. However seeing some other issue in<br>
>> > resource movement that I am analyzing.<br>
>> ><br>
>> > Thinking further on this, why doesn't "a not with b" imply "b<br>
>> > not with a"?<br>
>> > Because wouldn't putting "b with a" violate "a not with b"?<br>
>> ><br>
>> > Can someone confirm that colocation is required to be configured both<br>
>> ways?<br>
>><br>
>> The anti-colocation should only be defined one-way. Otherwise, you get a<br>
>> dependency loop (as seen in logs you showed elsewhere).<br>
>><br>
>> The one-way constraint is enough to keep the resources apart. However,<br>
>> the question is whether the cluster might move resources around<br>
>> unnecessarily.<br>
>><br>
>> For example, "A not with B" means that the cluster will place B first,<br>
>> then place A somewhere else. So, if B's node fails, can the cluster<br>
>> decide that A's node is now the best place for B, and move A to a free<br>
>> node, rather than simply start B on the free node?<br>
>><br>
>> The cluster does take dependencies into account when placing a resource,<br>
>> so I would hope that wouldn't happen. But I'm not sure. Having some<br>
>> stickiness might help, so that A has some preference against moving.<br>
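A sketch of the stickiness Ken suggests, set as a cluster-wide resource default with pcs (the INFINITY value matches what the top of the thread settles on; any value higher than the competing placement scores would also work):

```shell
# Make every resource strongly prefer the node it is already on, so a
# node failure or recovery does not trigger a reshuffle of healthy
# resources.
pcs resource defaults resource-stickiness=INFINITY
```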
>><br>
>> > -Thanks<br>
>> > Nikhil<br>
>> ><br>
>> ><br>
>> > On Fri, Oct 14, 2016 at 1:09 PM, Vladislav Bogdanov<br>
>> > <<a href="mailto:bubble@hoster-ok.com">bubble@hoster-ok.com</a> <mailto:<a href="mailto:bubble@hoster-ok.com">bubble@hoster-ok.com</a>>> wrote:<br>
>> ><br>
>> > On October 14, 2016 10:13:17 AM GMT+03:00, Ulrich Windl<br>
>> > <<a href="mailto:Ulrich.Windl@rz.uni-regensburg.de">Ulrich.Windl@rz.uni-regensburg.de</a><br>
>> > <mailto:<a href="mailto:Ulrich.Windl@rz.uni-regensburg.de">Ulrich.Windl@rz.uni-regensburg.de</a>>> wrote:<br>
>> > >>>> Nikhil Utane <<a href="mailto:nikhil.subscribed@gmail.com">nikhil.subscribed@gmail.com</a><br>
>> > <mailto:<a href="mailto:nikhil.subscribed@gmail.com">nikhil.subscribed@gmail.com</a>>> schrieb am 13.10.2016 um<br>
>> > >16:43 in<br>
>> > >Nachricht<br>
>> > ><<a href="mailto:CAGNWmJUbPucnBGXroHkHSbQ0LXovwsLFPkUPg1R8gJqRFqM9Dg@mail.gmail.com">CAGNWmJUbPucnBGXroHkHSbQ0LXovwsLFPkUPg1R8gJqRFqM9Dg@mail.gmail.com</a><br>
>> > <mailto:<a href="mailto:CAGNWmJUbPucnBGXroHkHSbQ0LXovwsLFPkUPg1R8gJqRFqM9Dg@">CAGNWmJUbPucnBGXroHkHSbQ0LXovwsLFPkUPg1R8gJqRFqM9Dg@</a><br>
>> <a href="http://mail.gmail.com" rel="noreferrer" target="_blank">mail.gmail.com</a>>>:<br>
>> > >> Ulrich,<br>
>> > >><br>
>> > >> I have 4 resources only (not 5, nodes are 5). So then I only need<br>
>> 6<br>
>> > >> constraints, right?<br>
>> > >><br>
>> > >> [,1] [,2] [,3] [,4] [,5] [,6]<br>
>> > >> [1,] "A" "A" "A" "B" "B" "C"<br>
>> > >> [2,] "B" "C" "D" "C" "D" "D"<br>
>> > ><br>
>> > >Sorry for my confusion. As Andrei Borzenkov said in<br>
>> > ><<a href="mailto:CAA91j0W%2BepAHFLg9u6VX_X8LgFkf9Rp55g3nocY4oZNA9BbZ%2Bg@mail.gmail.com">CAA91j0W+epAHFLg9u6VX_X8LgFkf9Rp55g3nocY4oZNA9BbZ+g@mail.gmail.com</a><br>
>> > <mailto:<a href="mailto:CAA91j0W%252BepAHFLg9u6VX_X8LgFkf9Rp55g3nocY4oZNA9BbZ%25">CAA91j0W%2BepAHFLg9u6VX_X8LgFkf9Rp55g3nocY4oZNA9BbZ%</a><br>
>> <a href="mailto:2Bg@mail.gmail.com">2Bg@mail.gmail.com</a>>><br>
>> > >you probably have to add (A, B) _and_ (B, A)! Thinking about it, I<br>
>> > >wonder whether an easier solution would be using "utilization": If<br>
>> > >every node has one token to give, and every resource needs one<br>
>> token, no<br>
>> > >two resources will run on one node. Sounds like an easier solution<br>
>> to<br>
>> > >me.<br>
>> > ><br>
>> > >Regards,<br>
>> > >Ulrich<br>
>> > ><br>
>> > ><br>
>> > >><br>
>> > >> I understand that if I configure a constraint of R1 with R2 score as<br>
>> > >> -infinity, then the same applies for R2 with R1 score as -infinity<br>
>> > >(don't<br>
>> > >> have to configure it explicitly).<br>
>> > >> I am not having a problem of multiple resources getting scheduled<br>
>> on<br>
>> > >the<br>
>> > >> same node. Rather, one working resource is unnecessarily getting<br>
>> > >relocated.<br>
>> > >><br>
>> > >> -Thanks<br>
>> > >> Nikhil<br>
>> > >><br>
>> > >><br>
>> > >> On Thu, Oct 13, 2016 at 7:45 PM, Ulrich Windl <<br>
>> > >> <a href="mailto:Ulrich.Windl@rz.uni-regensburg.de">Ulrich.Windl@rz.uni-regensburg.de</a><br>
>> > <mailto:<a href="mailto:Ulrich.Windl@rz.uni-regensburg.de">Ulrich.Windl@rz.uni-regensburg.de</a>>> wrote:<br>
>> > >><br>
>> > >>> Hi!<br>
>> > >>><br>
>> > >>> Don't you need 10 constraints, excluding every possible pair of<br>
>> your<br>
>> > >5<br>
>> > >>> resources (named A-E here), like in this table (produced with R):<br>
>> > >>><br>
>> > >>> [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]<br>
>> > >>> [1,] "A" "A" "A" "A" "B" "B" "B" "C" "C" "D"<br>
>> > >>> [2,] "B" "C" "D" "E" "C" "D" "E" "D" "E" "E"<br>
>> > >>><br>
>> > >>> Ulrich<br>
>> > >>><br>
>> > >>> >>> Nikhil Utane <<a href="mailto:nikhil.subscribed@gmail.com">nikhil.subscribed@gmail.com</a><br>
>> > <mailto:<a href="mailto:nikhil.subscribed@gmail.com">nikhil.subscribed@gmail.com</a>>> schrieb am 13.10.2016<br>
>> > >um<br>
>> > >>> 15:59 in<br>
>> > >>> Nachricht<br>
>> > >>><br>
>> > ><<a href="mailto:CAGNWmJW0CWMr3bvR3L9xZCAcJUzyczQbZEzUzpaJxi%2BPn7Oj_A@mail.gmail.com">CAGNWmJW0CWMr3bvR3L9xZCAcJUzyczQbZEzUzpaJxi+Pn7Oj_A@mail.gmail.com</a><br>
>> > <mailto:<a href="mailto:CAGNWmJW0CWMr3bvR3L9xZCAcJUzyczQbZEzUzpaJxi%252BPn7Oj_">CAGNWmJW0CWMr3bvR3L9xZCAcJUzyczQbZEzUzpaJxi%2BPn7Oj_</a><br>
>> <a href="mailto:A@mail.gmail.com">A@mail.gmail.com</a>>>:<br>
>> > >>> > Hi,<br>
>> > >>> ><br>
>> > >>> > I have 5 nodes and 4 resources configured.<br>
>> > >>> > I have configured constraint such that no two resources can be<br>
>> > >>> co-located.<br>
>> > >>> > I brought down a node (which happened to be the DC). I was<br>
>> expecting<br>
>> > >the<br>
>> > >>> > resource on the failed node would be migrated to the 5th<br>
>> waiting<br>
>> > >node<br>
>> > >>> (that<br>
>> > >>> > is not running any resource).<br>
>> > >>> > However what happened was the failed node resource was started<br>
>> on<br>
>> > >another<br>
>> > >>> > active node (after stopping its existing resource) and that<br>
>> > >node's<br>
>> > >>> > resource was moved to the waiting node.<br>
>> > >>> ><br>
>> > >>> > What could I be doing wrong?<br>
>> > >>> ><br>
>> > >>> > <nvpair id="cib-bootstrap-options-have-watchdog" value="true"<br>
>> > >>> > name="have-watchdog"/><br>
>> > >>> > <nvpair id="cib-bootstrap-options-dc-version"<br>
>> > >value="1.1.14-5a6cdd1"<br>
>> > >>> > name="dc-version"/><br>
>> > >>> > <nvpair id="cib-bootstrap-options-cluster-infrastructure"<br>
>> > >>> value="corosync"<br>
>> > >>> > name="cluster-infrastructure"/><br>
>> > >>> > <nvpair id="cib-bootstrap-options-stonith-enabled"<br>
>> value="false"<br>
>> > >>> > name="stonith-enabled"/><br>
>> > >>> > <nvpair id="cib-bootstrap-options-no-quorum-policy"<br>
>> value="ignore"<br>
>> > >>> > name="no-quorum-policy"/><br>
>> > >>> > <nvpair id="cib-bootstrap-options-default-action-timeout"<br>
>> > >value="240"<br>
>> > >>> > name="default-action-timeout"/><br>
>> > >>> > <nvpair id="cib-bootstrap-options-symmetric-cluster"<br>
>> value="false"<br>
>> > >>> > name="symmetric-cluster"/><br>
>> > >>> ><br>
>> > >>> > # pcs constraint<br>
>> > >>> > Location Constraints:<br>
>> > >>> > Resource: cu_2<br>
>> > >>> > Enabled on: Redun_CU4_Wb30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU2_WB30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU3_WB30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU5_WB30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU1_WB30 (score:0)<br>
>> > >>> > Resource: cu_3<br>
>> > >>> > Enabled on: Redun_CU4_Wb30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU2_WB30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU3_WB30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU5_WB30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU1_WB30 (score:0)<br>
>> > >>> > Resource: cu_4<br>
>> > >>> > Enabled on: Redun_CU4_Wb30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU2_WB30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU3_WB30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU5_WB30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU1_WB30 (score:0)<br>
>> > >>> > Resource: cu_5<br>
>> > >>> > Enabled on: Redun_CU4_Wb30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU2_WB30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU3_WB30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU5_WB30 (score:0)<br>
>> > >>> > Enabled on: Redund_CU1_WB30 (score:0)<br>
>> > >>> > Ordering Constraints:<br>
>> > >>> > Colocation Constraints:<br>
>> > >>> > cu_3 with cu_2 (score:-INFINITY)<br>
>> > >>> > cu_4 with cu_2 (score:-INFINITY)<br>
>> > >>> > cu_4 with cu_3 (score:-INFINITY)<br>
>> > >>> > cu_5 with cu_2 (score:-INFINITY)<br>
>> > >>> > cu_5 with cu_3 (score:-INFINITY)<br>
>> > >>> > cu_5 with cu_4 (score:-INFINITY)<br>
>> > >>> ><br>
>> > >>> > -Thanks<br>
>> > >>> > Nikhil<br>
>> > >>><br>
>> > >>><br>
>> > >>><br>
>> > >>><br>
>> > >>><br>
>> > >>> _______________________________________________<br>
>> > >>> Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
>> > <mailto:<a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a>><br>
>> > >>> <a href="http://clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://clusterlabs.org/mailman/listinfo/users</a><br>
>> > <<a href="http://clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://clusterlabs.org/mailman/listinfo/users</a>><br>
>> > >>><br>
>> > >>> Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
>> > >>> Getting started:<br>
>> > ><a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a><br>
>> > <<a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf</a>><br>
>> > >>> Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
>> > >>><br>
>> > ><br>
>> > ><br>
>> > ><br>
>> > ><br>
>> ><br>
>> > Hi,<br>
>> ><br>
>> > use of utilization (the balanced strategy) has one caveat: resources are<br>
>> > not moved just because one node's utilization is lower, when<br>
>> > nodes have the same allocation score for the resource.<br>
>> > So, after a simultaneous outage of two nodes in a 5-node cluster,<br>
>> > it may turn out that one node runs two resources and two recovered<br>
>> > nodes run nothing.<br>
>> ><br>
>> > The original 'utilization' strategy only limits resource placement; it<br>
>> > is not considered when choosing a node for a resource.<br>
>> ><br>
>> > Vladislav<br>
>><br>
>><br>
<br>
<br>
<br>
<br>
</div></div></blockquote></div><br></div>