<div dir="ltr">Thanks Ken.<div>I will give it a shot.</div><div><br></div><div><a href="http://oss.clusterlabs.org/pipermail/pacemaker/2011-August/011271.html">http://oss.clusterlabs.org/pipermail/pacemaker/2011-August/011271.html</a><br></div><div>On this thread, if I interpret it correctly, his problem was solved when he swapped the anti-location constraint </div><div><br></div><div>From (mapping to my example)</div><div><span style="color:rgb(80,0,80);font-size:12.8px">cu_2 with cu_4 (score:-INFINITY)</span><br style="color:rgb(80,0,80);font-size:12.8px"><span style="color:rgb(80,0,80);font-size:12.8px">cu_3 with cu_4 (score:-INFINITY)</span><br style="color:rgb(80,0,80);font-size:12.8px"><span style="color:rgb(80,0,80);font-size:12.8px">cu_2 with cu_3 (score:-INFINITY)</span><br></div><div><span style="color:rgb(80,0,80);font-size:12.8px"><br></span></div><div><span style="color:rgb(80,0,80);font-size:12.8px">To</span></div><div><div><span style="color:rgb(80,0,80);font-size:12.8px">cu_2 with cu_4 (score:-INFINITY)</span><br style="color:rgb(80,0,80);font-size:12.8px"><span style="color:rgb(80,0,80);font-size:12.8px">cu_4 with cu_3 (score:-INFINITY)</span><br style="color:rgb(80,0,80);font-size:12.8px"><span style="color:rgb(80,0,80);font-size:12.8px">cu_3 with cu_2 (score:-INFINITY)</span><br></div></div><div><span style="color:rgb(80,0,80);font-size:12.8px"><br></span></div><div><span style="color:rgb(80,0,80);font-size:12.8px">Do you think that would make any difference? The way you explained it, sounds to me it might.</span></div><div><span style="color:rgb(80,0,80);font-size:12.8px"><br></span></div><div><span style="color:rgb(80,0,80);font-size:12.8px">-Regards</span></div><div><span style="color:rgb(80,0,80);font-size:12.8px">Nikhil</span></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Mon, Oct 17, 2016 at 11:36 PM, Ken Gaillot <span dir="ltr">&lt;<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>&gt;</span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><span class="">On 10/17/2016 09:55 AM, Nikhil Utane wrote:<br>
&gt; I see these prints.<br>
&gt;<br>
&gt; pengine:     info: rsc_merge_weights:cu_4: Rolling back scores from cu_3<br>
&gt; pengine:    debug: native_assign_node:Assigning Redun_CU4_Wb30 to cu_4<br>
&gt; pengine:     info: rsc_merge_weights:cu_3: Rolling back scores from cu_2<br>
&gt; pengine:    debug: native_assign_node:Assigning Redund_CU5_WB30 to cu_3<br>
&gt;<br>
&gt; Looks like rolling back the scores is causing the new decision to<br>
&gt; relocate the resources.<br>
&gt; Am I using the scores incorrectly?<br>
<br>
</span>No, I think this is expected.<br>
<br>
Your anti-colocation constraints place cu_2 and cu_3 relative to cu_4,<br>
so that means the cluster will place cu_4 first if possible, before<br>
deciding where the others should go. Similarly, cu_2 has a constraint<br>
relative to cu_3, so cu_3 gets placed next, and cu_2 is the one left out.<br>
<br>
The anti-colocation scores of -INFINITY outweigh the stickiness of 100.<br>
I&#39;m not sure whether setting stickiness to INFINITY would change<br>
anything; hopefully, it would stop cu_3 from moving, but cu_2 would<br>
still be stopped.<br>
<br>
I don&#39;t see a good way around this. The cluster has to place some<br>
resource first, in order to know not to place some other resource on the<br>
same node. I don&#39;t think there&#39;s a way to make them &quot;equal&quot;, because<br>
then none of them could be placed to begin with -- unless you went with<br>
utilization attributes, as someone else suggested, with<br>
placement-strategy=balanced:<br>
<br>
<a href="http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Pacemaker_Explained/index.html#idm140521708557280" rel="noreferrer" target="_blank">http://clusterlabs.org/doc/en-<wbr>US/Pacemaker/1.1-pcs/html-<wbr>single/Pacemaker_Explained/<wbr>index.html#idm140521708557280</a><br>
<span class=""><br>
&gt;<br>
&gt; [root@Redund_CU5_WB30 root]# pcs constraint<br>
&gt; Location Constraints:<br>
&gt;   Resource: cu_2<br>
&gt;     Enabled on: Redun_CU4_Wb30 (score:0)<br>
&gt;     Enabled on: Redund_CU5_WB30 (score:0)<br>
&gt;     Enabled on: Redund_CU3_WB30 (score:0)<br>
&gt;     Enabled on: Redund_CU1_WB30 (score:0)<br>
&gt;   Resource: cu_3<br>
&gt;     Enabled on: Redun_CU4_Wb30 (score:0)<br>
&gt;     Enabled on: Redund_CU5_WB30 (score:0)<br>
&gt;     Enabled on: Redund_CU3_WB30 (score:0)<br>
&gt;     Enabled on: Redund_CU1_WB30 (score:0)<br>
&gt;   Resource: cu_4<br>
&gt;     Enabled on: Redun_CU4_Wb30 (score:0)<br>
&gt;     Enabled on: Redund_CU5_WB30 (score:0)<br>
&gt;     Enabled on: Redund_CU3_WB30 (score:0)<br>
&gt;     Enabled on: Redund_CU1_WB30 (score:0)<br>
&gt; Ordering Constraints:<br>
&gt; Colocation Constraints:<br>
&gt;   cu_2 with cu_4 (score:-INFINITY)<br>
&gt;   cu_3 with cu_4 (score:-INFINITY)<br>
&gt;   cu_2 with cu_3 (score:-INFINITY)<br>
&gt;<br>
&gt;<br>
&gt; On Mon, Oct 17, 2016 at 8:16 PM, Nikhil Utane<br>
</span><span class="">&gt; &lt;<a href="mailto:nikhil.subscribed@gmail.com">nikhil.subscribed@gmail.com</a> &lt;mailto:<a href="mailto:nikhil.subscribed@gmail.com">nikhil.subscribed@<wbr>gmail.com</a>&gt;&gt; wrote:<br>
&gt;<br>
&gt;     This is driving me insane.<br>
&gt;<br>
&gt;     This is how the resources were started. Redund_CU1_WB30  was the DC<br>
&gt;     which I rebooted.<br>
&gt;      cu_4(ocf::redundancy:RedundancyRA):Started Redund_CU1_WB30<br>
&gt;      cu_2(ocf::redundancy:RedundancyRA):Started Redund_CU5_WB30<br>
</span>&gt;      cu_3(ocf::redundancy:RedundancyRA):Started Redun_CU4_Wb30<br>
<span class="">&gt;<br>
&gt;     Since the standby node was not UP, I was expecting resource cu_4 to<br>
&gt;     be waiting to be scheduled.<br>
&gt;     But then it re-arranged everything as below.<br>
&gt;      cu_4(ocf::redundancy:RedundancyRA):Started Redun_CU4_Wb30<br>
&gt;      cu_2(ocf::redundancy:RedundancyRA):Stopped<br>
</span>&gt;      cu_3(ocf::redundancy:RedundancyRA):Started Redund_CU5_WB30<br>
<span class="">&gt;<br>
&gt;     There is not much information available in the logs on new DC. It<br>
&gt;     just shows what it has decided to do but nothing to suggest why it<br>
&gt;     did it that way.<br>
&gt;<br>
&gt;     notice: Start   cu_4(Redun_CU4_Wb30)<br>
&gt;     notice: Stop    cu_2(Redund_CU5_WB30)<br>
</span>&gt;     notice: Move    cu_3(Started Redun_CU4_Wb30 -&gt; Redund_CU5_WB30)<br>
<span class="">&gt;<br>
&gt;     I have default stickiness set to 100 which is higher than any score<br>
&gt;     that I have configured.<br>
&gt;     I have migration_threshold set to 1. Should I bump that up instead?<br>
&gt;<br>
&gt;     -Thanks<br>
&gt;     Nikhil<br>
&gt;<br>
&gt;     On Sat, Oct 15, 2016 at 12:36 AM, Ken Gaillot<br>
</span><div><div class="h5">&gt;     &lt;<a href="mailto:kgaillot@redhat.com">kgaillot@redhat.com</a>&gt; wrote:<br>
&gt;<br>
&gt;         On 10/14/2016 06:56 AM, Nikhil Utane wrote:<br>
&gt;         &gt; Hi,<br>
&gt;         &gt;<br>
&gt;         &gt; Thank you for the responses so far.<br>
&gt;         &gt; I added reverse colocation as well. However seeing some other issue in<br>
&gt;         &gt; resource movement that I am analyzing.<br>
&gt;         &gt;<br>
&gt;         &gt; Thinking further on this, why doesn&#39;t &quot;a not with b&quot; imply<br>
&gt;         &gt; &quot;b not with a&quot;?<br>
&gt;         &gt; Coz wouldn&#39;t putting &quot;b with a&quot; violate &quot;a not with b&quot;?<br>
&gt;         &gt;<br>
&gt;         &gt; Can someone confirm that colocation is required to be configured both ways?<br>
&gt;<br>
&gt;         The anti-colocation should only be defined one-way. Otherwise,<br>
&gt;         you get a<br>
&gt;         dependency loop (as seen in logs you showed elsewhere).<br>
&gt;<br>
&gt;         The one-way constraint is enough to keep the resources apart.<br>
&gt;         However,<br>
&gt;         the question is whether the cluster might move resources around<br>
&gt;         unnecessarily.<br>
&gt;<br>
&gt;         For example, &quot;A not with B&quot; means that the cluster will place B<br>
&gt;         first,<br>
&gt;         then place A somewhere else. So, if B&#39;s node fails, can the cluster<br>
&gt;         decide that A&#39;s node is now the best place for B, and move A to<br>
&gt;         a free<br>
&gt;         node, rather than simply start B on the free node?<br>
&gt;<br>
&gt;         The cluster does take dependencies into account when placing a<br>
&gt;         resource,<br>
&gt;         so I would hope that wouldn&#39;t happen. But I&#39;m not sure. Having some<br>
&gt;         stickiness might help, so that A has some preference against moving.<br>
&gt;<br>
&gt;         &gt; -Thanks<br>
&gt;         &gt; Nikhil<br>
&gt;         &gt;<br>
&gt;         &gt; /<br>
&gt;         &gt; /<br>
&gt;         &gt;<br>
&gt;         &gt; On Fri, Oct 14, 2016 at 1:09 PM, Vladislav Bogdanov<br>
&gt;         &gt; &lt;<a href="mailto:bubble@hoster-ok.com">bubble@hoster-ok.com</a> &lt;mailto:<a href="mailto:bubble@hoster-ok.com">bubble@hoster-ok.com</a>&gt;<br>
</div></div><span class="">&gt;         &lt;mailto:<a href="mailto:bubble@hoster-ok.com">bubble@hoster-ok.com</a> &lt;mailto:<a href="mailto:bubble@hoster-ok.com">bubble@hoster-ok.com</a>&gt;&gt;<wbr>&gt; wrote:<br>
&gt;         &gt;<br>
&gt;         &gt;     On October 14, 2016 10:13:17 AM GMT+03:00, Ulrich Windl<br>
&gt;         &gt;     &lt;<a href="mailto:Ulrich.Windl@rz.uni-regensburg.de">Ulrich.Windl@rz.uni-regensburg.de</a>&gt; wrote:<br>
</span><span class="">&gt;         &gt;     &gt;&gt;&gt;&gt; Nikhil Utane &lt;<a href="mailto:nikhil.subscribed@gmail.com">nikhil.subscribed@gmail.com</a>&gt; schrieb am 13.10.2016 um<br>
&gt;         &gt;     &gt;16:43 in<br>
&gt;         &gt;     &gt;Nachricht<br>
&gt;         &gt;     &gt;&lt;CAGNWmJUbPucnBGXroHkHSbQ0LXovwsLFPkUPg1R8gJqRFqM9Dg@mail.gmail.com&gt;:<br>
<span class="">
&gt;         &gt;     &gt;&gt; Ulrich,<br>
&gt;         &gt;     &gt;&gt;<br>
&gt;         &gt;     &gt;&gt; I have 4 resources only (not 5, nodes are 5). So then I only need 6<br>
&gt;         &gt;     &gt;&gt; constraints, right?<br>
&gt;         &gt;     &gt;&gt;<br>
&gt;         &gt;     &gt;&gt;      [,1]   [,2]   [,3]   [,4]   [,5]  [,6]<br>
&gt;         &gt;     &gt;&gt; [1,] &quot;A&quot;  &quot;A&quot;  &quot;A&quot;    &quot;B&quot;   &quot;B&quot;    &quot;C&quot;<br>
&gt;         &gt;     &gt;&gt; [2,] &quot;B&quot;  &quot;C&quot;  &quot;D&quot;   &quot;C&quot;  &quot;D&quot;    &quot;D&quot;<br>
&gt;         &gt;     &gt;<br>
&gt;         &gt;     &gt;Sorry for my confusion. As Andrei Borzenkov said in<br>
&gt;         &gt;     &gt;&lt;CAA91j0W+epAHFLg9u6VX_X8LgFkf9Rp55g3nocY4oZNA9BbZ+g@mail.gmail.com&gt;<br>
<span class="">&gt;         &gt;     &gt;you probably have to add (A, B) _and_ (B, A)! Thinking about it, I<br>
&gt;         &gt;     &gt;wonder whether an easier solution would be using &quot;utilization&quot;: If<br>
&gt;         &gt;     &gt;every node has one token to give, and every resource needs one token, no<br>
&gt;         &gt;     &gt;two resources will run on one node. Sounds like an easier solution to<br>
&gt;         &gt;     &gt;me.<br>
&gt;         &gt;     &gt;<br>
&gt;         &gt;     &gt;Regards,<br>
&gt;         &gt;     &gt;Ulrich<br>
&gt;         &gt;     &gt;<br>
&gt;         &gt;     &gt;<br>
&gt;         &gt;     &gt;&gt;<br>
&gt;         &gt;     &gt;&gt; I understand that if I configure constraint of R1 with R2 score as<br>
&gt;         &gt;     &gt;&gt; -infinity, then the same applies for R2 with R1 score as -infinity<br>
&gt;         &gt;     &gt;(don&#39;t<br>
&gt;         &gt;     &gt;&gt; have to configure it explicitly).<br>
&gt;         &gt;     &gt;&gt; I am not having a problem of multiple resources getting schedule on<br>
&gt;         &gt;     &gt;the<br>
&gt;         &gt;     &gt;&gt; same node. Rather, one working resource is unnecessarily getting<br>
&gt;         &gt;     &gt;relocated.<br>
&gt;         &gt;     &gt;&gt;<br>
&gt;         &gt;     &gt;&gt; -Thanks<br>
&gt;         &gt;     &gt;&gt; Nikhil<br>
&gt;         &gt;     &gt;&gt;<br>
&gt;         &gt;     &gt;&gt;<br>
&gt;         &gt;     &gt;&gt; On Thu, Oct 13, 2016 at 7:45 PM, Ulrich Windl &lt;<br>
&gt;         &gt;     &gt;&gt; <a href="mailto:Ulrich.Windl@rz.uni-regensburg.de">Ulrich.Windl@rz.uni-regensburg.de</a>&gt; wrote:<br>
</span><span class="">
&gt;         &gt;     &gt;&gt;<br>
&gt;         &gt;     &gt;&gt;&gt; Hi!<br>
&gt;         &gt;     &gt;&gt;&gt;<br>
&gt;         &gt;     &gt;&gt;&gt; Don&#39;t you need 10 constraints, excluding every possible pair of your<br>
&gt;         &gt;     &gt;5<br>
&gt;         &gt;     &gt;&gt;&gt; resources (named A-E here), like in this table (produced with R):<br>
&gt;         &gt;     &gt;&gt;&gt;<br>
&gt;         &gt;     &gt;&gt;&gt;      [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]<br>
&gt;         &gt;     &gt;&gt;&gt; [1,] &quot;A&quot;  &quot;A&quot;  &quot;A&quot;  &quot;A&quot;  &quot;B&quot;  &quot;B&quot;  &quot;B&quot;  &quot;C&quot;  &quot;C&quot;  &quot;D&quot;<br>
&gt;         &gt;     &gt;&gt;&gt; [2,] &quot;B&quot;  &quot;C&quot;  &quot;D&quot;  &quot;E&quot;  &quot;C&quot;  &quot;D&quot;  &quot;E&quot;  &quot;D&quot;  &quot;E&quot;  &quot;E&quot;<br>
&gt;         &gt;     &gt;&gt;&gt;<br>
&gt;         &gt;     &gt;&gt;&gt; Ulrich<br>
&gt;         &gt;     &gt;&gt;&gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;&gt;&gt; Nikhil Utane &lt;<a href="mailto:nikhil.subscribed@gmail.com">nikhil.subscribed@gmail.com</a>&gt; schrieb am 13.10.2016<br>
<span class="">
&gt;         &gt;     &gt;um<br>
&gt;         &gt;     &gt;&gt;&gt; 15:59 in<br>
&gt;         &gt;     &gt;&gt;&gt; Nachricht<br>
&gt;         &gt;     &gt;&gt;&gt;<br>
&gt;         &gt;     &gt;&lt;CAGNWmJW0CWMr3bvR3L9xZCAcJUzyczQbZEzUzpaJxi+Pn7Oj_A@mail.gmail.com&gt;:<br>
<div class="HOEnZb"><div class="h5">&gt;         &gt;     &gt;&gt;&gt; &gt; Hi,<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; I have 5 nodes and 4 resources configured.<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; I have configured constraint such that no two resources can be<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; co-located.<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; I brought down a node (which happened to be DC). I was expecting the<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; resource on the failed node would be migrated to the 5th waiting node<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; (that is not running any resource).<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; However what happened was the failed node resource was started on another<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; active node (after stopping its existing resource) and that node&#39;s<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; resource was moved to the waiting node.<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; What could I be doing wrong?<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; &lt;nvpair id=&quot;cib-bootstrap-options-have-watchdog&quot; value=&quot;true&quot;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; name=&quot;have-watchdog&quot;/&gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; &lt;nvpair id=&quot;cib-bootstrap-options-dc-version&quot; value=&quot;1.1.14-5a6cdd1&quot;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; name=&quot;dc-version&quot;/&gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; &lt;nvpair id=&quot;cib-bootstrap-options-cluster-infrastructure&quot; value=&quot;corosync&quot;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; name=&quot;cluster-infrastructure&quot;/&gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; &lt;nvpair id=&quot;cib-bootstrap-options-stonith-enabled&quot; value=&quot;false&quot;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; name=&quot;stonith-enabled&quot;/&gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; &lt;nvpair id=&quot;cib-bootstrap-options-no-quorum-policy&quot; value=&quot;ignore&quot;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; name=&quot;no-quorum-policy&quot;/&gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; &lt;nvpair id=&quot;cib-bootstrap-options-default-action-timeout&quot; value=&quot;240&quot;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; name=&quot;default-action-timeout&quot;/&gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; &lt;nvpair id=&quot;cib-bootstrap-options-symmetric-cluster&quot; value=&quot;false&quot;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; name=&quot;symmetric-cluster&quot;/&gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; # pcs constraint<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; Location Constraints:<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;   Resource: cu_2<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redun_CU4_Wb30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU2_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU3_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU5_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU1_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;   Resource: cu_3<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redun_CU4_Wb30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU2_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU3_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU5_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU1_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;   Resource: cu_4<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redun_CU4_Wb30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU2_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU3_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU5_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU1_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;   Resource: cu_5<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redun_CU4_Wb30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU2_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU3_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU5_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;     Enabled on: Redund_CU1_WB30 (score:0)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; Ordering Constraints:<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; Colocation Constraints:<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;   cu_3 with cu_2 (score:-INFINITY)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;   cu_4 with cu_2 (score:-INFINITY)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;   cu_4 with cu_3 (score:-INFINITY)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;   cu_5 with cu_2 (score:-INFINITY)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;   cu_5 with cu_3 (score:-INFINITY)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;   cu_5 with cu_4 (score:-INFINITY)<br>
&gt;         &gt;     &gt;&gt;&gt; &gt;<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; -Thanks<br>
&gt;         &gt;     &gt;&gt;&gt; &gt; Nikhil<br>
&gt;         &gt;     &gt;&gt;&gt;<br>
&gt;         &gt;     &gt;&gt;&gt;<br>
&gt;         &gt;     &gt;&gt;&gt;<br>
&gt;         &gt;<br>
</div></div><div class="HOEnZb"><div class="h5">&gt;         &gt;     Hi,<br>
&gt;         &gt;<br>
&gt;         &gt;     use of utilization (balanced strategy) has one caveat: resources are<br>
&gt;         &gt;     not moved just because the utilization of one node is lower, when<br>
&gt;         &gt;     nodes have the same allocation score for the resource.<br>
&gt;         &gt;     So, after the simultaneous outage of two nodes in a 5-node cluster,<br>
&gt;         &gt;     it may appear that one node runs two resources and two recovered<br>
&gt;         &gt;     nodes run nothing.<br>
&gt;         &gt;<br>
&gt;         &gt;     The original &#39;utilization&#39; strategy only limits resource placement; it<br>
&gt;         &gt;     is not considered when choosing a node for a resource.<br>
&gt;         &gt;<br>
&gt;         &gt;     Vladislav<br>
</div></div></blockquote></div><br></div>
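<div dir="ltr"><br>P.S. For concreteness, here is a minimal sketch of the pcs commands I have in mind for the swap. I am assuming the &quot;pcs constraint colocation add &lt;source&gt; with &lt;target&gt; &lt;score&gt;&quot; form and the auto-generated constraint IDs reported by &quot;pcs constraint --full&quot;; the IDs on my cluster may differ, and this is untested:<br><br>
# remove the existing anti-colocation constraints (IDs as shown by &quot;pcs constraint --full&quot;)<br>
pcs constraint remove colocation-cu_2-cu_4--INFINITY<br>
pcs constraint remove colocation-cu_3-cu_4--INFINITY<br>
pcs constraint remove colocation-cu_2-cu_3--INFINITY<br>
# re-add them in the swapped order from the 2011 thread<br>
pcs constraint colocation add cu_2 with cu_4 -INFINITY<br>
pcs constraint colocation add cu_4 with cu_3 -INFINITY<br>
pcs constraint colocation add cu_3 with cu_2 -INFINITY<br><br>
And if I go with the utilization approach you pointed to, the below is what I understood (the attribute name &quot;capacity&quot; is just an example I picked; one unit per node, one unit per resource, with placement-strategy=balanced):<br><br>
pcs property set placement-strategy=balanced<br>
# each node can hold exactly one resource<br>
pcs node utilization Redund_CU1_WB30 capacity=1<br>
pcs node utilization Redund_CU2_WB30 capacity=1<br>
pcs node utilization Redund_CU3_WB30 capacity=1<br>
pcs node utilization Redun_CU4_Wb30 capacity=1<br>
pcs node utilization Redund_CU5_WB30 capacity=1<br>
# each resource consumes one unit (likewise for any remaining cu_* resources)<br>
pcs resource utilization cu_2 capacity=1<br>
pcs resource utilization cu_3 capacity=1<br>
pcs resource utilization cu_4 capacity=1<br>
</div>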