<div dir="ltr"><div>Klaus,</div><div><br></div><div>yes, these constraints were defined by pcs after manual move (pcs resource move) and help about this action is clear:<br><span style="font-family:monospace"><br>Usage: pcs resource move...<br> move <resource id> [destination node] [--master] [lifetime=<lifetime>]<br> [--wait[=n]]<br> Move the resource off the node it is currently running on by creating<br> a -INFINITY location constraint to ban the node. If destination node is<br> specified the resource will be moved to that node by creating<br> an INFINITY location constraint to prefer the destination node. If<br> --master is used the scope of the command is limited to the master role<br> and you must use the promotable clone id (instead of the resource id).<br><br> If lifetime is specified then the constraint will expire after that<br> time, otherwise it defaults to infinity and the constraint can be<br> cleared manually with 'pcs resource clear' or 'pcs constraint delete'.<br> Lifetime is expected to be specified as ISO 8601 duration (see<br> <a href="https://en.wikipedia.org/wiki/ISO_8601#Durations">https://en.wikipedia.org/wiki/ISO_8601#Durations</a>).<br><br> If --wait is specified, pcs will wait up to 'n' seconds for the<br> resource to move and then return 0 on success or 1 on error. If 'n' is<br> not specified it defaults to 60 minutes.<br><br> If you want the resource to preferably avoid running on some nodes but<br> be able to failover to them use 'pcs constraint location avoids'.</span></div><div><span style="font-family:monospace"><br></span></div><div><span class="gmail-HwtZe" lang="en"><span class="gmail-jCAhz gmail-ChMk0b"><span class="gmail-ryNqvb">It wasn't obvious, that move works just like constraint definition) </span></span></span><span class="gmail-HwtZe" lang="en"><span class="gmail-jCAhz gmail-ChMk0b"><span class="gmail-ryNqvb">I should have read the help carefully.<br><br>Thank you for your help!</span></span></span><span class="gmail-ZSCsVd"></span><div class="gmail-OvtS8d"><br></div></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">вт, 28 мая 2024 г. в 16:30, Klaus Wenninger <<a href="mailto:kwenning@redhat.com">kwenning@redhat.com</a>>:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div dir="ltr"><br></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, May 28, 2024 at 12:34 PM Александр Руденко <<a href="mailto:a.rudikk@gmail.com" target="_blank">a.rudikk@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr"><div>Andrei, thank you!<br><br></div><div>I tried to find node's scores and have found location constraints for these 3 resources:</div><div><br></div><div>pcs constraint<br>Location Constraints:<br> Resource: fsmt-28085F00<br> Enabled on:<br> Node: vdc16 (score:INFINITY) (role:Started)<br> Resource: fsmt-41CC55C0<br> Enabled on:<br> Node: vdc16 (score:INFINITY) (role:Started)<br> Resource: fsmt-A7C0E2A0<br> Enabled on:<br> Node: vdc16 (score:INFINITY) (role:Started)</div><div><br></div><div>but, I can't understand how these constraints were set. 
On Tue, May 28, 2024 at 16:30, Klaus Wenninger <kwenning@redhat.com> wrote:
>
> On Tue, May 28, 2024 at 12:34 PM Александр Руденко <a.rudikk@gmail.com> wrote:
>> Andrei, thank you!
>>
>> I tried to find the nodes' scores and found location constraints for
>> these 3 resources:
>>
>> pcs constraint
>> Location Constraints:
>>   Resource: fsmt-28085F00
>>     Enabled on:
>>       Node: vdc16 (score:INFINITY) (role:Started)
>>   Resource: fsmt-41CC55C0
>>     Enabled on:
>>       Node: vdc16 (score:INFINITY) (role:Started)
>>   Resource: fsmt-A7C0E2A0
>>     Enabled on:
>>       Node: vdc16 (score:INFINITY) (role:Started)
>>
>> But I can't understand how these constraints were set. Can they be
>> defined by pacemaker under some conditions, or only through manual
>> configuration?
>
> Interesting: I didn't have that mail yet when I answered your previous
> one. Anyway, the constraints are probably leftovers from deliberately
> moving resources from one node to another earlier using pcs commands.
> IIRC there is meanwhile a way for pcs to remove them automatically.
>
> Klaus
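For reference, the leftovers Klaus mentions can be listed together with
their ids and removed by hand. A minimal sketch; the cli-prefer-* id
shown is the form pcs typically generates for a move, so check the
actual output first:

    # Show all constraints together with their ids:
    pcs constraint --full

    # Delete one leftover constraint by the id printed above, e.g.:
    pcs constraint delete cli-prefer-fsmt-28085F00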
>> BTW, how can I see a node's score?
>>
>> On Tue, May 28, 2024 at 11:59, Andrei Borzenkov <arvidjaar@gmail.com> wrote:
>>> On Tue, May 28, 2024 at 11:39 AM Александр Руденко <a.rudikk@gmail.com> wrote:
>>>>
>>>> Hi!
>>>>
>>>> I can't understand this strange behavior; please help me.
>>>>
>>>> I have 3 nodes in my cluster, 4 vCPU/8 GB RAM each, and about 70
>>>> groups with 2 resources in each group. The first resource is our
>>>> custom resource which configures a Linux VRF, and the second is a
>>>> systemd unit. Everything works fine.
>>>>
>>>> We have the following defaults:
>>>>
>>>> pcs resource defaults
>>>> Meta Attrs: rsc_defaults-meta_attributes
>>>>   resource-stickiness=100
>>>>
>>>> When I shut down the pacemaker service on NODE1, all the resources
>>>> move to NODE2 and NODE3, which is okay. But when I start the
>>>> pacemaker service on NODE1 again, 3 of the 70 groups move back to
>>>> NODE1, while I expect that no resource will be moved back to NODE1.
>>>>
>>>> I tried to set resource-stickiness=100 explicitly for these 3
>>>> groups, but it didn't help:
>>>>
>>>> pcs resource config fsmt-41CC55C0
>>>> Group: fsmt-41CC55C0
>>>>   Meta Attrs: resource-stickiness=100
>>>>   ...
>>>>
>>>> Why are these 3 resource groups moving back?
>>>
>>> Because the NODE1 score is higher than the NODE2 score + 100. E.g.
>>> the NODE1 score may be infinity.
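To close the loop on the "how can I see the node's score?" question:
one way to dump the allocation scores Andrei refers to is crm_simulate
against the live CIB:

    # Show the score of every node for every resource (run on any node):
    crm_simulate --live-check --show-scores    # short form: crm_simulate -sL

With resource-stickiness=100 a group stays put only while its current
node's score plus 100 beats every other node's score; a score:INFINITY
location constraint like the ones above always wins, which is why
setting per-group stickiness to 100 could not help.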
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/