<html><head></head><body><div>On Thu, 2023-03-02 at 14:30 +0100, Ulrich Windl wrote:</div><blockquote type="cite" style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex"><blockquote type="cite" style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex"><blockquote type="cite" style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex"><blockquote type="cite" style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex"><div>Gerald Vogt <<a href="mailto:vogt@spamcop.net">vogt@spamcop.net</a>> wrote on 02.03.2023 at 08:41 in message<br></div></blockquote></blockquote></blockquote><div><<a href="mailto:624d0b70-5983-4d21-6777-55be91688bbe@spamcop.net">624d0b70-5983-4d21-6777-55be91688bbe@spamcop.net</a>>:<br></div><blockquote type="cite" style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex"><div>Hi,<br></div><div><br></div><div>I am setting up a mail relay cluster whose main purpose is to maintain <br></div><div>the service ips via IPaddr2 and move them between cluster nodes when <br></div><div>necessary.<br></div><div><br></div><div>The service ips should only be active on nodes which are running all <br></div><div>necessary mail (systemd) services.<br></div><div><br></div><div>So I have set up a resource for each of those services, put them into a <br></div><div>group in the order they should start, and cloned the group as they are <br></div><div>normally supposed to run on the nodes at all times.<br></div><div><br></div><div>Then I added order constraints<br></div><div> start mail-services-clone then start mail1-ip<br></div><div> start mail-services-clone then start mail2-ip<br></div><div><br></div><div>and colocations to prefer running the ips on different nodes, but only <br></div><div>with the clone running:<br></div><div><br></div><div> colocation add mail2-ip with mail1-ip -1000<br></div><div> colocation ip1 with mail-services-clone<br></div><div> colocation ip2 with mail-services-clone<br></div><div><br></div><div>as well as location constraints to prefer running the first ip on the <br></div><div>first node and the second on the second:<br></div><div><br></div><div> location ip1 prefers ha1=2000<br></div><div> location ip2 prefers ha2=2000<br></div><div><br></div><div>Now if I stop pacemaker on one of those nodes, e.g. on node ha2, it's <br></div><div>fine. ip2 will be moved immediately to ha3. Good.<br></div><div><br></div><div>However, if pacemaker on ha2 starts up again, it will immediately remove <br></div><div>ip2 from ha3 and keep it offline while the services in the group are <br></div><div>starting on ha2. As the services unfortunately take some time to come <br></div><div>up, ip2 is offline for more than a minute.<br></div></blockquote><div><br></div><div>That is because you wanted "ip2 prefers ha2=2000", so if the cluster _can_ run it there, then it will, even if it's running elsewhere.<br></div><div><br></div></blockquote><div><br></div><div>Pacemaker sometimes places actions in the transition in a suboptimal order (from the human's point of view).</div><div>So instead of</div><div><br></div><div>start group on nodeB</div><div>stop vip on nodeA</div><div>start vip on nodeB</div><div><br></div><div>it runs</div><div><br></div><div>stop vip on nodeA</div><div>start group on nodeB</div><div>start vip on nodeB</div><div><br></div><div>So, if the start of the group takes a lot of time, the vip is not available on any node during that start.</div><div><br></div><div>One more technique to minimize the time during which the vip is stopped would be to add resource migration support to IPaddr2.</div><div>That could help, but I'm not sure.</div><div>At least I know for sure that Pacemaker behaves differently with migratable resources and MAY decide to use the first order I provided.</div><div><br></div>
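<div>For illustration only, a rough crmsh-style sketch of what a migratable vip could look like. It assumes an IPaddr2 variant that actually implements the optional migrate_to/migrate_from actions (the stock agent does not), and the address and netmask are placeholders:</div><div><br></div><div> # hypothetical: requires an agent that implements migrate_to/migrate_from;<br></div><div> # ip and cidr_netmask below are placeholders<br></div><div> primitive mail2-ip ocf:heartbeat:IPaddr2 \<br></div><div>   params ip=192.0.2.102 cidr_netmask=24 \<br></div><div>   op monitor interval=10s \<br></div><div>   meta allow-migrate=true<br></div><div><br></div><div>Pacemaker only attempts a live migration when allow-migrate is set on the resource, and the agent really has to implement migrate_to/migrate_from for it to work; even then, whether it actually gives the "start group first, then move the vip" ordering here would need testing.</div><div><br></div>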
<blockquote type="cite" style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex"><div>Maybe explain what you really want.<br></div><div><br></div><blockquote type="cite" style="margin:0 0 0 .8ex; border-left:2px #729fcf solid;padding-left:1ex"><div><br></div><div>It seems the colocations with the clone are already satisfied once the <br></div><div>clone group begins to start services, and thus allow the ip to be removed <br></div><div>from the current node.<br></div><div><br></div><div>I was wondering: how can I define the colocation so that it is satisfied <br></div><div>only once all services in the clone have been started, and not as soon as <br></div><div>the first service in the clone is starting?<br></div><div><br></div><div>Thanks,<br></div><div><br></div><div>Gerald<br></div><div><br></div></blockquote><div><br></div><div>_______________________________________________<br></div><div>Manage your subscription:<br></div><div><a href="https://lists.clusterlabs.org/mailman/listinfo/users">https://lists.clusterlabs.org/mailman/listinfo/users</a><br></div><div><br></div><div>ClusterLabs home: <a href="https://www.clusterlabs.org/">https://www.clusterlabs.org/</a><br></div></blockquote><div><br></div></body></html>