<div dir="ltr"><div>Hi, Strahil.</div><div><br></div><div>Based on the constraints documented in the article you're following (RH KB solution 5423971), I think I see what's happening.</div><div><br></div><div>The SAPHanaTopology resource requires the appropriate nfs-active attribute in order to run. That means that if the nfs-active attribute is set to false, the SAPHanaTopology resource must stop.</div><div><br></div><div>However, there's no rule saying SAPHanaTopology must finish stopping before the nfs-active attribute resource stops. In fact, it's quite the opposite: the SAPHanaTopology resource stops only after the nfs-active resource stops.</div><div><br></div><div>At the same time, the NFS resources are allowed to stop after the nfs-active attribute resource has stopped. So the NFS resources are stopping while the SAPHana* resources are likely still active.</div><div><br></div><div>Try something like this:</div><div> # pcs constraint order hana_nfs1_active-clone then SAPHanaTopology_&lt;SID&gt;_&lt;instance_num&gt;-clone kind=Optional<br></div><div> # pcs constraint order hana_nfs2_active-clone then SAPHanaTopology_&lt;SID&gt;_&lt;instance_num&gt;-clone kind=Optional<br></div><div><br></div><div>This says "if both hana_nfs1_active and SAPHanaTopology are scheduled to start, then make hana_nfs1_active start first. If both are scheduled to stop, then make SAPHanaTopology stop first."</div><div><br></div><div>"kind=Optional" means there's no order dependency unless both resources are already going to be scheduled for the action. 
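</div><div><br></div><div>Once you've added them, you can double-check what got created and back out if needed. The constraint IDs are auto-generated, so substitute whatever IDs the listing actually shows:</div><div> # pcs constraint --full<br></div><div> # pcs constraint remove &lt;constraint_id&gt;<br></div><div><br></div><div>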
I'm using kind=Optional here even though kind=Mandatory (the default) would make sense, because IIRC there were some unexpected interactions with ordering constraints for clones, where events on one node had unwanted effects on other nodes.</div><div><br></div><div>I'm not able to test right now since setting up an environment for this even with dummy resources is non-trivial -- but you're welcome to try this both with and without kind=Optional if you'd like.</div><div><br></div><div>Please let us know how this goes.<br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Fri, Apr 2, 2021 at 2:20 AM Strahil Nikolov <<a href="mailto:hunter86_bg@yahoo.com">hunter86_bg@yahoo.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">Hello All,<div id="gmail-m_7372687452019563461yMail_cursorElementTracker_1617354778410"><br></div><div id="gmail-m_7372687452019563461yMail_cursorElementTracker_1617354778660">I am testing the newly built HANA (Scale-out) cluster and it seems that:</div><div id="gmail-m_7372687452019563461yMail_cursorElementTracker_1617354821725">Neither SAPHanaController, nor SAPHanaTopology are stopping the HANA when I put the nodes (same DC = same HANA) in standby. 
This of course leads to a situation where the NFS cannot be umounted and despite the stop timeout - leads to fencing(on-fail=fence).</div><div id="gmail-m_7372687452019563461yMail_cursorElementTracker_1617354872797"><br></div><div id="gmail-m_7372687452019563461yMail_cursorElementTracker_1617354873002">I thought that the Controller resource agent is stopping the HANA and the slave role should not be 'stopped' before that .</div><div id="gmail-m_7372687452019563461yMail_cursorElementTracker_1617355125304"><br></div><div id="gmail-m_7372687452019563461yMail_cursorElementTracker_1617355125510">Maybe my expectations are wrong ?</div><div id="gmail-m_7372687452019563461yMail_cursorElementTracker_1617355148982"><br></div><div id="gmail-m_7372687452019563461yMail_cursorElementTracker_1617355149213">Best Regards,</div><div id="gmail-m_7372687452019563461yMail_cursorElementTracker_1617355153489">Strahil Nikolov</div><div id="gmail-m_7372687452019563461yMail_cursorElementTracker_1617354902362"><br></div>_______________________________________________<br>
Manage your subscription:<br>
<a href="https://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">https://lists.clusterlabs.org/mailman/listinfo/users</a><br>
<br>
ClusterLabs home: <a href="https://www.clusterlabs.org/" rel="noreferrer" target="_blank">https://www.clusterlabs.org/</a><br>
</blockquote></div><br clear="all"><br>-- <br><div dir="ltr" class="gmail_signature"><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div dir="ltr"><div><div>Regards,<br><br></div>Reid Wahl, RHCA<br></div><div>Senior Software Maintenance Engineer, Red Hat<br></div>CEE - Platform Support Delivery - ClusterHA</div></div></div></div></div></div></div></div></div></div></div></div></div></div>