<div dir="ltr"><div><div><div><div>Hi Ken,<br>Indeed the migration-threshold was the problem :-(<br></div><br></div>BTW, for a master-slave resource, is it possible to have different migration-thresholds?<br></div>I.e. I'd like the slave to be restarted where it failed, but the master to be migrated to the</div><div>other node right away (by promoting the slave there).</div><div><br></div><div>I've tried configuring something like this:</div><div><br></div><div>[root@test-236 ~]# pcs resource show test-ha<br> Master: test-ha<br> Meta Attrs: master-node-max=1 clone-max=2 notify=true master-max=1 clone-node-max=1 requires=nothing migration-threshold=1 <br> Resource: test (class=ocf provider=heartbeat type=test)<br> Meta Attrs: migration-threshold=INFINITY <br> Operations: start interval=0s on-fail=restart timeout=120s (test-start-interval-0s)<br> monitor interval=10s on-fail=restart timeout=60s (test-monitor-interval-10s)<br> monitor interval=11s on-fail=restart role=Master timeout=60s (test-monitor-interval-11s)<br> promote interval=0s on-fail=restart timeout=60s (test-promote-interval-0s)<br> demote interval=0s on-fail=stop timeout=60s (test-demote-interval-0s)<br> stop interval=0s on-fail=block timeout=60s (test-stop-interval-0s)<br> notify interval=0s timeout=60s (test-notify-interval-0s)<br>[root@test-236 ~]#</div><div><br></div><div>but it does not seem to help, as both master and slave are always restarted on the same node</div><div>due to the test resource's migration-threshold being set to INFINITY.</div><div><br></div><div>Thank you in advance.</div><div>Regards,</div><div>Paolo<br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, Oct 3, 2017 at 7:12 AM, Ken Gaillot <span dir="ltr"><<a href="mailto:kgaillot@redhat.com" target="_blank">kgaillot@redhat.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div class="HOEnZb"><div class="h5">On Mon, 2017-10-02 at 12:32 -0700,
Paolo Zarpellon wrote:<br>
> Hi,<br>
> on a basic 2-node cluster, I have a master-slave resource where<br>
> master runs on a node and slave on the other one. If I kill the slave<br>
> resource, the resource status goes to "stopped".<br>
> Similarly, if I kill the master resource, the slave one is<br>
> promoted to master but the failed one does not restart as slave.<br>
> Is there a way to restart failing resources on the same node they<br>
> were running?<br>
> Thank you in advance.<br>
> Regards,<br>
> Paolo<br>
<br>
</div></div>Restarting on the same node is the default behavior -- something must<br>
be blocking it. For example, check your migration-threshold (if<br>
restarting fails this many times, it has nowhere to go and will stop).<br>
<br>
______________________________<wbr>_________________<br>
Users mailing list: <a href="mailto:Users@clusterlabs.org">Users@clusterlabs.org</a><br>
<a href="http://lists.clusterlabs.org/mailman/listinfo/users" rel="noreferrer" target="_blank">http://lists.clusterlabs.org/<wbr>mailman/listinfo/users</a><br>
<br>
Project Home: <a href="http://www.clusterlabs.org" rel="noreferrer" target="_blank">http://www.clusterlabs.org</a><br>
Getting started: <a href="http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf" rel="noreferrer" target="_blank">http://www.clusterlabs.org/<wbr>doc/Cluster_from_Scratch.pdf</a><br>
Bugs: <a href="http://bugs.clusterlabs.org" rel="noreferrer" target="_blank">http://bugs.clusterlabs.org</a><br>
</blockquote></div><br></div>
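P.S. In case it helps: since a meta-attribute set directly on a primitive takes precedence over one inherited from the enclosing master, the migration-threshold=INFINITY on the test primitive seems to mask the master's migration-threshold=1. An untested sketch of what I may try next, clearing the primitive's override so the master's value is inherited (resource names taken from the output above):

```shell
# Clear the primitive's own migration-threshold; with no value of its own,
# the primitive inherits migration-threshold=1 from the master wrapper.
pcs resource update test meta migration-threshold=

# Reset any failcounts accumulated during the earlier kill tests,
# so the cleared threshold starts from a clean slate.
pcs resource cleanup test-ha
```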