<div dir="auto">I am testing aşk scenarios because I will use real machines with pacemaker. Scenarios;</div><div dir="auto"><br></div><div dir="auto">1- </div><div dir="auto">node1 master</div><div dir="auto">node2 slave </div><div dir="auto">Shutting node1, then node2 become master</div><div dir="auto">Successfully</div><div dir="auto"><br></div><div dir="auto"><div dir="auto">2-</div><div dir="auto">node1 slave</div><div dir="auto">node2 master </div><div dir="auto">Shutting node2, then node1 become master</div><div dir="auto">Successfully</div></div><div dir="auto"><br></div><div><div dir="auto">3-</div><div dir="auto">node1 slave</div><div dir="auto">node2 slave </div><div dir="auto">One node become master after 60s</div><div dir="auto">Successfully</div><div class="gmail_quote"><div dir="ltr" class="gmail_attr"><br></div><div dir="ltr" class="gmail_attr"><div dir="auto">4-</div><div dir="auto">node1 master</div><div dir="auto">node2 master </div><div dir="auto">First machine fail, and not fix unlike send command cleanup </div><div dir="auto">Fail</div></div><div dir="ltr" class="gmail_attr"><br></div><div dir="ltr" class="gmail_attr">I haven’t got physical fencing device. But all machines must online for redundancy. So I guess we don’t use fencing. Because servers havent got connection for remote help and internet. They must fix their:)</div><div dir="ltr" class="gmail_attr"><br></div><div dir="ltr" class="gmail_attr"><br></div><div dir="ltr" class="gmail_attr"><br></div><div dir="ltr" class="gmail_attr">On 21 Feb 2021 Sun at 12:14 damiano giuliani <<a href="mailto:damianogiuliani87@gmail.com">damianogiuliani87@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto">My question is:<div dir="auto">Why you are pausing one VM?there is any specific scope in that?you should never have 2 master resources, pausing one vm could make unexpected behaviours.</div><div dir="auto">If you are testing failovers or simulated faults you must configure a fencing mechanism.</div><div dir="auto">Dont expect your cluster is working properly without it.</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Sun, 21 Feb 2021, 07:29 İsmet BALAT, <<a href="mailto:bcalbatros@gmail.com" target="_blank">bcalbatros@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto">Sorry, I am in +3utc and was sleeping. I will try first fix node, then start cluster. Thank you </div><div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On 21 Feb 2021 Sun at 00:00 damiano giuliani <<a href="mailto:damianogiuliani87@gmail.com" rel="noreferrer" target="_blank">damianogiuliani87@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex"><div dir="auto"><div>resources configured in a master/slave mode</div><div dir="auto">If you got 2 masters something is not working right. 
On Sat, 20 Feb 2021, 21:47 İsmet BALAT <bcalbatros@gmail.com> wrote:

I am not using fencing. If I disable pacemaker, how does the node join the cluster (for the first example in the video - master/slave switching)? So I need a check script for fault states :(

And thank you for the reply.

On 20 Feb 2021 Sat at 23:40 damiano giuliani <damianogiuliani87@gmail.com> wrote:

Hi,

Have you correctly configured a working fencing mechanism? Without one you can't rely on a safe and consistent environment.
My suggestion is to disable the autostart services (and so the automatic join into the cluster) on both nodes. If there is a fault, you have to investigate before you rejoin the old faulty master node.
As far as I know, Pacemaker (and PAF, if you are using it) doesn't support auto-healing of the old master, so you have to resync or pg_rewind every time there is a fault.
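A minimal sketch of that resync step, assuming a PostgreSQL 12 data directory at /var/lib/pgsql/12/data and the surviving master reachable as node2 (both paths and names are assumptions; pg_rewind also requires wal_log_hints or data checksums to have been enabled):

    # on the failed ex-master, with PostgreSQL stopped
    pg_rewind --target-pgdata=/var/lib/pgsql/12/data \
              --source-server='host=node2 port=5432 user=postgres dbname=postgres'

    # then clear the failure history so Pacemaker retries the resource
    pcs resource cleanup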
On Sat, 20 Feb 2021, 19:03 İsmet BALAT <bcalbatros@gmail.com> wrote:

I am using Pacemaker with CentOS 8 and PostgreSQL 12. Failover between master and slave states runs successfully. But if both nodes become masters, Pacemaker can't repair the cluster unless I send the command 'pcs resource cleanup', whereas I set 60 s in the resource config. How can I fix it?

StackOverflow link: https://stackoverflow.com/questions/66292304/pacemaker-postgresql-master-master-state

Thanks

İsmet BALAT
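If the 60 s setting mentioned above is meant to make the cluster retry on its own, the relevant knob is likely the failure-timeout meta attribute, which expires old failures the way a manual cleanup does. A sketch, assuming the resource is named pgsqld (substitute your own resource id):

    # forget this resource's failures after 60 s instead of waiting
    # for a manual 'pcs resource cleanup'
    pcs resource meta pgsqld failure-timeout=60s

    # failures are only expired when the cluster rechecks its state,
    # so keep the recheck interval at or below the timeout
    pcs property set cluster-recheck-interval=60s

Note that failure-timeout cannot demote a second master by itself; a dual-master state still points at a missing fencing mechanism.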
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/