<div dir="ltr" data-setdir="false">Hi Derek,</div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">have you run a simulation via crm_simulate before that ? Usually it indicates what will happen ,when you remove the maintenance.</div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">What comes first to my mind is:</div><div dir="ltr" data-setdir="false">1. Are you abe to do a rolling upgrade?</div><div dir="ltr" data-setdir="false">2. When you remove the maintenance , do you have a postgres DB in master mode ? Is it on the same node it was before the maintenance ?</div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">I have the feeling that you are starting the postgres afterwards, but there is no DB in master mode.</div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">3. Have you checked the logs about any indication ? Usually lrmd indicates local (for the node ) resource issue, crmd - global one.</div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">Keep in mind that the current DC keeps info for all nodes - so you should start from there.</div><div dir="ltr" data-setdir="false"><br></div><div dir="ltr" data-setdir="false">Best Regards,</div><div dir="ltr" data-setdir="false">Strahil Nikolov</div><div><br></div>
Best Regards,
Strahil Nikolov
On Tuesday, 4 February 2020 at 16:53:49 GMT+2, Derek Viljoen <derekv@infinite.io> wrote:
We have a three-node Postgres cluster running on Ubuntu 14.04, currently at Postgres 9.5 with Corosync 2.4.2 and Pacemaker 1.1.18.

I'm trying to automate upgrading the database to 11.4. (Our product is a network appliance, so the upgrade needs to be automated for our customers.)

I first put the cluster into maintenance mode, perform the upgrade, update the resource paths in the crm config to point to the new DB instance, and restore the DB from the old version (required by Postgres for major version upgrades). At the end of all these steps everything looks good.

But when I turn off maintenance mode, all of my DB nodes suddenly go down and all three appear to be in slave mode, with no master. If I wait a few minutes, node 2 takes over as master, but it has an empty database, apparently because it wasn't able to replicate the restored DB from the original master yet. Can anyone tell me what is causing this?

Derek Viljoen
derekv@infinite.io
_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/
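For reference, the sequence described above corresponds roughly to the sketch below; the resource name "pgsqld", the agent parameters, and the paths are illustrative assumptions, not details from the post:

# Freeze the cluster so Pacemaker leaves all resources alone during the upgrade
crm_attribute --name maintenance-mode --update true

# Upgrade the database on the current master (Debian/Ubuntu wrapper around pg_upgrade)
pg_upgradecluster 9.5 main

# Point the resource agent at the new binaries and data directory
# (parameter names here are those of the ocf:heartbeat:pgsql agent; adjust to your agent)
crm_resource --resource pgsqld --set-parameter pgctl  --parameter-value /usr/lib/postgresql/11/bin/pg_ctl
crm_resource --resource pgsqld --set-parameter pgdata --parameter-value /var/lib/postgresql/11/main

# Re-seed the standbys from the upgraded master (e.g. with pg_basebackup) before
# leaving maintenance mode, so a newly promoted node never starts with an empty database

# Hand control back to Pacemaker
crm_attribute --name maintenance-mode --update false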