<div dir="ltr">Hi everyone. As a followup, I found that the vms were having snapshot backup at the time of the disconnects which I think freezes IO. We'll be addressing that. Is there anything else in the log that can be improved.<div><br></div><div>Thanks,</div><div>Howard</div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Wed, Jun 10, 2020 at 10:06 AM Howard <<a href="mailto:hmoneta@gmail.com">hmoneta@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex"><div dir="ltr">Good morning. Thanks for reading. We have a requirement to provide high availability for PostgreSQL 10. I have built a two node cluster with a quorum device as the third vote, all running on RHEL 8.<div><br></div><div><div>Here are the versions installed:<br></div><div></div></div><div>[postgres@srv2 cluster]$ rpm -qa|grep "pacemaker\|pcs\|corosync\|fence-agents-vmware-soap\|paf"<br>corosync-3.0.2-3.el8_1.1.x86_64<br>corosync-qdevice-3.0.0-2.el8.x86_64<br>corosync-qnetd-3.0.0-2.el8.x86_64<br>corosynclib-3.0.2-3.el8_1.1.x86_64<br>fence-agents-vmware-soap-4.2.1-41.el8.noarch<br>pacemaker-2.0.2-3.el8_1.2.x86_64<br>pacemaker-cli-2.0.2-3.el8_1.2.x86_64<br>pacemaker-cluster-libs-2.0.2-3.el8_1.2.x86_64<br>pacemaker-libs-2.0.2-3.el8_1.2.x86_64<br>pacemaker-schemas-2.0.2-3.el8_1.2.noarch<br>pcs-0.10.2-4.el8.x86_64<br>resource-agents-paf-2.3.0-1.noarch<br></div><div><br></div><div>These are vmare VMs so I configured the cluster to use the ESX host as the fencing device using fence_vmware_soap. </div><div><br></div><div>Throughout each day things generally work very well. The cluster remains online and healthy. Unfortunately, when I check pcs status in the mornings, I see that all kinds of things went wrong overnight. It is hard to pinpoint what the issue is as there is so much information being written to the pacemaker.log. Scrolling through pages and pages of informational log entries trying to find the lines that pertain to the issue. Is there a way to separate the logs out to make it easier to scroll through? Or maybe a list of keywords to GREP for? </div><div><br></div><div>It is clearly indicating that the server lost contact with the other node and also the quorum device. 

Thanks,
Howard

On Wed, Jun 10, 2020 at 10:06 AM Howard <hmoneta@gmail.com> wrote:

Good morning. Thanks for reading. We have a requirement to provide high availability for PostgreSQL 10. I have built a two-node cluster with a quorum device as the third vote, all running on RHEL 8.
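
The quorum device was added along these lines (the qnetd host name is a placeholder, and I'm paraphrasing the commands rather than pasting our history):

# on the qnetd host, set up and start the quorum server
pcs qdevice setup model net --enable --start
# on one of the cluster nodes, point the cluster at it
pcs quorum device add model net host=qnetd.example.com algorithm=ffsplit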

Here are the versions installed:

[postgres@srv2 cluster]$ rpm -qa|grep "pacemaker\|pcs\|corosync\|fence-agents-vmware-soap\|paf"
corosync-3.0.2-3.el8_1.1.x86_64
corosync-qdevice-3.0.0-2.el8.x86_64
corosync-qnetd-3.0.0-2.el8.x86_64
corosynclib-3.0.2-3.el8_1.1.x86_64
fence-agents-vmware-soap-4.2.1-41.el8.noarch
pacemaker-2.0.2-3.el8_1.2.x86_64
pacemaker-cli-2.0.2-3.el8_1.2.x86_64
pacemaker-cluster-libs-2.0.2-3.el8_1.2.x86_64
pacemaker-libs-2.0.2-3.el8_1.2.x86_64
pacemaker-schemas-2.0.2-3.el8_1.2.noarch
pcs-0.10.2-4.el8.x86_64
resource-agents-paf-2.3.0-1.noarch

These are VMware VMs, so I configured the cluster to use the ESX host as the fencing device using fence_vmware_soap.
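
The fencing resource was created roughly like this (host name, credentials, and VM names are placeholders, and the exact option names may differ; I'm paraphrasing rather than pasting our real definition):

# map each cluster node name to its VM name on the ESX host
pcs stonith create vmfence fence_vmware_soap \
    ip=esx.example.com username=fence-svc password=REDACTED \
    ssl=1 ssl_insecure=1 \
    pcmk_host_map="srv1:srv1_vm;srv2:srv2_vm"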

Throughout each day things generally work very well. The cluster remains online and healthy. Unfortunately, when I check pcs status in the mornings, I see that all kinds of things went wrong overnight. It is hard to pinpoint the issue because so much information is written to pacemaker.log; I end up scrolling through pages and pages of informational entries trying to find the lines that pertain to the problem. Is there a way to separate the logs to make them easier to scroll through? Or maybe a list of keywords to grep for?
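
So far I have just been filtering with something along these lines; the keyword list is only my guess at what matters, based on the entries below:

grep -E "error|warning|crit|fence|Quorum|Token has not been received" \
    /var/log/pacemaker/pacemaker.log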

It is clearly indicating that the server lost contact with the other node and also the quorum device. Is there a way to make this configuration more robust or able to recover from a connectivity blip?

Here are the pacemaker and corosync logs for this morning's failures:

pacemaker.log
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:42 srv2 pacemakerd [10573] (pcmk_quorum_notification) warning: Quorum lost | membership=952 members=1
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:42 srv2 pacemaker-controld [10579] (pcmk_quorum_notification) warning: Quorum lost | membership=952 members=1
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (pe_fence_node) warning: Cluster node srv1 will be fenced: peer is no longer part of the cluster
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (determine_online_status) warning: Node srv1 is unclean
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (custom_action) warning: Action pgsqld:1_demote_0 on srv1 is unrunnable (offline)
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (custom_action) warning: Action pgsqld:1_stop_0 on srv1 is unrunnable (offline)
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (custom_action) warning: Action pgsqld:1_demote_0 on srv1 is unrunnable (offline)
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (custom_action) warning: Action pgsqld:1_stop_0 on srv1 is unrunnable (offline)
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (custom_action) warning: Action pgsqld:1_demote_0 on srv1 is unrunnable (offline)
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (custom_action) warning: Action pgsqld:1_stop_0 on srv1 is unrunnable (offline)
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (custom_action) warning: Action pgsqld:1_demote_0 on srv1 is unrunnable (offline)
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (custom_action) warning: Action pgsqld:1_stop_0 on srv1 is unrunnable (offline)
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (custom_action) warning: Action pgsql-master-ip_stop_0 on srv1 is unrunnable (offline)
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (stage6) warning: Scheduling Node srv1 for STONITH
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:43 srv2 pacemaker-schedulerd[10578] (pcmk__log_transition_summary) warning: Calculated transition 2 (with warnings), saving inputs in /var/lib/pacemaker/pengine/pe-warn-34.bz2
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:45 srv2 pacemaker-controld [10579] (crmd_ha_msg_filter) warning: Another DC detected: srv1 (op=join_offer)
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:45 srv2 pacemaker-controld [10579] (destroy_action) warning: Cancelling timer for action 3 (src=307)
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:45 srv2 pacemaker-controld [10579] (destroy_action) warning: Cancelling timer for action 2 (src=308)
/var/log/pacemaker/pacemaker.log:Jun 10 00:06:45 srv2 pacemaker-controld [10579] (do_log) warning: Input I_RELEASE_DC received in state S_RELEASE_DC from do_election_count_vote
/var/log/pacemaker/pacemaker.log:pgsqlms(pgsqld)[1164379]: Jun 10 00:07:19 WARNING: No secondary connected to the master
/var/log/pacemaker/pacemaker.log:Sent 5 probes (5 broadcast(s))
/var/log/pacemaker/pacemaker.log:Received 0 response(s)

corosync.log
Jun 10 00:06:41 [10558] srv2 corosync warning [MAIN ] Corosync main process was not scheduled for 13006.0615 ms (threshold is 800.0000 ms). Consider token timeout increase.
Jun 10 00:06:41 [10558] srv2 corosync notice [TOTEM ] Token has not been received in 12922 ms
Jun 10 00:06:41 [10558] srv2 corosync notice [TOTEM ] A processor failed, forming new configuration.
Jun 10 00:06:41 [10558] srv2 corosync info [VOTEQ ] lost contact with quorum device Qdevice
Jun 10 00:06:41 [10558] srv2 corosync info [KNET ] link: host: 1 link: 0 is down
Jun 10 00:06:41 [10558] srv2 corosync info [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Jun 10 00:06:41 [10558] srv2 corosync warning [KNET ] host: host: 1 has no active links
Jun 10 00:06:42 [10558] srv2 corosync info [KNET ] rx: host: 1 link: 0 is up
Jun 10 00:06:42 [10558] srv2 corosync info [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Jun 10 00:06:42 [10558] srv2 corosync info [VOTEQ ] waiting for quorum device Qdevice poll (but maximum for 30000 ms)
Jun 10 00:06:42 [10558] srv2 corosync notice [TOTEM ] A new membership (2:952) was formed. Members left: 1
Jun 10 00:06:42 [10558] srv2 corosync notice [TOTEM ] Failed to receive the leave message. failed: 1
Jun 10 00:06:42 [10558] srv2 corosync warning [CPG ] downlist left_list: 1 received
Jun 10 00:06:42 [10558] srv2 corosync notice [QUORUM] This node is within the non-primary component and will NOT provide any services.
Jun 10 00:06:42 [10558] srv2 corosync notice [QUORUM] Members[1]: 2
Jun 10 00:06:42 [10558] srv2 corosync notice [MAIN ] Completed service synchronization, ready to provide service.
Jun 10 00:06:42 [10558] srv2 corosync notice [QUORUM] This node is within the primary component and will provide service.
Jun 10 00:06:42 [10558] srv2 corosync notice [QUORUM] Members[1]: 2
Jun 10 00:06:43 [10558] srv2 corosync info [VOTEQ ] waiting for quorum device Qdevice poll (but maximum for 30000 ms)
Jun 10 00:06:43 [10558] srv2 corosync notice [TOTEM ] A new membership (1:960) was formed. Members joined: 1
Jun 10 00:06:43 [10558] srv2 corosync warning [CPG ] downlist left_list: 0 received
Jun 10 00:06:43 [10558] srv2 corosync warning [CPG ] downlist left_list: 0 received
Jun 10 00:06:45 [10558] srv2 corosync notice [QUORUM] Members[2]: 1 2
Jun 10 00:06:45 [10558] srv2 corosync notice [MAIN ] Completed service synchronization, ready to provide service.
Jun 10 00:06:45 [10558] srv2 corosync warning [MAIN ] Corosync main process was not scheduled for 1747.0415 ms (threshold is 800.0000 ms). Consider token timeout increase.
Jun 10 00:06:45 [10558] srv2 corosync info [VOTEQ ] waiting for quorum device Qdevice poll (but maximum for 30000 ms)
Jun 10 00:06:45 [10558] srv2 corosync notice [TOTEM ] A new membership (1:964) was formed. Members
Jun 10 00:06:45 [10558] srv2 corosync warning [CPG ] downlist left_list: 0 received
Jun 10 00:06:45 [10558] srv2 corosync warning [CPG ] downlist left_list: 0 received
Jun 10 00:06:45 [10558] srv2 corosync notice [QUORUM] Members[2]: 1 2
Jun 10 00:06:45 [10558] srv2 corosync notice [MAIN ] Completed service synchronization, ready to provide service.
Jun 10 00:06:52 [10558] srv2 corosync notice [TOTEM ] Token has not been received in 750 ms
Jun 10 00:06:52 [10558] srv2 corosync info [KNET ] link: host: 1 link: 0 is down
Jun 10 00:06:52 [10558] srv2 corosync info [KNET ] host: host: 1 (passive) best link: 0 (pri: 1)
Jun 10 00:06:52 [10558] srv2 corosync warning [KNET ] host: host: 1 has no active links
Jun 10 00:06:52 [10558] srv2 corosync notice [TOTEM ] A processor failed, forming new configuration.
Jun 10 00:06:53 [10558] srv2 corosync info [VOTEQ ] waiting for quorum device Qdevice poll (but maximum for 30000 ms)
Jun 10 00:06:53 [10558] srv2 corosync notice [TOTEM ] A new membership (2:968) was formed. Members left: 1
Jun 10 00:06:53 [10558] srv2 corosync notice [TOTEM ] Failed to receive the leave message. failed: 1
Jun 10 00:06:53 [10558] srv2 corosync warning [CPG ] downlist left_list: 1 received
Jun 10 00:07:17 [10558] srv2 corosync notice [QUORUM] Members[1]: 2
Jun 10 00:07:17 [10558] srv2 corosync notice [MAIN ] Completed service synchronization, ready to provide service.
Jun 10 00:08:56 [10558] srv2 corosync notice [TOTEM ] Token has not been received in 750 ms
Jun 10 00:09:04 [10558] srv2 corosync warning [MAIN ] Corosync main process was not scheduled for 4477.0459 ms (threshold is 800.0000 ms). Consider token timeout increase.
Jun 10 00:09:13 [10558] srv2 corosync warning [MAIN ] Corosync main process was not scheduled for 5302.9785 ms (threshold is 800.0000 ms). Consider token timeout increase.
Jun 10 00:09:13 [10558] srv2 corosync notice [TOTEM ] Token has not been received in 5295 ms
Jun 10 00:09:13 [10558] srv2 corosync notice [TOTEM ] A processor failed, forming new configuration.
Jun 10 00:09:13 [10558] srv2 corosync info [VOTEQ ] waiting for quorum device Qdevice poll (but maximum for 30000 ms)
Jun 10 00:09:13 [10558] srv2 corosync notice [TOTEM ] A new membership (2:972) was formed. Members
Jun 10 00:09:13 [10558] srv2 corosync warning [CPG ] downlist left_list: 0 received
Jun 10 00:09:13 [10558] srv2 corosync notice [QUORUM] Members[1]: 2
Jun 10 00:09:13 [10558] srv2 corosync notice [MAIN ] Completed service synchronization, ready to provide service.

Thanks,
Howard