[ClusterLabs] How to figure out the reason for pgsql_monitor timeout, which resulted in a failover

Ken Gaillot kgaillot at redhat.com
Fri Feb 5 23:16:23 UTC 2016


On 01/15/2016 02:44 AM, Benjamin Fras wrote:
> Dear list members,
> 
>  
> 
> We are running a couple of postgres clusters based on corosync / pacemaker,
> each consisting of three nodes (master, slave, and a witness host without
> running postgres resources). In the attached logs, the master is
> nbgprepdb6, the recovered host is nbgprepdb5, and the witness host is
> nbgprepwitness56. You can find the configuration of the resources in
> pgsql_crm.txt.
> 
>  
> 
> It is a stable setup and in general it runs fine. However, today we
> experienced some strange behaviour on one of our cluster nodes. First we
> did a planned failover and a successful recovery, where the recovered host
> was correctly recognized as a slave and the cluster seemed to be just fine.
> After a while, though, pacemaker performed another failover. I don't see
> why this failover actually happened.
> 
>  
> 
> According to the logfiles (I have attached the pacemaker.log from all
> three nodes), the demote of the master node and the failover were caused
> by a timeout of the pgsql_monitor on the master server. But why did it
> time out? Postgres itself obviously didn't have a problem; it was a clean
> shutdown triggered by pacemaker. There are no errors in the postgres.log
> or in the syslog (e.g. the system running out of memory or similar). I was
> not able to find an explanation for this, so do you have any ideas where
> to look?

Pacemaker logs generally do not show much of interest for timeouts. The
syslog and the application's own log (if it has one) are more likely to
have something useful.
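
For example (a sketch, assuming a systemd-based distro and default log
locations; adjust paths and timestamps to your setup):

    # system journal around the time of the monitor timeout
    journalctl --since "2016-01-14 10:50" --until "2016-01-14 11:00"

    # postgres's own log, wherever your distro puts it
    grep -iE "error|fatal|panic" /var/log/postgresql/*.log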

Pretty much all I can see here is that your witness host was the DC at
the time, and a recurring monitor on the db6 node timed out after 60s.
It looks like pacemaker on the db5 node was just starting around this
time, so I guess you tested a hard failure. I would guess the pgsql
monitor action really did take a long time on the new master due to the
other node being down, but I'm not familiar enough with pgsql to know
whether that's expected.
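
If the monitor can legitimately take that long while the peer is down,
one option is to raise the monitor operation's timeout. A sketch in crm
shell syntax (the intervals and the 120s value are assumptions; merge
this with the existing op definitions in your pgsql_crm.txt):

    # give the recurring monitors more headroom than the current 60s
    primitive pgsql ocf:heartbeat:pgsql \
        ... \
        op monitor interval=4s role=Master timeout=120s \
        op monitor interval=7s timeout=120s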

This is an interesting message:

Jan 14 10:57:44 [31406] nbgprepwitness56    pengine:   notice:
crm_ipc_prepare:         Message exceeds the configured ipc limit (51200
bytes), consider configuring PCMK_ipc_buffer to 110392 or higher to
avoid compression overheads

You would set that in /etc/sysconfig/pacemaker on all nodes (or the
equivalent startup file for your distro). It shouldn't be causing any
problems, but it's a good tuning measure.
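
For example (131072 is just the suggested 110392 rounded up to a power
of two; any value at or above the suggestion should do):

    # /etc/sysconfig/pacemaker, on all nodes
    PCMK_ipc_buffer=131072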

> I have to add that we had some issues starting the recovered slave node,
> because the pgsql_start timeout was too low (120s). As postgres didn't
> manage to catch up within this time, it was shut down by pacemaker. So we
> tried a few times, and after a while postgres came up. Anyway, I don't see
> how this could be related to the described issue.
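
If catch-up regularly needs more than 120s, raising the start timeout is
the usual fix rather than retrying. A sketch in crm shell syntax (the
600s value is an assumption; size it to the catch-up time you actually
observe):

    op start interval=0s timeout=600s
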
> 
>  
> 
> Appreciate your help.
> 
> Best regards,
> 
>  
> 
> Ben
