[ClusterLabs] Could not initialize corosync configuration API error 2

Jan Friesse jfriesse at redhat.com
Fri Mar 31 04:14:17 EDT 2023


Hi,
more information would be needed to really find out the real reason, so:
- double check corosync.conf (IP addresses)
- check firewall (mainly local one)
- what is the version of corosync
- try to set debug: on (or trace) in the logging section (see the example below)
- paste config file
- paste full log - since corosync was started
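
For the debug and firewall points, a minimal sketch (assuming the default
config path /etc/corosync/corosync.conf, firewalld, and its stock
"high-availability" service definition; adjust to your environment):

  # in /etc/corosync/corosync.conf, then restart corosync on that node
  logging {
      to_logfile: yes
      logfile: /var/log/cluster/corosync.log
      timestamp: on
      # use "trace" instead of "on" for even more detail
      debug: on
  }

  # check the running version
  corosync -v

  # local firewall (corosync uses UDP, by default ports 5404/5405)
  firewall-cmd --list-all
  firewall-cmd --permanent --add-service=high-availability
  firewall-cmd --reload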

Also keep in mind that if it is version 2.x, it is no longer supported
upstream and you have to contact your distribution provider's support.

Regards,
   Honza

On 30/03/2023 12:08, S Sathish S via Users wrote:
> Hi Team,
> 
> We are unable to start the corosync service on a node that is already part of an existing cluster and has been running fine for a long time. Now the corosync
> server is unable to join, reporting "Could not initialize corosync configuration API error 2". Please find the logs below.
> 
> [root@node1 ~]# systemctl status corosync
> ● corosync.service - Corosync Cluster Engine
>     Loaded: loaded (/usr/lib/systemd/system/corosync.service; enabled; vendor preset: disabled)
>     Active: failed (Result: exit-code) since Thu 2023-03-30 10:49:58 WAT; 7min ago
>       Docs: man:corosync
>             man:corosync.conf
>             man:corosync_overview
>    Process: 9922 ExecStop=/usr/share/corosync/corosync stop (code=exited, status=0/SUCCESS)
>    Process: 9937 ExecStart=/usr/share/corosync/corosync start (code=exited, status=1/FAILURE)
> 
> 
> 
> Mar 30 10:48:57 node1 systemd[1]: Starting Corosync Cluster Engine...
> Mar 30 10:49:58 node1 corosync[9937]: Starting Corosync Cluster Engine (corosync): [FAILED]
> Mar 30 10:49:58 node1 systemd[1]: corosync.service: control process exited, code=exited status=1
> Mar 30 10:49:58 node1 systemd[1]: Failed to start Corosync Cluster Engine.
> Mar 30 10:49:58 node1 systemd[1]: Unit corosync.service entered failed state.
> Mar 30 10:49:58 node1 systemd[1]: corosync.service failed.
> 
> Please find the corosync log errors below:
> 
> Mar 30 10:49:52 [9947] node1 corosync debug   [MAIN  ] Denied connection, corosync is not ready
> Mar 30 10:49:52 [9947] node1 corosync warning [QB    ] Denied connection, is not ready (9948-10497-23)
> Mar 30 10:49:52 [9947] node1 corosync debug   [MAIN  ] cs_ipcs_connection_destroyed()
> Mar 30 10:49:52 [9947] node1 corosync debug   [MAIN  ] Denied connection, corosync is not ready
> Mar 30 10:49:57 [9947] node1 corosync debug   [MAIN  ] cs_ipcs_connection_destroyed()
> Mar 30 10:49:58 [9947] node1 corosync notice  [MAIN  ] Node was shut down by a signal
> Mar 30 10:49:58 [9947] node1 corosync notice  [SERV  ] Unloading all Corosync service engines.
> Mar 30 10:49:58 [9947] node1 corosync info    [QB    ] withdrawing server sockets
> Mar 30 10:49:58 [9947] node1 corosync debug   [QB    ] qb_ipcs_unref() - destroying
> Mar 30 10:49:58 [9947] node1 corosync notice  [SERV  ] Service engine unloaded: corosync vote quorum service v1.0
> Mar 30 10:49:58 [9947] node1 corosync info    [QB    ] withdrawing server sockets
> Mar 30 10:49:58 [9947] node1 corosync debug   [QB    ] qb_ipcs_unref() - destroying
> Mar 30 10:49:58 [9947] node1 corosync notice  [SERV  ] Service engine unloaded: corosync configuration map access
> Mar 30 10:49:58 [9947] node1 corosync info    [QB    ] withdrawing server sockets
> Mar 30 10:49:58 [9947] node1 corosync debug   [QB    ] qb_ipcs_unref() - destroying
> Mar 30 10:49:58 [9947] node1 corosync notice  [SERV  ] Service engine unloaded: corosync configuration service
> Mar 30 10:49:58 [9947] node1 corosync info    [QB    ] withdrawing server sockets
> Mar 30 10:49:58 [9947] node1 corosync debug   [QB    ] qb_ipcs_unref() - destroying
> Mar 30 10:49:58 [9947] node1 corosync notice  [SERV  ] Service engine unloaded: corosync cluster closed process group service v1.01
> Mar 30 10:49:58 [9947] node1 corosync info    [QB    ] withdrawing server sockets
> Mar 30 10:49:58 [9947] node1 corosync debug   [QB    ] qb_ipcs_unref() - destroying
> Mar 30 10:49:58 [9947] node1 corosync notice  [SERV  ] Service engine unloaded: corosync cluster quorum service v0.1
> Mar 30 10:49:58 [9947] node1 corosync notice  [SERV  ] Service engine unloaded: corosync profile loading service
> Mar 30 10:49:58 [9947] node1 corosync debug   [TOTEM ] sending join/leave message
> Mar 30 10:49:58 [9947] node1 corosync notice  [MAIN  ] Corosync Cluster Engine exiting normally
> 
> 
> While trying to start the corosync service manually, we also get the below error.
> 
> 
> [root@node1 ~]# bash -x /usr/share/corosync/corosync start
> + desc='Corosync Cluster Engine'
> + prog=corosync
> + PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/sbin
> + '[' -f /etc/sysconfig/corosync ']'
> + . /etc/sysconfig/corosync
> ++ COROSYNC_INIT_TIMEOUT=60
> ++ COROSYNC_OPTIONS=
> + case '/etc/sysconfig' in
> + '[' -f /etc/init.d/functions ']'
> + . /etc/init.d/functions
> ++ TEXTDOMAIN=initscripts
> ++ umask 022
> ++ PATH=/sbin:/usr/sbin:/bin:/usr/bin
> ++ export PATH
> ++ '[' 28864 -ne 1 -a -z '' ']'
> ++ '[' -d /run/systemd/system ']'
> ++ case "$0" in
> ++ '[' -z '' ']'
> ++ COLUMNS=80
> ++ '[' -z '' ']'
> ++ '[' -c /dev/stderr -a -r /dev/stderr ']'
> +++ /sbin/consoletype
> ++ CONSOLETYPE=pty
> ++ '[' -z '' ']'
> ++ '[' -z '' ']'
> ++ '[' -f /etc/sysconfig/i18n -o -f /etc/locale.conf ']'
> ++ . /etc/profile.d/lang.sh
> ++ unset LANGSH_SOURCED
> ++ '[' -z '' ']'
> ++ '[' -f /etc/sysconfig/init ']'
> ++ . /etc/sysconfig/init
> +++ BOOTUP=color
> +++ RES_COL=60
> +++ MOVE_TO_COL='echo -en \033[60G'
> +++ SETCOLOR_SUCCESS='echo -en \033[0;32m'
> +++ SETCOLOR_FAILURE='echo -en \033[0;31m'
> +++ SETCOLOR_WARNING='echo -en \033[0;33m'
> +++ SETCOLOR_NORMAL='echo -en \033[0;39m'
> ++ '[' pty = serial ']'
> ++ __sed_discard_ignored_files='/\(~\|\.bak\|\.orig\|\.rpmnew\|\.rpmorig\|\.rpmsave\)$/d'
> ++ '[' '' = 1 ']'
> +++ cat /proc/cmdline
> ++ strstr 'BOOT_IMAGE=/vmlinuz-3.10.0-693.84.1.el7.x86_64 root=/dev/mapper/VolGroup-lv_root ro crashkernel=auto rd.lvm.lv=VolGroup/lv_root rd.lvm.lv=VolGroup/lv_swap net.ifnames=1 rd.shell=0 ipv6.disable=1 audit=1 processor.max_cstate=1 intel_idle.max_cstate=0 audit=1 audit_backlog_limit=8192' rc.debug
> ++ '[' 'BOOT_IMAGE=/vmlinuz-3.10.0-693.84.1.el7.x86_64 root=/dev/mapper/VolGroup-lv_root ro crashkernel=auto rd.lvm.lv=VolGroup/lv_root rd.lvm.lv=VolGroup/lv_swap net.ifnames=1 rd.shell=0 ipv6.disable=1 audit=1 processor.max_cstate=1 intel_idle.max_cstate=0 audit=1 audit_backlog_limit=8192' = 'BOOT_IMAGE=/vmlinuz-3.10.0-693.84.1.el7.x86_64 root=/dev/mapper/VolGroup-lv_root ro crashkernel=auto rd.lvm.lv=VolGroup/lv_root rd.lvm.lv=VolGroup/lv_swap net.ifnames=1 rd.shell=0 ipv6.disable=1 audit=1 processor.max_cstate=1 intel_idle.max_cstate=0 audit=1 audit_backlog_limit=8192' ']'
> ++ return 1
> ++ return 0
> + '[' -z '' ']'
> + LOCK_FILE=/var/lock/subsys/corosync
> + rtrn=0
> + case "$1" in
> + start
> + echo -n 'Starting Corosync Cluster Engine (corosync): '
> Starting Corosync Cluster Engine (corosync): + cluster_disabled_at_boot
> + grep -q nocluster /proc/cmdline
> + return 0
> + mkdir -p /var/run
> + status corosync
> + corosync
> + '[' 0 '!=' 0 ']'
> + wait_for_ipc
> + try=0
> + max_try=119
> + '[' 119 -le 0 ']'
> + '[' 0 -le 119 ']'
> + corosync-cfgtool -s
> + sleep 0.5
> + try=1
> + '[' 1 -le 119 ']'
> + corosync-cfgtool -s
> + sleep 0.5
> + try=2
> + '[' 2 -le 119 ']'
> + corosync-cfgtool -s
> + sleep 0.5
> + try=3
> + '[' 3 -le 119 ']'
> + corosync-cfgtool -s
> + sleep 0.5
> + try=4
> + '[' 4 -le 119 ']'
> + corosync-cfgtool -s
> + sleep 0.5
> + try=120
> + '[' 120 -le 119 ']'
> + return 1
> + failure
> + local rc=0
> + '[' color '!=' verbose -a -z '' ']'
> + echo_failure
> + '[' color = color ']'
> + echo -en '\033[60G'
>                                                             + echo -n '['
> [+ '[' color = color ']'
> + echo -en '\033[0;31m'
> + echo -n FAILED
> FAILED+ '[' color = color ']'
> + echo -en '\033[0;39m'
> + echo -n ']'
> ]+ echo -ne '\r'
> + return 1
> + '[' -x /bin/plymouth ']'
> + /bin/plymouth --details
> + return 0
> + rtrn=1
> + echo
> + exit 1
> 
> 
> [root@Node1 /]# corosync-cfgtool -s
> Printing ring status.
> Could not initialize corosync configuration API error 2
> 
> Thanks and Regards,
> S Sathish S
> 
> 


