[ClusterLabs] Corosync/Pacemaker bug methinks! (Was: pacemaker won't start because duplicate node but can't remove dupe node because pacemaker won't start)
JC
snafuxnj at yahoo.com
Thu Dec 19 05:38:20 EST 2019
Hi Ken,
I took a little time away from the problem and am getting back to it now. I found that the corosync logs appear not only in journalctl but also in /var/log/syslog. I think the syslog entries are the more interesting of the two, though I haven't done a thorough comparison. In any case, I'm pasting what syslog says in the hope that there's more useful data here. The timestamps match up perfectly, too.
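In case it's useful, the excerpt below can be pulled straight out of syslog with something like the following (approximate; the journalctl line is just the equivalent window by unit and time):

# grep 'corosync\[2946\]' /var/log/syslog
# journalctl -u corosync --since '2019-12-18 23:44' --until '2019-12-18 23:45'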
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] waiting_trans_ack changed to 1
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Token Timeout (3000 ms) retransmit timeout (294 ms)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] token hold (225 ms) retransmits before loss (10 retrans)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] join (50 ms) send_join (0 ms) consensus (3600 ms) merge (200 ms)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] downcheck (1000 ms) fail to recv const (2500 msgs)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] seqno unchanged const (30 rotations) Maximum network MTU 1401
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] window size per rotation (50 messages) maximum messages per rotation (17 messages)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] missed count const (5 messages)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] send threads (0 threads)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] RRP token expired timeout (294 ms)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] RRP token problem counter (2000 ms)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] RRP threshold (10 problem count)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] RRP multicast threshold (100 problem count)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] RRP automatic recovery check timeout (1000 ms)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] RRP mode set to none.
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] heartbeat_failures_allowed (0)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] max_network_delay (50 ms)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] HeartBeat is Disabled. To enable set heartbeat_failures_allowed > 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Initializing transport (UDP/IP Multicast).
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Initializing transmit/receive security (NSS) crypto: none hash: none
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Receive multicast socket recv buffer size (320000 bytes).
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Transmit multicast socket send buffer size (320000 bytes).
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Local receive multicast loop socket recv buffer size (320000 bytes).
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Local transmit multicast loop socket send buffer size (320000 bytes).
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] The network interface [192.168.99.225] is now up.
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Created or loaded sequence id 74.192.168.99.225 for this ring.
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] server name: cmap
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] server name: cfg
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] server name: cpg
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] server name: votequorum
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] server name: quorum
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] entering GATHER state from 15(interface change).
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Creating commit token because I am the rep.
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Saving state aru 0 high seq received 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] entering COMMIT state.
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] got commit token
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] entering RECOVERY state.
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] position [0] member 192.168.99.225:
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] previous ring seq 74 rep 192.168.99.225
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] aru 0 high delivered 0 received flag 1
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Did not need to originate any messages in recovery.
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] got commit token
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Sending initial ORF token
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] install seq 0 aru 0 high seq received 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] install seq 0 aru 0 high seq received 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] install seq 0 aru 0 high seq received 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] install seq 0 aru 0 high seq received 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] retrans flag count 4 token aru 0 install seq 0 aru 0 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Resetting old ring state
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] recovery to regular 1-0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] waiting_trans_ack changed to 1
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] entering OPERATIONAL state.
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] A new membership (192.168.99.225:120) was formed. Members joined: 1084777441
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] waiting_trans_ack changed to 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] IPC credentials authenticated (2946-2958-18)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] connecting to client [2958]
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] shm size:1048589; real_size:1052672; rb->word_size:263168
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: message repeated 2 times: [ [QB ] shm size:1048589; real_size:1052672; rb->word_size:263168]
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] IPC credentials authenticated (2946-2958-19)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] connecting to client [2958]
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] shm size:1048589; real_size:1052672; rb->word_size:263168
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: message repeated 2 times: [ [QB ] shm size:1048589; real_size:1052672; rb->word_size:263168]
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] IPC credentials authenticated (2946-2958-20)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] connecting to client [2958]
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] shm size:1048589; real_size:1052672; rb->word_size:263168
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: message repeated 2 times: [ [QB ] shm size:1048589; real_size:1052672; rb->word_size:263168]
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] HUP conn (2946-2958-20)
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] qb_ipcs_disconnect(2946-2958-20) state:2
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] Free'ing ringbuffer: /dev/shm/qb-cfg-response-2946-2958-20-header
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] Free'ing ringbuffer: /dev/shm/qb-cfg-event-2946-2958-20-header
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [QB ] Free'ing ringbuffer: /dev/shm/qb-cfg-request-2946-2958-20-header
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] entering GATHER state from 11(merge during join).
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] got commit token
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Saving state aru 6 high seq received 6
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] entering COMMIT state.
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] got commit token
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] entering RECOVERY state.
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] TRANS [0] member 192.168.99.225:
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] position [0] member 192.168.99.223:
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] previous ring seq 78 rep 192.168.99.223
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] aru e high delivered e received flag 1
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] position [1] member 192.168.99.224:
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] previous ring seq 78 rep 192.168.99.223
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] aru e high delivered e received flag 1
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] position [2] member 192.168.99.225:
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] previous ring seq 78 rep 192.168.99.225
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] aru 6 high delivered 6 received flag 1
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Did not need to originate any messages in recovery.
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 0, aru ffffffff
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] install seq 0 aru 0 high seq received 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] install seq 0 aru 0 high seq received 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] install seq 0 aru 0 high seq received 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] token retrans flag is 0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] install seq 0 aru 0 high seq received 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] retrans flag count 4 token aru 0 install seq 0 aru 0 0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] Resetting old ring state
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] recovery to regular 1-0
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] waiting_trans_ack changed to 1
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] entering OPERATIONAL state.
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] A new membership (192.168.99.223:124) was formed. Members joined: 1 3
Dec 18 23:44:21 region-ctrl-2 corosync[2946]: [TOTEM ] waiting_trans_ack changed to 0
Dec 18 23:44:40 region-ctrl-2 corosync[2946]: [QB ] IPC credentials authenticated (2946-2976-20)
Dec 18 23:44:40 region-ctrl-2 corosync[2946]: [QB ] connecting to client [2976]
Dec 18 23:44:40 region-ctrl-2 corosync[2946]: [QB ] shm size:1048589; real_size:1052672; rb->word_size:263168
Dec 18 23:44:40 region-ctrl-2 corosync[2946]: message repeated 2 times: [ [QB ] shm size:1048589; real_size:1052672; rb->word_size:263168]
Dec 18 23:44:40 region-ctrl-2 corosync[2946]: [QB ] IPC credentials authenticated (2946-2976-21)
Dec 18 23:44:40 region-ctrl-2 corosync[2946]: [QB ] connecting to client [2976]
Dec 18 23:44:40 region-ctrl-2 corosync[2946]: [QB ] shm size:1048589; real_size:1052672; rb->word_size:263168
Dec 18 23:44:40 region-ctrl-2 corosync[2946]: message repeated 2 times: [ [QB ] shm size:1048589; real_size:1052672; rb->word_size:263168]
Dec 18 23:44:40 region-ctrl-2 corosync[2946]: [QB ] HUP conn (2946-2976-21)
Then those last few lines repeat over and over again…
I'm very curious whether you spot a bug. The way this is manifesting now is:
# crm configure show
node 1: region-ctrl-1
node 1084777441: region-ctrl-2
node 3: postgres-sb
property cib-bootstrap-options: \
    have-watchdog=false \
    dc-version=1.1.18-2b07d5c5a9 \
    cluster-infrastructure=corosync \
    cluster-name=debian \
    stonith-enabled=false
# crm cluster status
Services:
corosync active/running/disabled
pacemaker deactivating/stop-sigterm/disabled
Printing ring status.
Local node ID 1084777441
RING ID 0
id = 192.168.99.225
status = ring 0 active with no faults
# pcs cluster status
Cluster Status:
Stack: corosync
Current DC: region-ctrl-1 (version 1.1.18-2b07d5c5a9) - partition with quorum
Last updated: Thu Dec 19 00:22:50 2019
Last change: Wed Dec 18 23:44:40 2019 by hacluster via crmd on region-ctrl-2
3 nodes configured
0 resources configured
PCSD Status:
postgres-sb: Online
region-ctrl-1: Online
The corosync cluster doesn't even have a nodeid: 2 entry in the nodelist, so this node ID is getting autodetected somehow:
# cat /etc/corosync/corosync.conf
totem {
    version: 2
    cluster_name: maas-cluster
    token: 3000
    token_retransmits_before_loss_const: 10
    clear_node_high_bit: yes
    crypto_cipher: none
    crypto_hash: none
    interface {
        ringnumber: 0
        bindnetaddr: 192.168.99.0
        mcastport: 5405
        ttl: 1
    }
}
logging {
    fileline: off
    to_stderr: no
    to_logfile: yes
    to_syslog: yes
    syslog_facility: daemon
    debug: on
    timestamp: on
    logger_subsys {
        subsys: QUORUM
        debug: on
    }
}
quorum {
    provider: corosync_votequorum
    expected_votes: 3
    two_node: 1
}
nodelist {
    node {
        ring0_addr: postgres-sb
        nodeid: 3
    }
    node {
        ring0_addr: region-ctrl-1
        nodeid: 1
    }
}
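One thing I noticed while staring at that ID: 1084777441 is 0x40A863E1, which reads byte-by-byte as 64.168.99.225, i.e. my bind address 192.168.99.225 with the high bit cleared (0xC0 becomes 0x40, and I do have clear_node_high_bit: yes set). So my working guess is that corosync is auto-generating a nodeid from the ring0 interface address because region-ctrl-2 isn't declared in the nodelist at all. A quick sanity check of the arithmetic only (not claiming this is literally what corosync does internally):

# python3 -c "import ipaddress; print(int(ipaddress.IPv4Address('192.168.99.225')) & 0x7fffffff)"
1084777441

If that guess is right, the entry I think is missing from the nodelist would look something like this (a sketch of what I believe I need, not what's currently on disk):

    node {
        ring0_addr: region-ctrl-2
        nodeid: 2
    }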
Moreover, I've tried deleting node 2 (it doesn't exist, so this fails). I've also tried deleting and clearing 1084777441: the delete fails, but the clear works. Then, once the node is gone and I try to recreate it as nodeid: 2, the errant node comes back as 1084777441 instead.
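For the record, the delete/clear attempts were along these lines (paraphrasing from memory, so the exact invocations and error text may not be verbatim):

# crm node delete region-ctrl-2       (the delete that fails)
# crm node clearstate region-ctrl-2   (the clear that works)
# crm configure edit                  (to put a "node 2: region-ctrl-2" entry back by hand)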
Finally, please review my other relevant settings:
# cat /etc/hosts
127.0.0.1 localhost
#127.0.1.1 region-ctrl-2
192.168.99.223 region-ctrl-1
192.168.99.224 postgres-sb
192.168.99.225 region-ctrl-2
192.168.7.223 region-ctrl-1
192.168.7.224 postgres-sb
192.168.7.225 region-ctrl-2
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
# hostname
region-ctrl-2
# uname -n
region-ctrl-2
Is there some other setting I'm missing here that could be causing this problem?
- Jim
> On Dec 18, 2019, at 13:24, Ken Gaillot <kgaillot at redhat.com> wrote:
>
> On Wed, 2019-12-18 at 12:21 -0800, JC wrote:
>> Adding logs (minus time stamps)
>>
>> info: crm_log_init: Changed active directory to
>> /var/lib/pacemaker/cores
>> info: get_cluster_type: Detected an active 'corosync' cluster
>> info: qb_ipcs_us_publish: server name: pacemakerd
>> info: pcmk__ipc_is_authentic_process_active: Could not
>> connect to lrmd IPC: Connection refused
>> info: pcmk__ipc_is_authentic_process_active: Could not
>> connect to cib_ro IPC: Connection refused
>> info: pcmk__ipc_is_authentic_process_active: Could not
>> connect to crmd IPC: Connection refused
>> info: pcmk__ipc_is_authentic_process_active: Could not
>> connect to attrd IPC: Connection refused
>> info: pcmk__ipc_is_authentic_process_active: Could not
>> connect to pengine IPC: Connection refused
>> info: pcmk__ipc_is_authentic_process_active: Could not
>> connect to stonith-ng IPC: Connection refused
>> info: corosync_node_name: Unable to get node name for nodeid
>> 1084777441
>> notice: get_node_name: Could not obtain a node name for
>> corosync nodeid 1084777441
>
> This ID appears to be coming from corosync. You have only to_syslog
> turned on in corosync.conf, so look in the system log around this same
> time to see what corosync is thinking. It does seem odd; I wonder if --
> purge is missing something.
>
> BTW you don't need bindnetaddr to be different for each host; it's the
> network address (e.g. the .0 for a /24), not the host address.
>
>> info: crm_get_peer: Created entry ea4ec23e-e676-4798-9b8b-
>> 00af39d3bb3d/0x5555f74984d0 for node (null)/1084777441 (1 total)
>> info: crm_get_peer: Node 1084777441 has uuid 1084777441
>> info: crm_update_peer_proc: cluster_connect_cpg: Node
>> (null)[1084777441] - corosync-cpg is now online
>> notice: cluster_connect_quorum: Quorum acquired
>> info: crm_get_peer: Created entry 882c0feb-d546-44b7-955f-
>> 4c8a844a0db1/0x5555f7499fd0 for node postgres-sb/3 (2 total)
>> info: crm_get_peer: Node 3 is now known as postgres-sb
>> info: crm_get_peer: Node 3 has uuid 3
>> info: crm_get_peer: Created entry 4e6a6b1e-d687-4527-bffc-
>> 5d701ff60a66/0x5555f749a6f0 for node region-ctrl-2/2 (3 total)
>> info: crm_get_peer: Node 2 is now known as region-ctrl-2
>> info: crm_get_peer: Node 2 has uuid 2
>> info: crm_get_peer: Created entry 5532a3cc-2577-4764-b9ee-
>> 770d437ccec0/0x5555f749a0a0 for node region-ctrl-1/1 (4 total)
>> info: crm_get_peer: Node 1 is now known as region-ctrl-1
>> info: crm_get_peer: Node 1 has uuid 1
>> info: corosync_node_name: Unable to get node name for nodeid
>> 1084777441
>> notice: get_node_name: Defaulting to uname -n for the local
>> corosync node name
>> warning: crm_find_peer: Node 1084777441 and 2 share the same
>> name: 'region-ctrl-2'
>> info: crm_get_peer: Node 1084777441 is now known as region-ctrl-2
>> info: pcmk_quorum_notification: Quorum retained |
>> membership=32 members=3
>> notice: crm_update_peer_state_iter: Node region-ctrl-1 state is
>> now member | nodeid=1 previous=unknown
>> source=pcmk_quorum_notification
>> notice: crm_update_peer_state_iter: Node postgres-sb state is now
>> member | nodeid=3 previous=unknown source=pcmk_quorum_notification
>> notice: crm_update_peer_state_iter: Node region-ctrl-2 state is
>> now member | nodeid=1084777441 previous=unknown
>> source=pcmk_quorum_notification
>> info: crm_reap_unseen_nodes: State of node region-ctrl-
>> 2[2] is still unknown
>> info: pcmk_cpg_membership: Node 1084777441 joined group
>> pacemakerd (counter=0.0, pid=32765, unchecked for rivals)
>> info: pcmk_cpg_membership: Node 1 still member of group
>> pacemakerd (peer=region-ctrl-1:900, counter=0.0, at least once)
>> info: crm_update_peer_proc: pcmk_cpg_membership: Node region-
>> ctrl-1[1] - corosync-cpg is now online
>> info: pcmk_cpg_membership: Node 3 still member of group
>> pacemakerd (peer=postgres-sb:976, counter=0.1, at least once)
>> info: crm_update_peer_proc: pcmk_cpg_membership: Node postgres-
>> sb[3] - corosync-cpg is now online
>> info: pcmk_cpg_membership: Node 1084777441 still member of group
>> pacemakerd (peer=region-ctrl-2:3016, counter=0.2, at least once)
>> pengine: info: crm_log_init: Changed active directory to
>> /var/lib/pacemaker/cores
>> lrmd: info: crm_log_init: Changed active directory to
>> /var/lib/pacemaker/cores
>> lrmd: info: qb_ipcs_us_publish: server name: lrmd
>> pengine: info: qb_ipcs_us_publish: server name: pengine
>> cib: info: crm_log_init: Changed active directory to
>> /var/lib/pacemaker/cores
>> attrd: info: crm_log_init: Changed active directory to
>> /var/lib/pacemaker/cores
>> attrd: info: get_cluster_type: Verifying cluster type:
>> 'corosync'
>> attrd: info: get_cluster_type: Assuming an active 'corosync'
>> cluster
>> info: crm_log_init: Changed active directory to
>> /var/lib/pacemaker/cores
>> attrd: notice: crm_cluster_connect: Connecting to cluster
>> infrastructure: corosync
>> cib: info: get_cluster_type: Verifying cluster type:
>> 'corosync'
>> cib: info: get_cluster_type: Assuming an active 'corosync'
>> cluster
>> info: get_cluster_type: Verifying cluster type: 'corosync'
>> info: get_cluster_type: Assuming an active 'corosync' cluster
>> notice: crm_cluster_connect: Connecting to cluster infrastructure:
>> corosync
>> attrd: info: corosync_node_name: Unable to get node
>> name for nodeid 1084777441
>> cib: info: validate_with_relaxng: Creating RNG parser
>> context
>> crmd: info: crm_log_init: Changed active directory to
>> /var/lib/pacemaker/cores
>> crmd: info: get_cluster_type: Verifying cluster type:
>> 'corosync'
>> crmd: info: get_cluster_type: Assuming an active 'corosync'
>> cluster
>> crmd: info: do_log: Input I_STARTUP received in state
>> S_STARTING from crmd_init
>> attrd: notice: get_node_name: Could not obtain a node name
>> for corosync nodeid 1084777441
>> attrd: info: crm_get_peer: Created entry af5c62c9-21c5-
>> 4428-9504-ea72a92de7eb/0x560870420e90 for node (null)/1084777441 (1
>> total)
>> attrd: info: crm_get_peer: Node 1084777441 has uuid
>> 1084777441
>> attrd: info: crm_update_peer_proc: cluster_connect_cpg:
>> Node (null)[1084777441] - corosync-cpg is now online
>> attrd: notice: crm_update_peer_state_iter: Node (null)
>> state is now member | nodeid=1084777441 previous=unknown
>> source=crm_update_peer_proc
>> attrd: info: init_cs_connection_once: Connection to
>> 'corosync': established
>> info: corosync_node_name: Unable to get node name for nodeid
>> 1084777441
>> notice: get_node_name: Could not obtain a node name for
>> corosync nodeid 1084777441
>> info: crm_get_peer: Created entry 5bcb51ae-0015-4652-b036-
>> b92cf4f1d990/0x55f583634700 for node (null)/1084777441 (1 total)
>> info: crm_get_peer: Node 1084777441 has uuid 1084777441
>> info: crm_update_peer_proc: cluster_connect_cpg: Node
>> (null)[1084777441] - corosync-cpg is now online
>> notice: crm_update_peer_state_iter: Node (null) state is now
>> member | nodeid=1084777441 previous=unknown
>> source=crm_update_peer_proc
>> attrd: info: corosync_node_name: Unable to get node
>> name for nodeid 1084777441
>> attrd: notice: get_node_name: Defaulting to uname -n for
>> the local corosync node name
>> attrd: info: crm_get_peer: Node 1084777441 is now known
>> as region-ctrl-2
>> info: corosync_node_name: Unable to get node name for nodeid
>> 1084777441
>> notice: get_node_name: Defaulting to uname -n for the local
>> corosync node name
>> info: init_cs_connection_once: Connection to 'corosync':
>> established
>> info: corosync_node_name: Unable to get node name for nodeid
>> 1084777441
>> notice: get_node_name: Defaulting to uname -n for the local
>> corosync node name
>> info: crm_get_peer: Node 1084777441 is now known as region-ctrl-2
>> cib: notice: crm_cluster_connect: Connecting to cluster
>> infrastructure: corosync
>> cib: info: corosync_node_name: Unable to get node
>> name for nodeid 1084777441
>> cib: notice: get_node_name: Could not obtain a node name
>> for corosync nodeid 1084777441
>> cib: info: crm_get_peer: Created entry a6ced2c1-9d51-
>> 445d-9411-2fb19deab861/0x55848365a150 for node (null)/1084777441 (1
>> total)
>> cib: info: crm_get_peer: Node 1084777441 has uuid
>> 1084777441
>> cib: info: crm_update_peer_proc: cluster_connect_cpg:
>> Node (null)[1084777441] - corosync-cpg is now online
>> cib: notice: crm_update_peer_state_iter: Node (null)
>> state is now member | nodeid=1084777441 previous=unknown
>> source=crm_update_peer_proc
>> cib: info: init_cs_connection_once: Connection to
>> 'corosync': established
>> cib: info: corosync_node_name: Unable to get node
>> name for nodeid 1084777441
>> cib: notice: get_node_name: Defaulting to uname -n for
>> the local corosync node name
>> cib: info: crm_get_peer: Node 1084777441 is now known
>> as region-ctrl-2
>> cib: info: qb_ipcs_us_publish: server name: cib_ro
>> cib: info: qb_ipcs_us_publish: server name: cib_rw
>> cib: info: qb_ipcs_us_publish: server name: cib_shm
>> cib: info: pcmk_cpg_membership: Node 1084777441
>> joined group cib (counter=0.0, pid=0, unchecked for rivals)
> --
> Ken Gaillot <kgaillot at redhat.com>
>
> _______________________________________________
> Manage your subscription:
> https://lists.clusterlabs.org/mailman/listinfo/users
>
> ClusterLabs home: https://www.clusterlabs.org/