[Pacemaker] Problems with corosync while forking processes during node startup.

marc rosenbaum da-nuke at web.de
Tue Feb 19 13:29:26 UTC 2013


Hi everyone,
I am using SLES 11 SP2 (64-bit) with corosync 1.4.5 and Pacemaker 1.1.6.
The cluster is a simple two-node configuration with NFS, Samba, one cluster IP and sfex for service fencing.
This cluster did a good job, but a few days ago one cluster node hung during the shutdown procedure. Because of this I had to reboot the node. Since then the node has been causing trouble while trying to come back online.

To me it looks like corosync has problems forking the processes cib, crmd, attrd, pengine and mgmtd.
If I start these processes manually as root they do their job, but when they are started via corosync they do not.
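
For reference, this is roughly how I start them by hand (the paths are an assumption based on the log below, where stonithd is invoked from /usr/lib64/heartbeat; I assume the other daemons live in the same directory on SLES 11 SP2):

   # start the Pacemaker daemons by hand, the same binaries corosync forks
   /usr/lib64/heartbeat/cib &
   /usr/lib64/heartbeat/attrd &
   /usr/lib64/heartbeat/pengine &
   /usr/lib64/heartbeat/crmd &

   # a few seconds later, check whether they are still running
   ps -ef | egrep 'cib|crmd|attrd|pengine|mgmtd' | grep -v grep

Note that according to the log corosync spawns cib, attrd, pengine and crmd as user hacluster rather than root, so that may be a relevant difference between the two cases.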

I tried switching the communication from multicast to broadcast to rule out a network problem. Corosync seems to be able to communicate between the nodes with either configuration.
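
For what it is worth, ring status and membership can be checked on each node with the tools that ship with corosync 1.4, for example:

   # show the status of each configured ring
   corosync-cfgtool -s

   # dump the current member entries from the object database
   corosync-objctl | grep member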

Attached you will find the configuration and the related log file.

It would be great if someone could give me a hint on how to solve this problem.
Thanks

Marc

corosync.conf:

# Please read the corosync.conf.5 manual page
compatibility: whitetank

aisexec {
         # Run as root - this is necessary to be able to manage
         # resources with Pacemaker
         user:           root
         group:          root
}

service {
         # Load the Pacemaker Cluster Resource Manager
         ver:            0
         name:           pacemaker
         use_mgmtd:      yes
         use_logd:       yes
}

totem {
         # The only valid version is 2
         version:        2

         # How long before declaring a token lost (ms)
         token:          5000

         # How many token retransmits before forming a new configuration
         token_retransmits_before_loss_const: 10

         # How long to wait for join messages in the membership protocol (ms)
         join:           60

         # How long to wait for consensus to be achieved before starting
         # a new round of membership configuration (ms)
         consensus:      6000

         # Turn off the virtual synchrony filter
         vsftype:        none

         # Number of messages that may be sent by one processor on
         # receipt of the token
         max_messages:   20

         # Limit generated nodeids to 31-bits (positive signed integers)
         clear_node_high_bit: yes

         # Enable encryption
         secauth:        on

         # How many threads to use for encryption/decryption
         threads:        0

         # Optionally assign a fixed node id (integer)
         # nodeid:       1234

         interface {
                 ringnumber: 0
                 bindnetaddr: 10.10.36.0
                 broadcast: yes
#               mcastaddr: 226.94.1.1
#                mcastaddr: 239.255.0.11
                 mcastport: 5405
                 ttl: 1
         }
}

logging {
         fileline: off
         to_stderr: no
         to_logfile: no
         to_syslog: yes
#       syslog_facility: daemon
         syslog_facility: local3
         debug: on
         timestamp: off
         logger_subsys {
                 subsys: AMF
                 debug: off
         }
}

amf {
         mode: disabled
}




Logfile:

Feb 15 09:23:02 server3 corosync[22666]:  [MAIN  ] Corosync Cluster 
Engine ('1.4.5'): started and ready to provide service.
Feb 15 09:23:02 server3 corosync[22666]:  [MAIN  ] Corosync built-in 
features: nss rdma
Feb 15 09:23:02 server3 corosync[22666]:  [MAIN  ] Successfully 
configured openais services to load
Feb 15 09:23:02 server3 corosync[22666]:  [MAIN  ] Successfully read 
main configuration file '/etc/corosync/corosync.conf'.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] waiting_trans_ack 
changed to 1
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Token Timeout (5000 
ms) retransmit timeout (490 ms)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] token hold (382 ms) 
retransmits before loss (10 retrans)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] join (60 ms) 
send_join (0 ms) consensus (6000 ms) merge (200 ms)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] downcheck (1000 ms) 
fail to recv const (2500 msgs)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] seqno unchanged const 
(30 rotations) Maximum network MTU 1402
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] window size per 
rotation (50 messages) maximum messages per rotation (20 messages)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] missed count const (5 
messages)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] send threads (0 threads)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] RRP token expired 
timeout (490 ms)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] RRP token problem 
counter (2000 ms)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] RRP threshold (10 
problem count)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] RRP multicast 
threshold (100 problem count)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] RRP automatic
recovery check timeout (1000 ms)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] RRP mode set to none.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] 
heartbeat_failures_allowed (0)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] max_network_delay (50 ms)
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] HeartBeat is 
Disabled. To enable set heartbeat_failures_allowed > 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Initializing 
transport (UDP/IP Multicast).
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Initializing 
transmit/receive security: libtomcrypt SOBER128/SHA1HMAC (mode 0).
Feb 15 09:23:02 server3 corosync[22666]:  [IPC   ] you are using ipc api v2
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Receive multicast 
socket recv buffer size (262142 bytes).
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Transmit multicast 
socket send buffer size (262142 bytes).
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Local receive 
multicast loop socket recv buffer size (262142 bytes).
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Local transmit 
multicast loop socket send buffer size (262142 bytes).
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] The network interface 
[10.10.36.1] is now up.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Created or loaded 
sequence id 350.10.10.36.1 for this ring.
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: openais cluster membership service B.01.01
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Evt exec init request
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: openais event service B.01.01
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: openais checkpoint service B.01.01
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: openais availability management framework B.01.01
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_exec_init_fn
Feb 15 09:23:02 server3 stonith-ng: [22672]: info: Invoked: 
/usr/lib64/heartbeat/stonithd
Feb 15 09:23:02 server3 lrmd: [22674]: info: Signal sent to pid=4825, 
waiting for process to exit
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: openais message service B.03.01
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: lck_exec_init_fn
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: openais distributed locking service B.03.01
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: openais timer service A.01.01
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_user_lookup: Cluster user root has uid=0 gid=0
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
process_ais_conf: Reading configure
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_init: Local handle: 8535092201842016258 for logging
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_next: Processing additional logging options...
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: get_config_opt: 
Found 'on' for option: debug
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: get_config_opt: 
Found 'no' for option: to_logfile
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: get_config_opt: 
Found 'yes' for option: to_syslog
Feb 15 09:23:02 server3 lrmd: [4825]: info: lrmd is shutting down
Feb 15 09:23:02 server3 stonith-ng: [22672]: info: crm_log_init_worker: 
Changed active directory to /var/lib/heartbeat/cores/root
Feb 15 09:23:02 server3 lrmd: [4825]: debug: [lrmd] stopped
Feb 15 09:23:02 server3 stonith-ng: [22672]: info: get_cluster_type: 
Cluster type is: 'openais'
Feb 15 09:23:02 server3 stonith-ng: [22672]: notice: 
crm_cluster_connect: Connecting to cluster infrastructure: classic 
openais (with plugin)
Feb 15 09:23:02 server3 stonith-ng: [22672]: info: 
init_ais_connection_classic: Creating connection to our Corosync plugin
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: get_config_opt: 
Found 'local3' for option: syslog_facility
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_init: Local handle: 8054506479773810691 for quorum
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_next: No additional configuration supplied for: quorum
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: get_config_opt: 
No default for option: provider
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_init: Local handle: 7664968412203843588 for service
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_next: Processing additional service options...
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_next: Processing additional service options...
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_next: Processing additional service options...
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_next: Processing additional service options...
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_next: Processing additional service options...
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_next: Processing additional service options...
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_next: Processing additional service options...
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
config_find_next: Processing additional service options...
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: get_config_opt: 
Found '0' for option: ver
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: get_config_opt: 
Defaulting to 'pcmk' for option: clustername
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: get_config_opt: 
Found 'yes' for option: use_logd
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: get_config_opt: 
Found 'yes' for option: use_mgmtd
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: pcmk_startup: 
CRM: Initialized
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Logging: Initialized 
pcmk_startup
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: pcmk_startup: 
Maximum core file size is: 18446744073709551615
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_user_lookup: Cluster user hacluster has uid=90 gid=90
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: pcmk_startup: 
Service: 9
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: pcmk_startup: 
Local hostname: server3
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
pcmk_update_nodeid: Local node id: 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Creating entry for node 824445450 born on 0
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: update_member: 
0x697310 Node 824445450 now known as server3 (was: (null))
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has 1 quorum votes (was 0)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node 824445450/server3 is now: member
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: spawn_child: 
Forked child 22672 for process stonith-ng
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_user_lookup: Cluster user hacluster has uid=90 gid=90
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: spawn_child: 
Forked child 22673 for process cib
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: spawn_child: 
Forked child 22674 for process lrmd
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_user_lookup: Cluster user hacluster has uid=90 gid=90
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: spawn_child: 
Forked child 22675 for process attrd
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_user_lookup: Cluster user hacluster has uid=90 gid=90
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: spawn_child: 
Forked child 22676 for process pengine
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_user_lookup: Cluster user hacluster has uid=90 gid=90
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: spawn_child: 
Forked child 22677 for process crmd
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: spawn_child: 
Forked child 22678 for process mgmtd
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: Pacemaker Cluster Manager 1.1.6
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: corosync extended virtual synchrony service
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: corosync configuration service
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: corosync cluster closed process group service v1.01
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: corosync cluster config database access v1.01
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: corosync profile loading service
Feb 15 09:23:02 server3 corosync[22666]:  [SERV  ] Service engine 
loaded: corosync cluster quorum service v0.1
Feb 15 09:23:02 server3 corosync[22666]:  [MAIN  ] Compatibility mode 
set to whitetank.  Using V1 and V2 of the synchronization engine.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] entering GATHER state 
from 15.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Creating commit token 
because I am the rep.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Saving state aru 0 
high seq received 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Storing new sequence 
id for ring 354
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] entering COMMIT state.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] got commit token
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] entering RECOVERY state.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] position [0] member 
10.10.36.1:
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] previous ring seq 350 
rep 10.10.36.1
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] aru 0 high delivered 
0 received flag 1
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Did not need to 
originate any messages in recovery.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] got commit token
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Sending initial ORF token
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] token retrans flag is 
0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] install seq 0 aru 0 
high seq received 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] token retrans flag is 
0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] install seq 0 aru 0 
high seq received 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] token retrans flag is 
0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] install seq 0 aru 0 
high seq received 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] token retrans flag is 
0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] install seq 0 aru 0 
high seq received 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] retrans flag count 4 
token aru 0 install seq 0 aru 0 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Resetting old ring state
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] recovery to regular 1-0
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] CLM CONFIGURATION CHANGE
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] New Configuration:
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] Members Left:
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] Members Joined:
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Evt conf change 1
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] m 0, j 0 l 0
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: lck_confchg_fn
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_confchg_fn
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] notice: 
pcmk_peer_update: Transitional membership event on ring 852: memb=0, 
new=0, lost=0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] waiting_trans_ack 
changed to 1
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] CLM CONFIGURATION CHANGE
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] New Configuration:
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ]     r(0) ip(10.10.36.1)
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] Members Left:
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] Members Joined:
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ]     r(0) ip(10.10.36.1)
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Evt conf change 0
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] m 1, j 1 l 0
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: lck_confchg_fn
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_confchg_fn
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] notice: 
pcmk_peer_update: Stable membership event on ring 852: memb=1, new=1, lost=0
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
pcmk_peer_update: NEW:  server3 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_peer_update: Node 824445450 has address r(0) ip(10.10.36.1)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
pcmk_peer_update: MEMB: server3 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Leaving born-on unset: 852
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Local update: id=824445450, born=0, seq=852
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000151312 
(1381138)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] This node is within 
the primary component and will provide service.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] entering OPERATIONAL 
state.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] A processor joined or 
left the membership and a new membership was formed.
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server3 (1.1.6)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 1
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
barrier completed
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
actions starting for (openais cluster membership service B.01.01)
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] got nodejoin message 
10.10.36.1
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 1
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
barrier completed
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Committing 
synchronization for (openais cluster membership service B.01.01)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
actions starting for (dummy AMF service)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 1
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
barrier completed
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Committing 
synchronization for (dummy AMF service)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
actions starting for (openais checkpoint service B.01.01)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 1
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
barrier completed
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Committing 
synchronization for (openais checkpoint service B.01.01)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
actions starting for (openais event service B.01.01)
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Evt synchronize 
initialization
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] My node ID r(0) 
ip(10.10.36.1)
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Process Evt 
synchronization
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Send max event ID updates
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Process Evt 
synchronization
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Send open count updates
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] DONE Sending open counts
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Remote channel 
operation request
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] my node ID: 0x31240a0a
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Receive 
EVT_CONF_CHANGE_DONE from nodeid r(0) ip(10.10.36.1)  members 1 checked in 1
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Process Evt 
synchronization
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] DONE Sending retained 
events
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Remote channel 
operation request
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] my node ID: 0x31240a0a
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Receive EVT_CONF_DONE 
from nodeid r(0) ip(10.10.36.1) , members 1 checked in 1
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Process Evt 
synchronization
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Recovery complete
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 1
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
barrier completed
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Evt synchronize 
activation
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Committing 
synchronization for (openais event service B.01.01)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
actions starting for (corosync cluster closed process group service v1.01)
Feb 15 09:23:02 server3 corosync[22666]:  [CPG   ] comparing: sender 
r(0) ip(10.10.36.1) ; members(old:0 left:0)
Feb 15 09:23:02 server3 corosync[22666]:  [CPG   ] chosen downlist: 
sender r(0) ip(10.10.36.1) ; members(old:0 left:0)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 1
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
barrier completed
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Committing 
synchronization for (corosync cluster closed process group service v1.01)
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: lck_sync_init
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: 
lck_sync_resource_lock_timer_stop
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: lck_sync_process
Feb 15 09:23:02 server3 corosync[22666]:  [SYNCV2] Committing 
synchronization for openais distributed locking service B.03.01
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: 
lck_sync_activate
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: 
lck_sync_resource_free
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: 
lck_sync_resource_lock_timer_start
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: 
  global_lock_count = 0
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_sync_init
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_queue_enter
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_sync_process
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_queue_iterate
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_group_enter
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_sync_process
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_group_iterate
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_reply_enter
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_sync_process
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_reply_iterate
Feb 15 09:23:02 server3 corosync[22666]:  [SYNCV2] Committing 
synchronization for openais message service B.03.01
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_activate
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_queue_free
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_group_free
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_reply_free
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_queue_timer_restart
Feb 15 09:23:02 server3 corosync[22666]:  [SYNCV2] Committing 
synchronization for openais availability management framework B.01.01
Feb 15 09:23:02 server3 corosync[22666]:  [MAIN  ] Completed service 
synchronization, ready to provide service.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] waiting_trans_ack 
changed to 0
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: pcmk_ipc: 
Recorded connection 0x6a9370 for stonith-ng/22672
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
process_ais_message: Msg[0] (dest=local:ais, 
from=server3:stonith-ng.22672, remote=true, size=6): 22672
Feb 15 09:23:02 server3 stonith-ng: [22672]: debug: 
init_ais_connection_classic: Adding fd=4 to mainloop
Feb 15 09:23:02 server3 stonith-ng: [22672]: info: 
init_ais_connection_classic: AIS connection established
Feb 15 09:23:02 server3 stonith-ng: [22672]: info: get_ais_nodeid: 
Server details: id=824445450 uname=server3 cname=pcmk
Feb 15 09:23:02 server3 stonith-ng: [22672]: info: 
init_ais_connection_once: Connection to 'classic openais (with plugin)': 
established
Feb 15 09:23:02 server3 stonith-ng: [22672]: debug: crm_new_peer:
Creating entry for node server3/824445450
Feb 15 09:23:02 server3 stonith-ng: [22672]: info: crm_new_peer: Node 
server3 now has id: 824445450
Feb 15 09:23:02 server3 stonith-ng: [22672]: info: crm_new_peer: Node 
824445450 is now known as server3
Feb 15 09:23:02 server3 stonith-ng: [22672]: info: main: Starting 
stonith-ng mainloop
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] entering GATHER state 
from 11.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Creating commit token 
because I am the rep.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Saving state aru 10 
high seq received 10
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Storing new sequence 
id for ring 358
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] entering COMMIT state.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] got commit token
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] entering RECOVERY state.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] TRANS [0] member 
10.10.36.1:
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] position [0] member 
10.10.36.1:
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] previous ring seq 354 
rep 10.10.36.1
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] aru 10 high delivered 
10 received flag 1
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] position [1] member 
10.10.36.2:
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] previous ring seq 354 
rep 10.10.36.2
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] aru 11 high delivered 
11 received flag 1
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Did not need to 
originate any messages in recovery.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] got commit token
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Sending initial ORF token
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] token retrans flag is 
0 my set retrans flag0 retrans queue empty 1 count 0, aru 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] install seq 0 aru 0 
high seq received 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] token retrans flag is 
0 my set retrans flag0 retrans queue empty 1 count 1, aru 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] install seq 0 aru 0 
high seq received 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] token retrans flag is 
0 my set retrans flag0 retrans queue empty 1 count 2, aru 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] install seq 0 aru 0 
high seq received 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] token retrans flag is 
0 my set retrans flag0 retrans queue empty 1 count 3, aru 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] install seq 0 aru 0 
high seq received 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] retrans flag count 4 
token aru 0 install seq 0 aru 0 0
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] Resetting old ring state
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] recovery to regular 1-0
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] CLM CONFIGURATION CHANGE
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] New Configuration:
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ]     r(0) ip(10.10.36.1)
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] Members Left:
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] Members Joined:
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Evt conf change 1
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] m 1, j 0 l 0
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: lck_confchg_fn
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_confchg_fn
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] notice: 
pcmk_peer_update: Transitional membership event on ring 856: memb=1, 
new=0, lost=0
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
pcmk_peer_update: memb: server3 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] waiting_trans_ack 
changed to 1
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] CLM CONFIGURATION CHANGE
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] New Configuration:
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ]     r(0) ip(10.10.36.1)
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ]     r(0) ip(10.10.36.2)
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] Members Left:
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] Members Joined:
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ]     r(0) ip(10.10.36.2)
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Evt conf change 0
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] m 2, j 1 l 0
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: lck_confchg_fn
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_confchg_fn
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] notice: 
pcmk_peer_update: Stable membership event on ring 856: memb=2, new=1, lost=0
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Creating entry for node 841222666 born on 856
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node 841222666/unknown is now: member
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
pcmk_peer_update: NEW:  .pending. 841222666
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_peer_update: Node 841222666 has address r(0) ip(10.10.36.2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
pcmk_peer_update: MEMB: server3 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
pcmk_peer_update: MEMB: .pending. 841222666
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_peer_update: 1 nodes changed
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
send_member_notification: Sending membership update 856 to 0 children
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Born-on set to: 856 (peer)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Local update: id=824445450, born=856, seq=856
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: update_member: 
0x697310 Node 824445450 ((null)) born on: 856
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] This node is within 
the primary component and will provide service.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] entering OPERATIONAL 
state.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] A processor joined or 
left the membership and a new membership was formed.
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server4 (1.1.6)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: update_member: 
0x6920b0 Node 841222666 (server4) born on: 620
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: update_member: 
0x6920b0 Node 841222666 now known as server4 (was: (null))
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server4 now has process list: 00000000000000000000000000111312 
(1118994)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server4 now has 1 quorum votes (was 0)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] info: 
send_member_notification: Sending membership update 856 to 0 children
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 2
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 841222666
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 0.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 841222666 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server3 (1.1.6)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 2
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 841222666 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
barrier completed
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
actions starting for (openais cluster membership service B.01.01)
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] got nodejoin message 
10.10.36.1
Feb 15 09:23:02 server3 corosync[22666]:  [CLM   ] got nodejoin message 
10.10.36.2
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.crmd failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[438] 
(dest=local:crmd, from=server4:crmd.30563, remote=true, size=176): 
<create_request_adv origin="post_cache_update" t="crmd" version="3.0.5" 
subt="request" ref
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 2
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 841222666 = 0.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 2
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 841222666
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 841222666 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
barrier completed
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Committing 
synchronization for (openais cluster membership service B.01.01)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
actions starting for (dummy AMF service)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[1234] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=834): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[1235] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=851): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[1236] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=955): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 2
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 841222666
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 0.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 841222666 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 2
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 841222666 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
barrier completed
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Committing 
synchronization for (dummy AMF service)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
actions starting for (openais checkpoint service B.01.01)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[1237] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=1320): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[1238] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=888): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 2
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 841222666 = 0.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 2
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 841222666
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 841222666 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
barrier completed
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Committing 
synchronization for (openais checkpoint service B.01.01)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
actions starting for (openais event service B.01.01)
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Evt synchronize 
initialization
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Process Evt 
synchronization
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Send max event ID updates
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Send set evt ID 0 to 
r(0) ip(10.10.36.1)
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Remote channel 
operation request
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] my node ID: 0x31240a0a
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Received Set event ID 
OP from nodeid 32240a0a to 0 for 31240a0a my addr r(0) ip(10.10.36.1)  
base 1
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Remote channel 
operation request
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] my node ID: 0x31240a0a
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Received Set event ID 
OP from nodeid 32240a0a to 0 for 32240a0a my addr r(0) ip(10.10.36.1)  
base 1
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.crmd failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[439] 
(dest=local:crmd, from=server4:crmd.30563, remote=true, size=176): 
<create_request_adv origin="post_cache_update" t="crmd" version="3.0.5" 
subt="request" ref
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[1239] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=934): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[1240] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=1150): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Remote channel 
operation request
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] my node ID: 0x31240a0a
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Received Set event ID 
OP from nodeid 31240a0a to 0 for 31240a0a my addr r(0) ip(10.10.36.1)  
base 1
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Process Evt 
synchronization
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Send open count updates
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] DONE Sending open counts
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Remote channel 
operation request
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] my node ID: 0x31240a0a
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Receive 
EVT_CONF_CHANGE_DONE from nodeid r(0) ip(10.10.36.2)  members 2 checked in 1
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Remote channel 
operation request
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] my node ID: 0x31240a0a
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Receive 
EVT_CONF_CHANGE_DONE from nodeid r(0) ip(10.10.36.1)  members 2 checked in 2
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] I am oldest in my
transitional config
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Process Evt 
synchronization
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Send retained event 
updates
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Process Evt 
synchronization
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] DONE Sending retained 
events
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[1241] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=955): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Remote channel 
operation request
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] my node ID: 0x31240a0a
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Receive EVT_CONF_DONE 
from nodeid r(0) ip(10.10.36.1) , members 2 checked in 1
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Process Evt 
synchronization
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Wait for retained events
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Remote channel 
operation request
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] my node ID: 0x31240a0a
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Receive EVT_CONF_DONE 
from nodeid r(0) ip(10.10.36.2) , members 2 checked in 2
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Process Evt 
synchronization
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Recovery complete
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 2
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 841222666
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 0.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 841222666 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 2
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 841222666 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
barrier completed
Feb 15 09:23:02 server3 corosync[22666]:  [EVT   ] Evt synchronize 
activation
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Committing 
synchronization for (openais event service B.01.01)
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
actions starting for (corosync cluster closed process group service v1.01)
Feb 15 09:23:02 server3 corosync[22666]:  [CPG   ] comparing: sender 
r(0) ip(10.10.36.2) ; members(old:1 left:0)
Feb 15 09:23:02 server3 corosync[22666]:  [CPG   ] comparing: sender 
r(0) ip(10.10.36.1) ; members(old:1 left:0)
Feb 15 09:23:02 server3 corosync[22666]:  [CPG   ] chosen downlist: 
sender r(0) ip(10.10.36.1) ; members(old:1 left:0)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[1242] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=1096): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 2
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 824445450
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 841222666 = 0.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] confchg entries 2
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier Start 
Received From 841222666
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 824445450 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Barrier completion 
status for nodeid 841222666 = 1.
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Synchronization 
barrier completed
Feb 15 09:23:02 server3 corosync[22666]:  [SYNC  ] Committing 
synchronization for (corosync cluster closed process group service v1.01)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.crmd failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[440] 
(dest=local:crmd, from=server4:crmd.30563, remote=true, size=223): 
<create_request_adv origin="join_make_offer" t="crmd" version="3.0.5" 
subt="request" refer
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:02 server3 corosync[22666]:  [pcmk  ] Msg[1243] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=934): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: lck_sync_init
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: 
lck_sync_resource_lock_timer_stop
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: lck_sync_process
Feb 15 09:23:02 server3 corosync[22666]:  [SYNCV2] Committing 
synchronization for openais distributed locking service B.03.01
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: 
lck_sync_activate
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: 
lck_sync_resource_free
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: 
lck_sync_resource_lock_timer_start
Feb 15 09:23:02 server3 corosync[22666]:  [LCK   ] [DEBUG]: 
  global_lock_count = 0
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_sync_init
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_queue_enter
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_sync_process
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_queue_iterate
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_group_enter
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_sync_process
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_group_iterate
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_reply_enter
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: msg_sync_process
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_reply_iterate
Feb 15 09:23:02 server3 corosync[22666]:  [SYNCV2] Committing 
synchronization for openais message service B.03.01
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_activate
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_queue_free
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_group_free
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_sync_reply_free
Feb 15 09:23:02 server3 corosync[22666]:  [MSG   ] [DEBUG]: 
msg_queue_timer_restart
Feb 15 09:23:02 server3 corosync[22666]:  [SYNCV2] Committing 
synchronization for openais availability management framework B.01.01
Feb 15 09:23:02 server3 corosync[22666]:  [MAIN  ] Completed service 
synchronization, ready to provide service.
Feb 15 09:23:02 server3 corosync[22666]:  [TOTEM ] waiting_trans_ack 
changed to 0
Feb 15 09:23:03 server3 lrmd: [22674]: info: enabling coredumps
Feb 15 09:23:03 server3 lrmd: [22674]: WARN: Core dumps could be lost if 
multiple dumps occur.
Feb 15 09:23:03 server3 lrmd: [22674]: WARN: Consider setting 
non-default value in /proc/sys/kernel/core_pattern (or equivalent) for 
maximum supportability
Feb 15 09:23:03 server3 lrmd: [22674]: WARN: Consider setting 
/proc/sys/kernel/core_uses_pid (or equivalent) to 1 for maximum 
supportability
Feb 15 09:23:03 server3 lrmd: [22674]: debug: main: run the loop...
Feb 15 09:23:03 server3 lrmd: [22674]: info: Started.
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] ERROR: 
pcmk_wait_dispatch: Child process cib exited (pid=22673, rc=100)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] notice: 
pcmk_wait_dispatch: Child process cib no longer wishes to be respawned
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Local update: id=824445450, born=856, seq=856
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000151212 
(1380882)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Local update: id=824445450, born=856, seq=856
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] ERROR: 
pcmk_wait_dispatch: Child process crmd exited (pid=22677, rc=100)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] notice: 
pcmk_wait_dispatch: Child process crmd no longer wishes to be respawned
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Local update: id=824445450, born=856, seq=856
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000151012 
(1380370)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Local update: id=824445450, born=856, seq=856
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] ERROR: 
pcmk_wait_dispatch: Child process attrd exited (pid=22675, rc=100)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] notice: 
pcmk_wait_dispatch: Child process attrd no longer wishes to be respawned
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Local update: id=824445450, born=856, seq=856
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000150012 
(1376274)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Local update: id=824445450, born=856, seq=856
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] ERROR: 
pcmk_wait_dispatch: Child process pengine exited (pid=22676, rc=100)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] notice: 
pcmk_wait_dispatch: Child process pengine no longer wishes to be respawned
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Local update: id=824445450, born=856, seq=856
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000140012 
(1310738)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Local update: id=824445450, born=856, seq=856
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] ERROR: 
pcmk_wait_dispatch: Child process mgmtd exited (pid=22678, rc=100)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] notice: 
pcmk_wait_dispatch: Child process mgmtd no longer wishes to be respawned
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Local update: id=824445450, born=856, seq=856
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000100012 
(1048594)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
send_cluster_id: Local update: id=824445450, born=856, seq=856
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server3 (1.1.6)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000151212 
(1380882)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000100012 
(1048594)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: 
send_member_notification: Sending membership update 856 to 0 children
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server3 (1.1.6)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000151212 
(1380882)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000100012 
(1048594)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: 
send_member_notification: Sending membership update 856 to 0 children
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server3 (1.1.6)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000151012 
(1380370)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000100012 
(1048594)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: 
send_member_notification: Sending membership update 856 to 0 children
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server3 (1.1.6)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000151012 
(1380370)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000100012 
(1048594)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: 
send_member_notification: Sending membership update 856 to 0 children
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server3 (1.1.6)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000150012 
(1376274)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000100012 
(1048594)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: 
send_member_notification: Sending membership update 856 to 0 children
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server3 (1.1.6)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000150012 
(1376274)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000100012 
(1048594)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: 
send_member_notification: Sending membership update 856 to 0 children
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server3 (1.1.6)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000140012 
(1310738)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000100012 
(1048594)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: 
send_member_notification: Sending membership update 856 to 0 children
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server3 (1.1.6)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000140012 
(1310738)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: update_member: 
Node server3 now has process list: 00000000000000000000000000100012 
(1048594)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] info: 
send_member_notification: Sending membership update 856 to 0 children
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server3 (1.1.6)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] debug: 
pcmk_cluster_id_callback: Node update: server3 (1.1.6)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.crmd failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[443] 
(dest=local:crmd, from=server4:crmd.30563, remote=true, size=176): 
<create_request_adv origin="post_cache_update" t="crmd" version="3.0.5" 
subt="request" ref
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1244] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=955): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1245] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=1096): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.crmd failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[444] 
(dest=local:crmd, from=server4:crmd.30563, remote=true, size=176): 
<create_request_adv origin="post_cache_update" t="crmd" version="3.0.5" 
subt="request" ref
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1246] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=934): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1247] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=1151): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1248] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=955): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1249] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=1097): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.crmd failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[445] 
(dest=local:crmd, from=server4:crmd.30563, remote=true, size=176): 
<create_request_adv origin="post_cache_update" t="crmd" version="3.0.5" 
subt="request" ref
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1250] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=934): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1251] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=955): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1252] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=1097): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.crmd failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[446] 
(dest=local:crmd, from=server4:crmd.30563, remote=true, size=176): 
<create_request_adv origin="post_cache_update" t="crmd" version="3.0.5" 
subt="request" ref
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1253] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=934): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1254] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=955): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1255] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=1097): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.crmd failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[447] 
(dest=local:crmd, from=server4:crmd.30563, remote=true, size=176): 
<create_request_adv origin="post_cache_update" t="crmd" version="3.0.5" 
subt="request" ref
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1256] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=934): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1257] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=955): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1258] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=1097): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1259] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=934): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1260] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=9090): <copy 
__name__="cib_command" t="cib" 
cib_clientid="b7b07259-f8b4-4a2d-a66d-db2947a1cb36" c
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1261] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=886): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1262] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=4495): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1263] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=8505): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1264] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=955): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1265] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=1401): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1266] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=872): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="5297282d-f542-4178-89ab-2750df43
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.cib failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[1267] 
(dest=local:cib, from=server4:cib.30559, remote=true, size=913): 
<cib_command __name__="cib_command" t="cib" 
cib_async_id="90e34654-1b5c-48be-8acd-3a748f8a
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] WARN: 
route_ais_message: Sending message to local.attrd failed: ipc delivery 
failed (rc=-2)
Feb 15 09:23:03 server3 corosync[22666]:  [pcmk  ] Msg[34] 
(dest=local:attrd, from=server4:attrd.30561, remote=true, size=179):
<attrd_trigger_update t="attrd" src="server4" task="flush" 
attr_name="probe_complete" at